Dataset fields: id (string, 2–8 characters), url (string, 31–117 characters), title (string, 1–71 characters), text (string, 153–118k characters), topic (string, 4 classes), section (string, 4–49 characters), sublist (string, 9 classes).
61577
https://en.wikipedia.org/wiki/Electrical%20resistance%20and%20conductance
Electrical resistance and conductance
The electrical resistance of an object is a measure of its opposition to the flow of electric current. Its reciprocal quantity is electrical conductance, measuring the ease with which an electric current passes. Electrical resistance shares some conceptual parallels with mechanical friction. The SI unit of electrical resistance is the ohm (Ω), while electrical conductance is measured in siemens (S) (formerly called the 'mho' and then represented by ℧). The resistance of an object depends in large part on the material it is made of. Objects made of electrical insulators like rubber tend to have very high resistance and low conductance, while objects made of electrical conductors like metals tend to have very low resistance and high conductance. This relationship is quantified by resistivity or conductivity. The nature of a material is not the only factor in resistance and conductance, however; it also depends on the size and shape of an object because these properties are extensive rather than intensive. For example, a wire's resistance is higher if it is long and thin, and lower if it is short and thick. All objects resist electrical current, except for superconductors, which have a resistance of zero. The resistance R of an object is defined as the ratio of the voltage V across it to the current I through it, while the conductance G is the reciprocal: R = V/I and G = I/V = 1/R. For a wide variety of materials and conditions, V and I are directly proportional to each other, and therefore R and G are constants (although they will depend on the size and shape of the object, the material it is made of, and other factors like temperature or strain). This proportionality is called Ohm's law, and materials that satisfy it are called ohmic materials. In other cases, such as a transformer, diode or battery, V and I are not directly proportional. The ratio V/I is sometimes still useful, and is referred to as a chordal resistance or static resistance, since it corresponds to the inverse slope of a chord between the origin and an I–V curve. In other situations, the derivative dV/dI may be most useful; this is called the differential resistance. Introduction In the hydraulic analogy, current flowing through a wire (or resistor) is like water flowing through a pipe, and the voltage drop across the wire is like the pressure drop that pushes water through the pipe. Conductance is proportional to how much flow occurs for a given pressure, and resistance is proportional to how much pressure is required to achieve a given flow. The voltage drop (i.e., the difference between the voltages on one side of the resistor and the other), not the voltage itself, provides the driving force pushing current through a resistor. In hydraulics, it is similar: the pressure difference between two sides of a pipe, not the pressure itself, determines the flow through it. For example, there may be a large water pressure above the pipe, which tries to push water down through the pipe. But there may be an equally large water pressure below the pipe, which tries to push water back up through the pipe. If these pressures are equal, no water flows. The resistance and conductance of a wire, resistor, or other element are mostly determined by two properties: geometry (shape) and material. Geometry is important because it is more difficult to push water through a long, narrow pipe than a wide, short pipe. In the same way, a long, thin copper wire has higher resistance (lower conductance) than a short, thick copper wire. Materials are important as well.
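Before looking at materials more closely, the defining ratios above can be illustrated numerically. The following is a minimal Python sketch (with arbitrary illustrative values, not data for any real device) contrasting the static (chordal) resistance V/I with the differential resistance dV/dI; for an ohmic element the two agree, for a non-ohmic element they do not.

# Minimal sketch of static vs. differential resistance (illustrative values only).

def static_resistance(v, i):
    """Chordal resistance: ratio of voltage to current at one operating point."""
    return v / i

def differential_resistance(v_of_i, i, di=1e-6):
    """Numerical derivative dV/dI around the operating point i."""
    return (v_of_i(i + di) - v_of_i(i - di)) / (2 * di)

# Ohmic element: V = R * I, so static and differential resistance agree.
ohmic = lambda i: 100.0 * i                     # a 100-ohm resistor
print(static_resistance(ohmic(0.05), 0.05))     # 100.0
print(differential_resistance(ohmic, 0.05))     # ~100.0

# Non-ohmic element (hypothetical V = k * I**2 curve, purely for illustration):
nonohmic = lambda i: 40.0 * i**2
print(static_resistance(nonohmic(0.5), 0.5))    # 20.0  (chordal / static resistance)
print(differential_resistance(nonohmic, 0.5))   # ~40.0 (differential resistance)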
A pipe filled with hair restricts the flow of water more than a clean pipe of the same shape and size. Similarly, electrons can flow freely and easily through a copper wire, but cannot flow as easily through a steel wire of the same shape and size, and they essentially cannot flow at all through an insulator like rubber, regardless of its shape. The difference between copper, steel, and rubber is related to their microscopic structure and electron configuration, and is quantified by a property called resistivity. In addition to geometry and material, there are various other factors that influence resistance and conductance, such as temperature; see below. Conductors and resistors Substances in which electricity can flow are called conductors. A piece of conducting material of a particular resistance meant for use in a circuit is called a resistor. Conductors are made of high-conductivity materials such as metals, in particular copper and aluminium. Resistors, on the other hand, are made of a wide variety of materials depending on factors such as the desired resistance, the amount of energy that it needs to dissipate, precision, and cost. Ohm's law For many materials, the current I through the material is proportional to the voltage V applied across it (I ∝ V) over a wide range of voltages and currents. Therefore, the resistance and conductance of objects or electronic components made of these materials are constant. This relationship is called Ohm's law, and materials which obey it are called ohmic materials. Examples of ohmic components are wires and resistors. The current–voltage graph of an ohmic device consists of a straight line through the origin with positive slope. Other components and materials used in electronics do not obey Ohm's law; the current is not proportional to the voltage, so the resistance varies with the voltage and current through them. These are called nonlinear or non-ohmic. Examples include diodes and fluorescent lamps. Relation to resistivity and conductivity The resistance of a given object depends primarily on two factors: what material it is made of, and its shape. For a given material, the resistance is inversely proportional to the cross-sectional area; for example, a thick copper wire has lower resistance than an otherwise-identical thin copper wire. Also, for a given material, the resistance is proportional to the length; for example, a long copper wire has higher resistance than an otherwise-identical short copper wire. The resistance R and conductance G of a conductor of uniform cross section can therefore be computed as R = ρℓ/A and G = σA/ℓ, where ℓ is the length of the conductor, measured in metres (m), A is the cross-sectional area of the conductor, measured in square metres (m²), σ (sigma) is the electrical conductivity measured in siemens per metre (S·m⁻¹), and ρ (rho) is the electrical resistivity (also called specific electrical resistance) of the material, measured in ohm-metres (Ω·m). The resistivity and conductivity are proportionality constants, and therefore depend only on the material the wire is made of, not the geometry of the wire. Resistivity and conductivity are reciprocals: ρ = 1/σ. Resistivity is a measure of the material's ability to oppose electric current. This formula is not exact, as it assumes the current density is totally uniform in the conductor, which is not always true in practical situations. However, this formula still provides a good approximation for long thin conductors such as wires.
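As a rough illustration of the uniform-conductor formula R = ρℓ/A, the short Python sketch below estimates the resistance of a copper wire; the wire dimensions are arbitrary example values, and the resistivity of copper (about 1.68×10⁻⁸ Ω·m at room temperature) is a typical handbook figure.

import math

# Sketch: resistance of a uniform copper wire from R = rho * L / A.
rho_copper = 1.68e-8          # ohm-metre, typical room-temperature handbook value
length = 10.0                 # metres (example value)
diameter = 1.0e-3             # metres (1 mm, example value)

area = math.pi * (diameter / 2) ** 2          # cross-sectional area in m^2
resistance = rho_copper * length / area       # ohms
conductance = 1.0 / resistance                # siemens

print(f"R = {resistance:.3f} ohm, G = {conductance:.1f} S")
# Doubling the length doubles R; doubling the diameter quarters it,
# matching the proportionalities described above.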
Another situation for which this formula is not exact is with alternating current (AC), because the skin effect inhibits current flow near the center of the conductor. For this reason, the geometrical cross-section is different from the effective cross-section in which current actually flows, so resistance is higher than expected. Similarly, if two conductors near each other carry AC current, their resistances increase due to the proximity effect. At commercial power frequency, these effects are significant for large conductors carrying large currents, such as busbars in an electrical substation, or large power cables carrying more than a few hundred amperes. The resistivity of different materials varies by an enormous amount: for example, the conductivity of teflon is about 10³⁰ times lower than the conductivity of copper. Loosely speaking, this is because metals have large numbers of "delocalized" electrons that are not stuck in any one place, so they are free to move across large distances. In an insulator, such as Teflon, each electron is tightly bound to a single molecule, so a great force is required to pull it away. Semiconductors lie between these two extremes. More details can be found in the article Electrical resistivity and conductivity. For the case of electrolyte solutions, see the article Conductivity (electrolytic). Resistivity varies with temperature. In semiconductors, resistivity also changes when exposed to light. See below. Measurement An instrument for measuring resistance is called an ohmmeter. Simple ohmmeters cannot measure low resistances accurately because the resistance of their measuring leads causes a voltage drop that interferes with the measurement, so more accurate devices use four-terminal sensing. Typical values Static and differential resistance Many electrical elements, such as diodes and batteries, do not satisfy Ohm's law. These are called non-ohmic or non-linear, and their current–voltage curves are not straight lines through the origin. Resistance and conductance can still be defined for non-ohmic elements. However, unlike ohmic resistance, non-linear resistance is not constant but varies with the voltage or current through the device; i.e., its operating point. There are two types of resistance: static (chordal) resistance, the ratio V/I at the operating point, and differential resistance, the derivative dV/dI at that point. AC circuits Impedance and admittance When an alternating current flows through a circuit, the relation between current and voltage across a circuit element is characterized not only by the ratio of their magnitudes, but also the difference in their phases. For example, in an ideal resistor, the moment when the voltage reaches its maximum, the current also reaches its maximum (current and voltage are oscillating in phase). But for a capacitor or inductor, the maximum current flow occurs as the voltage passes through zero and vice versa (current and voltage are oscillating 90° out of phase). Complex numbers are used to keep track of both the phase and magnitude of current and voltage: u(t) = Re(U·e^{jωt}) and i(t) = Re(I·e^{jωt}), with complex amplitudes U = U₀·e^{jφ} and I = I₀, so that U = Z·I and I = Y·U, where: t is time; u(t) and i(t) are the voltage and current as a function of time, respectively; U₀ and I₀ indicate the amplitude of the voltage and current, respectively; ω is the angular frequency of the AC current; φ is the displacement angle between voltage and current; U and I are the complex-valued voltage and current, respectively; Z and Y are the complex impedance and admittance, respectively; Re indicates the real part of a complex number; and j is the imaginary unit.
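A minimal Python sketch of this phasor bookkeeping may help; Python's built-in complex type plays the role of the complex quantities above, and the component values are arbitrary examples. It shows that an ideal resistor keeps voltage and current in phase, while a capacitor shifts them by 90°.

import cmath, math

f = 50.0                      # hertz, example mains frequency
omega = 2 * math.pi * f
R = 100.0                     # ohms (example)
C = 10e-6                     # farads (example)

Z_resistor  = complex(R, 0)             # Z = R
Z_capacitor = 1 / (1j * omega * C)      # Z = 1/(j*omega*C)

I_phasor = 1.0                          # 1 A current phasor taken as the reference
for name, Z in [("resistor", Z_resistor), ("capacitor", Z_capacitor)]:
    U = Z * I_phasor                    # complex voltage phasor, U = Z * I
    phase_deg = math.degrees(cmath.phase(U))
    print(f"{name}: |Z| = {abs(Z):.1f} ohm, voltage leads current by {phase_deg:.0f} deg")
# resistor: 0 deg (in phase); capacitor: -90 deg (the voltage lags the current).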
The impedance and admittance may be expressed as complex numbers that can be broken into real and imaginary parts: Z = R + jX and Y = G + jB, where R is resistance, G is conductance, X is reactance, and B is susceptance. These lead to the complex number identities R = G/(G² + B²) and X = −B/(G² + B²), which are true in all cases, whereas R = 1/G is only true in the special cases of either DC or reactance-free current. The complex angle arg(Z) is the phase difference between the voltage and current passing through a component with impedance Z. For capacitors and inductors, this angle is exactly −90° or +90°, respectively, and X and B are nonzero. Ideal resistors have an angle of 0°, since X is zero (and hence B also), and Z and Y reduce to R and G respectively. In general, AC systems are designed to keep the phase angle close to 0° as much as possible, since it reduces the reactive power, which does no useful work at a load. In a simple case with an inductive load (causing the phase to increase), a capacitor may be added for compensation at one frequency, since the capacitor's phase shift is negative, bringing the total impedance phase closer to 0° again. Y is the reciprocal of Z (Y = 1/Z) for all circuits, just as G = 1/R for DC circuits containing only resistors, or for AC circuits in which either the reactance or susceptance happens to be zero (X = 0 or B = 0, respectively) (if one is zero, then for realistic systems both must be zero). Frequency dependence A key feature of AC circuits is that the resistance and conductance can be frequency-dependent, a phenomenon known as the universal dielectric response. One reason, mentioned above, is the skin effect (and the related proximity effect). Another reason is that the resistivity itself may depend on frequency (see Drude model, deep-level traps, resonant frequency, Kramers–Kronig relations, etc.). Energy dissipation and Joule heating Resistors (and other elements with resistance) oppose the flow of electric current; therefore, electrical energy is required to push current through the resistance. This electrical energy is dissipated, heating the resistor in the process. This is called Joule heating (after James Prescott Joule), also called ohmic heating or resistive heating. The dissipation of electrical energy is often undesired, particularly in the case of transmission losses in power lines. High voltage transmission helps reduce the losses by reducing the current for a given power. On the other hand, Joule heating is sometimes useful, for example in electric stoves and other electric heaters (also called resistive heaters). As another example, incandescent lamps rely on Joule heating: the filament is heated to such a high temperature that it glows "white hot" with thermal radiation (also called incandescence). The formula for Joule heating is P = I²R, where P is the power (energy per unit time) converted from electrical energy to thermal energy, R is the resistance, and I is the current through the resistor. Dependence on other conditions Temperature dependence Near room temperature, the resistivity of metals typically increases as temperature is increased, while the resistivity of semiconductors typically decreases as temperature is increased. The resistivity of insulators and electrolytes may increase or decrease depending on the system. For the detailed behavior and explanation, see Electrical resistivity and conductivity. As a consequence, the resistance of wires, resistors, and other components often changes with temperature. This effect may be undesired, causing an electronic circuit to malfunction at extreme temperatures. In some cases, however, the effect is put to good use.
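Stepping back briefly to the Joule-heating formula P = I²R above, a short numeric Python sketch shows why high-voltage transmission reduces line losses; all numbers are arbitrary example values.

# Sketch: Joule heating P = I^2 * R in a transmission line (example values only).
line_resistance = 5.0        # ohms of wire resistance (example)
power_delivered = 1.0e6      # 1 MW to be delivered (example)

for voltage in (10e3, 100e3):             # 10 kV vs 100 kV transmission
    current = power_delivered / voltage   # I = P / V for the same delivered power
    loss = current ** 2 * line_resistance # Joule heating in the line
    print(f"{voltage/1e3:.0f} kV: I = {current:.0f} A, line loss = {loss/1e3:.1f} kW")
# Raising the voltage tenfold cuts the current tenfold and the I^2*R loss a hundredfold.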
When temperature-dependent resistance of a component is used purposefully, the component is called a resistance thermometer or thermistor. (A resistance thermometer is made of metal, usually platinum, while a thermistor is made of ceramic or polymer.) Resistance thermometers and thermistors are generally used in two ways. First, they can be used as thermometers: by measuring the resistance, the temperature of the environment can be inferred. Second, they can be used in conjunction with Joule heating (also called self-heating): if a large current is running through the resistor, the resistor's temperature rises and therefore its resistance changes. Therefore, these components can be used in a circuit-protection role similar to fuses, or for feedback in circuits, or for many other purposes. In general, self-heating can turn a resistor into a nonlinear and hysteretic circuit element. For more details see Thermistor#Self-heating effects. If the temperature T does not vary too much, a linear approximation is typically used: R(T) = R₀[1 + α(T − T₀)], where α is called the temperature coefficient of resistance, T₀ is a fixed reference temperature (usually room temperature), and R₀ is the resistance at temperature T₀. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature that α was measured at with a suffix, such as α₁₅, and the relationship only holds in a range of temperatures around the reference. The temperature coefficient α is typically +3×10⁻³ K⁻¹ to +6×10⁻³ K⁻¹ for metals near room temperature. It is usually negative for semiconductors and insulators, with highly variable magnitude. Strain dependence Just as the resistance of a conductor depends upon temperature, the resistance of a conductor depends upon strain. By placing a conductor under tension (a form of stress that leads to strain in the form of stretching of the conductor), the length of the section of conductor under tension increases and its cross-sectional area decreases. Both these effects contribute to increasing the resistance of the strained section of conductor. Under compression (strain in the opposite direction), the resistance of the strained section of conductor decreases. See the discussion on strain gauges for details about devices constructed to take advantage of this effect. Light illumination dependence Some resistors, particularly those made from semiconductors, exhibit photoconductivity, meaning that their resistance changes when light is shining on them. Therefore, they are called photoresistors (or light-dependent resistors). These are a common type of light detector. Superconductivity Superconductors are materials that have exactly zero resistance and infinite conductance, because they can have V = 0 and I ≠ 0. This also means there is no Joule heating, or in other words no dissipation of electrical energy. Therefore, if superconductive wire is made into a closed loop, current flows around the loop forever. Superconductors require cooling to temperatures near 4 K with liquid helium for most metallic superconductors like niobium–tin alloys, or cooling to temperatures near 77 K with liquid nitrogen for the expensive, brittle and delicate ceramic high-temperature superconductors. Nevertheless, there are many technological applications of superconductivity, including superconducting magnets.
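A minimal Python sketch of the linear temperature model above: the coefficient used for platinum (roughly 3.9×10⁻³ K⁻¹) is an approximate published figure, and the 100 Ω reference resistance is an arbitrary example in the style of a nominal Pt100 element.

# Sketch: linear temperature model R(T) = R0 * (1 + alpha * (T - T0)).
alpha_platinum = 3.9e-3     # per kelvin, approximate value for platinum near room temperature
R0 = 100.0                  # ohms at the reference temperature (example)
T0 = 20.0                   # reference temperature in celsius (example)

def resistance(T_celsius):
    return R0 * (1 + alpha_platinum * (T_celsius - T0))

def temperature_from_resistance(R_measured):
    # Inverting the linear model: infer temperature from a resistance reading.
    return T0 + (R_measured / R0 - 1) / alpha_platinum

print(resistance(100.0))                      # ~131.2 ohm at 100 C
print(temperature_from_resistance(110.0))     # ~45.6 C for a 110-ohm reading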
Physical sciences
Electrical circuits
null
61580
https://en.wikipedia.org/wiki/Electrical%20resistivity%20and%20conductivity
Electrical resistivity and conductivity
Electrical resistivity (also called volume resistivity or specific electrical resistance) is a fundamental specific property of a material that measures its electrical resistance or how strongly it resists electric current. A low resistivity indicates a material that readily allows electric current. Resistivity is commonly represented by the Greek letter ρ (rho). The SI unit of electrical resistivity is the ohm-metre (Ω⋅m). For example, if a solid cube of material has sheet contacts on two opposite faces, and the resistance between these contacts is 1 Ω, then the resistivity of the material is 1 Ω⋅m. Electrical conductivity (or specific conductance) is the reciprocal of electrical resistivity. It represents a material's ability to conduct electric current. It is commonly signified by the Greek letter σ (sigma), but κ (kappa) (especially in electrical engineering) and γ (gamma) are sometimes used. The SI unit of electrical conductivity is siemens per metre (S/m). Resistivity and conductivity are intensive properties of materials, giving the opposition of a standard cube of material to current. Electrical resistance and conductance are corresponding extensive properties that give the opposition of a specific object to electric current. Definition Ideal case In an ideal case, cross-section and physical composition of the examined material are uniform across the sample, and the electric field and current density are both parallel and constant everywhere. Many resistors and conductors do in fact have a uniform cross section with a uniform flow of electric current, and are made of a single material, so that this is a good model. When this is the case, the resistance of the conductor is directly proportional to its length and inversely proportional to its cross-sectional area, where the electrical resistivity ρ (Greek: rho) is the constant of proportionality. This is written as R = ρℓ/A, where R is the resistance of a uniform specimen of the material, ℓ is its length and A is its cross-sectional area. The resistivity can be expressed using the SI unit ohm metre (Ω⋅m) — i.e. ohms multiplied by square metres (for the cross-sectional area) then divided by metres (for the length). Both resistance and resistivity describe how difficult it is to make electrical current flow through a material, but unlike resistance, resistivity is an intrinsic property and does not depend on geometric properties of a material. This means that all pure copper (Cu) wires (which have not been subjected to distortion of their crystalline structure etc.), irrespective of their shape and size, have the same resistivity ρ, but a long, thin copper wire has a much larger resistance R than a thick, short copper wire. Every material has its own characteristic resistivity. For example, rubber has a far larger resistivity than copper. In a hydraulic analogy, passing current through a high-resistivity material is like pushing water through a pipe full of sand, while passing current through a low-resistivity material is like pushing water through an empty pipe. If the pipes are the same size and shape, the pipe full of sand has higher resistance to flow. Resistance, however, is not determined solely by the presence or absence of sand. It also depends on the length and width of the pipe: short or wide pipes have lower resistance than narrow or long pipes. The above equation can be transposed to get Pouillet's law (named after Claude Pouillet): ρ = RA/ℓ. The resistance of a given element is proportional to the length, but inversely proportional to the cross-sectional area.
For example, if ℓ = 1 m and A = 1 m² (forming a cube with perfectly conductive contacts on opposite faces), then the resistance of this element in ohms is numerically equal to the resistivity of the material it is made of in Ω⋅m. Conductivity, σ, is the inverse of resistivity: σ = 1/ρ. Conductivity has SI units of siemens per metre (S/m). General scalar quantities If the geometry is more complicated, or if the resistivity varies from point to point within the material, the current and electric field will be functions of position. Then it is necessary to use a more general expression in which the resistivity at a particular point is defined as the ratio of the electric field to the density of the current it creates at that point: ρ = E/J, where E is the magnitude of the electric field and J is the magnitude of the current density. The current density is parallel to the electric field by necessity. Conductivity is the inverse (reciprocal) of resistivity. Here, it is given by σ = 1/ρ = J/E. For example, rubber is a material with large ρ and small σ, because even a very large electric field in rubber makes almost no current flow through it. On the other hand, copper is a material with small ρ and large σ, because even a small electric field pulls a lot of current through it. This expression simplifies to the formula given above under "ideal case" when the resistivity is constant in the material and the geometry has a uniform cross-section. In this case, the electric field and current density are constant and parallel. Derivation of the constant case from the general case: we combine three equations. Assume the geometry has a uniform cross-section and the resistivity is constant in the material. Then the electric field and current density are constant and parallel, and by the general definition of resistivity, we obtain ρ = E/J. Since the electric field is constant, it is given by the total voltage V across the conductor divided by the length ℓ of the conductor: E = V/ℓ. Since the current density is constant, it is equal to the total current divided by the cross-sectional area: J = I/A. Plugging in the values of E and J into the first expression, we obtain ρ = (V/ℓ)/(I/A) = VA/(Iℓ). Finally, we apply Ohm's law, V/I = R, giving ρ = RA/ℓ, which rearranges to R = ρℓ/A. Tensor resistivity When the resistivity of a material has a directional component, the most general definition of resistivity must be used. It starts from the tensor-vector form of Ohm's law, which relates the electric field inside a material to the electric current flow. This equation is completely general, meaning it is valid in all cases, including those mentioned above. However, this definition is the most complicated, so it is only directly used in anisotropic cases, where the more simple definitions cannot be applied. If the material is not anisotropic, it is safe to ignore the tensor-vector definition, and use a simpler expression instead. Here, anisotropic means that the material has different properties in different directions. For example, a crystal of graphite consists microscopically of a stack of sheets, and current flows very easily through each sheet, but much less easily from one sheet to the adjacent one. In such cases, the current does not flow in exactly the same direction as the electric field. Thus, the appropriate equations are generalized to the three-dimensional tensor form E = ρJ and J = σE, where the conductivity σ and resistivity ρ are rank-2 tensors, and the electric field E and current density J are vectors. These tensors can be represented by 3×3 matrices, the vectors with 3×1 matrices, with matrix multiplication used on the right side of these equations.
In matrix form, the resistivity relation is given by (E_x, E_y, E_z)ᵀ = ρ·(J_x, J_y, J_z)ᵀ, where ρ is the 3×3 matrix of resistivity components ρ_ij (i, j = x, y, z), and E and J are the electric field and current density vectors. Equivalently, resistivity can be given in the more compact Einstein notation: E_i = ρ_ij J_j. In either case, the resulting expression for each electric field component is: E_x = ρ_xx J_x + ρ_xy J_y + ρ_xz J_z, E_y = ρ_yx J_x + ρ_yy J_y + ρ_yz J_z, and E_z = ρ_zx J_x + ρ_zy J_y + ρ_zz J_z. Since the choice of the coordinate system is free, the usual convention is to simplify the expression by choosing an x-axis parallel to the current direction, so J_y = J_z = 0. This leaves: ρ_xx = E_x/J_x, ρ_yx = E_y/J_x, and ρ_zx = E_z/J_x. Conductivity is defined similarly, as J = σE or, in Einstein notation, J_i = σ_ij E_j, both resulting in: J_x = σ_xx E_x + σ_xy E_y + σ_xz E_z (and analogous expressions for J_y and J_z). Looking at the two expressions, σ and ρ are the matrix inverse of each other. However, in the most general case, the individual matrix elements are not necessarily reciprocals of one another; for example, σ_xx may not be equal to 1/ρ_xx. This can be seen in the Hall effect, where ρ_xy is nonzero. In the Hall effect, due to rotational invariance about the z-axis, ρ_yy = ρ_xx and ρ_yx = −ρ_xy, so the relation between resistivity and conductivity simplifies to: σ_xx = ρ_xx/(ρ_xx² + ρ_xy²) and σ_xy = −ρ_xy/(ρ_xx² + ρ_xy²). If the electric field is parallel to the applied current, ρ_xy and ρ_xz are zero. When they are zero, one number, ρ_xx, is enough to describe the electrical resistivity. It is then written as simply ρ, and this reduces to the simpler expression. Conductivity and current carriers Relation between current density and electric current velocity Electric current is the ordered movement of electric charges. Causes of conductivity Band theory simplified According to elementary quantum mechanics, an electron in an atom or crystal can only have certain precise energy levels; energies between these levels are impossible. When a large number of such allowed levels have close-spaced energy values – i.e. have energies that differ only minutely – those close energy levels in combination are called an "energy band". There can be many such energy bands in a material, depending on the atomic number of the constituent atoms and their distribution within the crystal. The material's electrons seek to minimize the total energy in the material by settling into low-energy states; however, the Pauli exclusion principle means that only one can exist in each such state. So the electrons "fill up" the band structure starting from the bottom. The characteristic energy level up to which the electrons have filled is called the Fermi level. The position of the Fermi level with respect to the band structure is very important for electrical conduction: only electrons in energy levels near or above the Fermi level are free to move within the broader material structure, since the electrons can easily jump among the partially occupied states in that region. In contrast, the low-energy states are completely filled with a fixed limit on the number of electrons at all times, and the high-energy states are empty of electrons at all times. Electric current consists of a flow of electrons. In metals there are many electron energy levels near the Fermi level, so there are many electrons available to move. This is what causes the high electronic conductivity of metals. An important part of band theory is that there may be forbidden bands of energy: energy intervals that contain no energy levels. In insulators and semiconductors, the number of electrons is just the right amount to fill a certain integer number of low-energy bands, exactly to the boundary. In this case, the Fermi level falls within a band gap. Since there are no available states near the Fermi level, and the electrons are not freely movable, the electronic conductivity is very low.
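Before turning to conduction in specific classes of materials, the tensor relations above can be checked numerically. The following Python/NumPy sketch (with made-up resistivity values of no physical significance) builds a Hall-effect-style resistivity matrix, inverts it to get the conductivity tensor, and shows that σ_xx differs from 1/ρ_xx once ρ_xy is nonzero.

import numpy as np

# Sketch: tensor resistivity with a Hall term (arbitrary illustrative numbers).
rho_xx, rho_xy = 1.0e-7, 3.0e-8    # ohm-metre; rho_xy is the Hall (off-diagonal) term
rho = np.array([[rho_xx,  rho_xy, 0.0],
                [-rho_xy, rho_xx, 0.0],
                [0.0,     0.0,    rho_xx]])

sigma = np.linalg.inv(rho)          # the conductivity tensor is the matrix inverse

J = np.array([1.0e6, 0.0, 0.0])     # current density along x (A/m^2)
E = rho @ J                         # E = rho . J picks up a transverse (y) component
print(E)                            # [0.1, -0.03, 0.0] V/m: the Hall field

print(sigma[0, 0], 1.0 / rho_xx)    # sigma_xx != 1/rho_xx when rho_xy is nonzero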
In metals A metal consists of a lattice of atoms, each with an outer shell of electrons that freely dissociate from their parent atoms and travel through the lattice. This is also known as a positive ionic lattice. This 'sea' of dissociable electrons allows the metal to conduct electric current. When an electrical potential difference (a voltage) is applied across the metal, the resulting electric field causes electrons to drift towards the positive terminal. The actual drift velocity of electrons is typically small, on the order of magnitude of metres per hour. However, due to the sheer number of moving electrons, even a slow drift velocity results in a large current density. The mechanism is similar to transfer of momentum of balls in a Newton's cradle but the rapid propagation of an electric energy along a wire is not due to the mechanical forces, but the propagation of an energy-carrying electromagnetic field guided by the wire. Most metals have electrical resistance. In simpler models (non quantum mechanical models) this can be explained by replacing electrons and the crystal lattice by a wave-like structure. When the electron wave travels through the lattice, the waves interfere, which causes resistance. The more regular the lattice is, the less disturbance happens and thus the less resistance. The amount of resistance is thus mainly caused by two factors. First, it is caused by the temperature and thus amount of vibration of the crystal lattice. Higher temperatures cause bigger vibrations, which act as irregularities in the lattice. Second, the purity of the metal is relevant as a mixture of different ions is also an irregularity. The small decrease in conductivity on melting of pure metals is due to the loss of long range crystalline order. The short range order remains and strong correlation between positions of ions results in coherence between waves diffracted by adjacent ions. In semiconductors and insulators In metals, the Fermi level lies in the conduction band (see Band Theory, above) giving rise to free conduction electrons. However, in semiconductors the position of the Fermi level is within the band gap, about halfway between the conduction band minimum (the bottom of the first band of unfilled electron energy levels) and the valence band maximum (the top of the band below the conduction band, of filled electron energy levels). That applies for intrinsic (undoped) semiconductors. This means that at absolute zero temperature, there would be no free conduction electrons, and the resistance is infinite. However, the resistance decreases as the charge carrier density (i.e., without introducing further complications, the density of electrons) in the conduction band increases. In extrinsic (doped) semiconductors, dopant atoms increase the majority charge carrier concentration by donating electrons to the conduction band or producing holes in the valence band. (A "hole" is a position where an electron is missing; such holes can behave in a similar way to electrons.) For both types of donor or acceptor atoms, increasing dopant density reduces resistance. Hence, highly doped semiconductors behave metallically. At very high temperatures, the contribution of thermally generated carriers dominates over the contribution from dopant atoms, and the resistance decreases exponentially with temperature. In ionic liquids/electrolytes In electrolytes, electrical conduction happens not by band electrons or holes, but by full atomic species (ions) traveling, each carrying an electrical charge. 
The resistivity of ionic solutions (electrolytes) varies tremendously with concentration: while distilled water is almost an insulator, salt water is a reasonable electrical conductor. Conduction in ionic liquids is also controlled by the movement of ions, but here we are talking about molten salts rather than solvated ions. In biological membranes, currents are carried by ionic salts. Small holes in cell membranes, called ion channels, are selective to specific ions and determine the membrane resistance. The concentration of ions in a liquid (e.g., in an aqueous solution) depends on the degree of dissociation of the dissolved substance, characterized by a dissociation coefficient α, which is the ratio of the concentration of ions N to the concentration of molecules of the dissolved substance N₀: α = N/N₀. The specific electrical conductivity σ of a solution is equal to σ = q(b⁺ + b⁻)αN₀, where q is the module of the ion charge, b⁺ and b⁻ are the mobilities of positively and negatively charged ions, N₀ is the concentration of molecules of the dissolved substance, and α is the coefficient of dissociation. Superconductivity The electrical resistivity of a metallic conductor decreases gradually as temperature is lowered. In normal (that is, non-superconducting) conductors, such as copper or silver, this decrease is limited by impurities and other defects. Even near absolute zero, a real sample of a normal conductor shows some resistance. In a superconductor, the resistance drops abruptly to zero when the material is cooled below its critical temperature. In a normal conductor, the current is driven by a voltage gradient, whereas in a superconductor, there is no voltage gradient and the current is instead related to the phase gradient of the superconducting order parameter. A consequence of this is that an electric current flowing in a loop of superconducting wire can persist indefinitely with no power source. In a class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but nonzero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. The resistance due to this effect is tiny compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen so that the resistance of the material becomes truly zero. Plasma Plasmas are very good conductors and electric potentials play an important role. The potential as it exists on average in the space between charged particles, independent of the question of how it can be measured, is called the plasma potential, or space potential. If an electrode is inserted into a plasma, its potential generally lies considerably below the plasma potential, due to what is termed a Debye sheath. The good electrical conductivity of plasmas makes their electric fields very small. This results in the important concept of quasineutrality, which says the density of negative charges is approximately equal to the density of positive charges over large volumes of the plasma (nₑ ≈ ⟨Z⟩nᵢ), but on the scale of the Debye length there can be charge imbalance.
In the special case that double layers are formed, the charge separation can extend some tens of Debye lengths. The magnitude of the potentials and electric fields must be determined by means other than simply finding the net charge density. A common example is to assume that the electrons satisfy the Boltzmann relation: nₑ ∝ exp(eΦ/k_B Tₑ). Differentiating this relation provides a means to calculate the electric field from the density: E = −∇Φ = −(k_B Tₑ/e)(∇nₑ/nₑ). (∇ is the vector gradient operator; see nabla symbol and gradient for more information.) It is possible to produce a plasma that is not quasineutral. An electron beam, for example, has only negative charges. The density of a non-neutral plasma must generally be very low, or the plasma must be very small; otherwise, the repulsive electrostatic force dissipates it. In astrophysical plasmas, Debye screening prevents electric fields from directly affecting the plasma over large distances, i.e., greater than the Debye length. However, the existence of charged particles causes the plasma to generate, and be affected by, magnetic fields. This can and does cause extremely complex behavior, such as the generation of plasma double layers, an object that separates charge over a few tens of Debye lengths. The dynamics of plasmas interacting with external and self-generated magnetic fields are studied in the academic discipline of magnetohydrodynamics. Plasma is often called the fourth state of matter after solids, liquids and gases. It is distinct from these and other lower-energy states of matter. Although it is closely related to the gas phase in that it also has no definite form or volume, it differs in a number of ways. Resistivity and conductivity of various materials A conductor such as a metal has high conductivity and a low resistivity. An insulator such as glass has low conductivity and a high resistivity. The conductivity of a semiconductor is generally intermediate, but varies widely under different conditions, such as exposure of the material to electric fields or specific frequencies of light, and, most important, with temperature and composition of the semiconductor material. The degree of semiconductor doping makes a large difference in conductivity. To a point, more doping leads to higher conductivity. The conductivity of a water/aqueous solution is highly dependent on its concentration of dissolved salts, and other chemical species that ionize in the solution. Electrical conductivity of water samples is used as an indicator of how salt-free, ion-free, or impurity-free the sample is; the purer the water, the lower the conductivity (the higher the resistivity). Conductivity measurements in water are often reported as specific conductance, relative to the conductivity of pure water at 25 °C. An EC meter is normally used to measure conductivity in a solution. A rough summary is as follows: This table shows the resistivity (ρ), conductivity (σ) and temperature coefficient of various materials at 20 °C. The effective temperature coefficient varies with temperature and purity level of the material. The 20 °C value is only an approximation when used at other temperatures. For example, the coefficient becomes lower at higher temperatures for copper, and the value 0.00427 is commonly specified at 0 °C. The extremely low resistivity (high conductivity) of silver is characteristic of metals.
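The enormous range of resistivities across material classes is easy to appreciate numerically. The following Python sketch uses rough order-of-magnitude resistivity values (typical handbook figures, not taken from the table referred to above) to compare the resistance of identically sized samples of a metal, a semiconductor, and an insulator.

# Sketch: resistance of identical 1 m x 1 mm^2 samples of different materials.
# Resistivities are rough order-of-magnitude values in ohm-metres.
materials = {
    "silver (metal)":      1.6e-8,
    "copper (metal)":      1.7e-8,
    "silicon (intrinsic)": 2.3e3,
    "glass (insulator)":   1.0e12,
}

length = 1.0        # metres
area = 1.0e-6       # 1 mm^2 expressed in m^2

for name, rho in materials.items():
    R = rho * length / area
    print(f"{name:20s} R = {R:.2e} ohm")
# The spread of roughly 20 orders of magnitude between metals and common insulators
# (and about 30 for the best insulators) dwarfs most other material properties.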
George Gamow tidily summed up the nature of the metals' dealings with electrons in his popular science book One, Two, Three...Infinity (1947): More technically, the free electron model gives a basic description of electron flow in metals. Wood is widely regarded as an extremely good insulator, but its resistivity is sensitively dependent on moisture content, with damp wood being a factor of at least 10¹⁰ worse as an insulator than oven-dry. In any case, a sufficiently high voltage – such as that in lightning strikes or some high-tension power lines – can lead to insulation breakdown and electrocution risk even with apparently dry wood. Temperature dependence Linear approximation The electrical resistivity of most materials changes with temperature. If the temperature T does not vary too much, a linear approximation is typically used: ρ(T) = ρ₀[1 + α(T − T₀)], where α is called the temperature coefficient of resistivity, T₀ is a fixed reference temperature (usually room temperature), and ρ₀ is the resistivity at temperature T₀. The parameter α is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, α is different for different reference temperatures. For this reason it is usual to specify the temperature that α was measured at with a suffix, such as α₁₅, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used. Metals In general, electrical resistivity of metals increases with temperature. Electron–phonon interactions can play a key role. At high temperatures, the resistance of a metal increases linearly with temperature. As the temperature of a metal is reduced, the temperature dependence of resistivity follows a power law function of temperature. Mathematically the temperature dependence of the resistivity ρ of a metal can be approximated through the Bloch–Grüneisen formula: ρ(T) = ρ(0) + A(T/Θ_R)ⁿ ∫₀^{Θ_R/T} xⁿ/((eˣ − 1)(1 − e⁻ˣ)) dx, where ρ(0) is the residual resistivity due to defect scattering, A is a constant that depends on the velocity of electrons at the Fermi surface, the Debye radius and the number density of electrons in the metal, and Θ_R is the Debye temperature as obtained from resistivity measurements, which matches very closely with the values of Debye temperature obtained from specific heat measurements. n is an integer that depends upon the nature of interaction: n = 5 implies that the resistance is due to scattering of electrons by phonons (as it is for simple metals), n = 3 implies that the resistance is due to s-d electron scattering (as is the case for transition metals), and n = 2 implies that the resistance is due to electron–electron interaction. The Bloch–Grüneisen formula is an approximation obtained assuming that the studied metal has a spherical Fermi surface inscribed within the first Brillouin zone and a Debye phonon spectrum. If more than one source of scattering is simultaneously present, Matthiessen's rule (first formulated by Augustus Matthiessen in the 1860s) states that the total resistance can be approximated by adding up several different terms, each with the appropriate value of n. As the temperature of the metal is sufficiently reduced (so as to 'freeze' all the phonons), the resistivity usually reaches a constant value, known as the residual resistivity. This value depends not only on the type of metal, but on its purity and thermal history. The value of the residual resistivity of a metal is decided by its impurity concentration.
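A short numerical sketch of the Bloch–Grüneisen expression above, using SciPy for the integral; the residual resistivity, prefactor A and Debye temperature below are arbitrary illustrative values, not fitted parameters for any real metal.

import numpy as np
from scipy.integrate import quad

def bloch_gruneisen(T, rho0, A, theta_R, n=5):
    """Sketch of rho(T) = rho0 + A*(T/theta_R)**n * integral_0^{theta_R/T} x^n/((e^x - 1)(1 - e^-x)) dx."""
    upper = theta_R / T
    integrand = lambda x: x**n / ((np.exp(x) - 1.0) * (1.0 - np.exp(-x)))
    integral, _ = quad(integrand, 0.0, upper)
    return rho0 + A * (T / theta_R)**n * integral

# Illustrative parameters only (not fitted to a real metal):
rho0, A, theta_R = 1.0e-10, 5.0e-8, 300.0     # ohm-metre, ohm-metre, kelvin
for T in (10.0, 100.0, 300.0):
    print(T, bloch_gruneisen(T, rho0, A, theta_R))
# At low T the phonon term grows roughly as T^5; at high T it grows linearly with T,
# matching the limiting behaviours described above.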
Some materials lose all electrical resistivity at sufficiently low temperatures, due to an effect known as superconductivity. An investigation of the low-temperature resistivity of metals was the motivation for Heike Kamerlingh Onnes's experiments that led in 1911 to the discovery of superconductivity. For details see History of superconductivity. Wiedemann–Franz law The Wiedemann–Franz law states that for materials where heat and charge transport is dominated by electrons, the ratio of thermal to electrical conductivity is proportional to the temperature: κ/σ = (π²/3)(k_B/e)²T, where κ is the thermal conductivity, k_B is the Boltzmann constant, e is the electron charge, T is temperature, and σ is the electric conductivity. The constant of proportionality (π²/3)(k_B/e)² is called the Lorenz number. Semiconductors In general, intrinsic semiconductor resistivity decreases with increasing temperature. The electrons are bumped to the conduction energy band by thermal energy, where they flow freely, and in doing so leave behind holes in the valence band, which also flow freely. The electric resistance of a typical intrinsic (non-doped) semiconductor decreases exponentially with temperature following an Arrhenius model: ρ = ρ₀ e^{E_A/(k_B T)}, with activation energy E_A. An even better approximation of the temperature dependence of the resistivity of a semiconductor is given by the Steinhart–Hart equation: 1/T = A + B ln ρ + C (ln ρ)³, where A, B and C are the so-called Steinhart–Hart coefficients. This equation is used to calibrate thermistors. Extrinsic (doped) semiconductors have a far more complicated temperature profile. As temperature increases starting from absolute zero they first decrease steeply in resistance as the carriers leave the donors or acceptors. After most of the donors or acceptors have lost their carriers, the resistance starts to increase again slightly due to the reducing mobility of carriers (much as in a metal). At higher temperatures, they behave like intrinsic semiconductors as the carriers from the donors/acceptors become insignificant compared to the thermally generated carriers. In non-crystalline semiconductors, conduction can occur by charges quantum tunnelling from one localised site to another. This is known as variable range hopping and has the characteristic form of ρ = A exp(T^(−1/n)), where n = 2, 3, 4, depending on the dimensionality of the system. Kondo insulators Kondo insulators are materials where the resistivity follows the formula ρ(T) = ρ₀ + aT² + bT⁵ + c_m ln(μ/T), where ρ₀, a, b and c_m are constant parameters: ρ₀ is the residual resistivity, aT² the Fermi liquid contribution, bT⁵ a lattice vibrations term and c_m ln(μ/T) the Kondo effect. Complex resistivity and conductivity When analyzing the response of materials to alternating electric fields (dielectric spectroscopy), in applications such as electrical impedance tomography, it is convenient to replace resistivity with a complex quantity called impedivity (in analogy to electrical impedance). Impedivity is the sum of a real component, the resistivity, and an imaginary component, the reactivity (in analogy to reactance). The magnitude of impedivity is the square root of the sum of squares of the magnitudes of resistivity and reactivity. Conversely, in such cases the conductivity must be expressed as a complex number (or even as a matrix of complex numbers, in the case of anisotropic materials) called the admittivity. Admittivity is the sum of a real component called the conductivity and an imaginary component called the susceptivity. An alternative description of the response to alternating currents uses a real (but frequency-dependent) conductivity, along with a real permittivity.
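Before continuing with the complex-conductivity picture, a brief aside on the Steinhart–Hart relation above: it is easy to apply in code. The Python sketch below converts a thermistor resistance reading to temperature; the coefficients are typical of a common 10 kΩ NTC thermistor but are illustrative values, not a calibration for any specific part.

import math

# Sketch: Steinhart-Hart conversion 1/T = A + B*ln(R) + C*ln(R)^3 (T in kelvin).
# Coefficients below are typical for a 10 kOhm NTC thermistor (illustrative only).
A = 1.129148e-3
B = 2.34125e-4
C = 8.76741e-8

def temperature_from_resistance(R_ohm):
    ln_r = math.log(R_ohm)
    inv_T = A + B * ln_r + C * ln_r**3
    return 1.0 / inv_T - 273.15          # kelvin -> celsius

for R in (25_000.0, 10_000.0, 4_000.0):  # resistance falls as the device warms up
    print(f"R = {R:7.0f} ohm  ->  T = {temperature_from_resistance(R):5.1f} C")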
The larger the conductivity is, the more quickly the alternating-current signal is absorbed by the material (i.e., the more opaque the material is). For details, see Mathematical descriptions of opacity. Resistance versus resistivity in complicated geometries Even if the material's resistivity is known, calculating the resistance of something made from it may, in some cases, be much more complicated than the formula R = ρℓ/A above. One example is spreading resistance profiling, where the material is inhomogeneous (different resistivity in different places), and the exact paths of current flow are not obvious. In cases like this, the formulas must be replaced with E = ρJ and J = σE, where E and J are now vector fields. This equation, along with the continuity equation for J and the Poisson's equation for E, forms a set of partial differential equations. In special cases, an exact or approximate solution to these equations can be worked out by hand, but for very accurate answers in complex cases, computer methods like finite element analysis may be required. Resistivity-density product In some applications where the weight of an item is very important, the product of resistivity and density is more important than absolute low resistivity – it is often possible to make the conductor thicker to make up for a higher resistivity; and then a low-resistivity-density-product material (or equivalently a high conductivity-to-density ratio) is desirable. For example, for long-distance overhead power lines, aluminium is frequently used rather than copper (Cu) because it is lighter for the same conductance. Silver, although it is the least resistive metal known, has a high density and performs similarly to copper by this measure, but is much more expensive. Calcium and the alkali metals have the best resistivity-density products, but are rarely used for conductors due to their high reactivity with water and oxygen (and lack of physical strength). Aluminium is far more stable. Toxicity excludes the choice of beryllium. (Pure beryllium is also brittle.) Thus, aluminium is usually the metal of choice when the weight or cost of a conductor is the driving consideration. History John Walsh and the conductivity of a vacuum In a 1774 letter to Dutch-born British scientist Jan Ingenhousz, Benjamin Franklin relates an experiment by another British scientist, John Walsh, that purportedly showed this astonishing fact: although rarified air conducts electricity better than common air, a vacuum does not conduct electricity at all. However, to this statement a note (based on modern knowledge) was added by the editors—at the American Philosophical Society and Yale University—of the webpage hosting the letter.
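As a closing numerical aside on the resistivity-density trade-off discussed above, the Python sketch below compares the mass of copper and aluminium conductors sized for the same end-to-end resistance; the resistivity and density figures are typical handbook values, and the line dimensions are arbitrary examples.

# Sketch: mass of a conductor sized for a fixed resistance (resistivity-density trade-off).
# rho in ohm-metres, density in kg/m^3 (typical handbook values).
materials = {
    "copper":    (1.68e-8, 8960.0),
    "aluminium": (2.65e-8, 2700.0),
}

length = 1000.0          # a 1 km span (example)
target_resistance = 0.5  # ohms end to end (example)

for name, (rho, density) in materials.items():
    area = rho * length / target_resistance      # from R = rho*L/A, solved for A
    mass = density * area * length               # volume times density
    print(f"{name:9s}: cross-section {area*1e6:6.1f} mm^2, mass {mass:7.1f} kg")
# Aluminium needs a fatter cross-section for the same resistance, but its lower
# density still makes the finished conductor roughly half the weight of copper.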
Physical sciences
Electrodynamics
Physics
61604
https://en.wikipedia.org/wiki/Tradescantia
Tradescantia
Tradescantia is a genus of 85 species of herbaceous perennial wildflowers in the family Commelinaceae, native to the Americas from southern Canada to northern Argentina, including the West Indies. Members of the genus are known by many common names, including inchplant, wandering jew, spiderwort, dayflower and trad. Tradescantia are commonly found individually or in clumps in wooded areas and open fields. They were introduced into Europe as ornamental plants in the 17th century and are now grown in many parts of the world. Some species have become naturalized in regions of Europe, Asia, Africa, and Australia, and on some oceanic islands. The genus's many species are of interest to cytogenetics because of evolutionary changes in the structure and number of their chromosomes. They have also been used as bioindicators for the detection of environmental mutagens. Some species have become pests to cultivated crops and are considered invasive. Description Tradescantia are herbaceous perennials and include both climbing and trailing species. The stems are usually succulent or semi-succulent, and the leaves are sometimes semi-succulent. The leaves are long, thin and blade-like to lanceolate. The flowers can be white, pink, purple or blue, with three petals and six yellow anthers (or rarely, four petals and eight anthers). The sap is mucilaginous and clear. A number of species have flowers that last for only a day, opening in the morning and closing by the evening. Etymology The scientific name of the genus chosen by Carl Linnaeus honours the English naturalists and explorers John Tradescant the Elder (c. 1570s – 1638) and John Tradescant the Younger (1608–1662), who introduced many new plants to English gardens. Tradescant the Younger visited the new colony of Virginia in 1637 (and possibly twice more in later years). From there, the type species, Tradescantia virginiana, was brought to England in 1629. Plants of the genus are called by many common names, varying by region and country. The name "inchplant" is thought to describe the plant's fast growth, or the fact that leaves are an inch apart on the stem. "Spiderwort" refers to the sap, which dries into web-like threads when a stem is cut. The name "dayflower", shared with other members of the Commelinaceae family, refers to the flowers, which open and close within a single day. The controversial name "wandering Jew" originates from the Christian myth of the Wandering Jew, condemned to wander the earth for taunting Jesus on the way to his crucifixion. In recent years there have been efforts to stop using this and other potentially offensive common names, in favour of alternatives such as "wandering dude" or "wandering willie". In Spanish, Tradescantia plants are sometimes referred to as flor de Santa Lucía (Saint Lucy's flower), in reference to the Saint's reputation as the patron saint of sight, and the use of the juice of the plant as eye drops to relieve congestion. Taxonomy Subdivisions and species The number of species and infrageneric taxa has changed throughout history. The first major classification proposed by Hunt (1980) included 60 species divided into eight sections, with one section divided into a further four series. Hunt's 1986 revision united several small genera with Tradescantia as sections, resulting in a total of twelve sections comprising 68 species, and this infrageneric classification was accepted for several decades.
A recent study by Pellegrini (2017) proposed a new classification based on recent morphological research, dividing the genus into five subgenera. As of December 2023, The Royal Botanic Gardens, Kew recognises 86 species. Unclassified Tradescantia petiolaris M.E.Jones Formerly placed here Tradescantia × andersoniana W.Ludw. & Rohweder The name was published with no description, so it is not a valid botanical name; the taxon is now treated as a cultivar group. Callisia navicularis (Ortgies) D.R.Hunt (as T. navicularis Ortgies) Callisia warszewicziana (Kunth & C.D.Bouché) D.R.Hunt (as T. warszewicziana Kunth & C.D.Bouché) Gibasis geniculata (Jacq.) Rohweder (as T. geniculata Jacq.) Gibasis karwinskyana (Schult. & Schult.f.) Rohweder (as T. karwinskyana Schult. & Schult.f.) Gibasis pellucida (M.Martens & Galeotti) D.R.Hunt (as T. pellucida M.Martens & Galeotti) Siderasis fuscata (Lodd. et al.) H.E.Moore (as T. fuscata Lodd. et al.) Tinantia anomala (Torr.) C.B.Clarke (as T. anomala Torr.) Tripogandra diuretica (Mart.) Handlos (as T. diuretica Mart.) Elasis hirsuta (Kunth) D.R.Hunt (as T. hirsuta) Distribution and habitat The first species described, the Virginia spiderwort, T. virginiana, is native to the eastern United States from Maine to Alabama, and to Canada in southern Ontario. Virginia spiderwort was introduced to Europe in 1629, where it is cultivated as a garden flower. The natural range of the genus as a whole spans nearly the entire length and width of mainland North America, from Canada through Mexico and Central America, and the genus thrives in a great diversity of temperate and tropical habitats. It is frequently found in thinly wooded deciduous forests, plains, prairies, and healthy fields, often alongside other native wildflowers. Conservation The western spiderwort T. occidentalis is listed as an endangered species in Canada, where the northernmost populations of the species are found at a few sites in southern Saskatchewan, Manitoba and Alberta; it is more common further south in the United States to Texas and Arizona. Cultivation Spiderworts are popular in Europe and North America as ornamental plants. Temperate species are grown as hardy garden perennials, while tropical species such as T. zebrina and T. spathacea are used as house plants. Their popularity and easy spreading nature have led to some species being considered serious weeds in certain places (see below). Most cold-hardy garden plants belong to the Andersoniana Group (often referred to with the invalid name Tradescantia × andersoniana). This is a group of interspecific hybrids developed from Tradescantia virginiana, T. ohiensis, and T. subaspera, which have overlapping ranges within continental North America. These plants are clump-forming herbaceous perennials, with individual cultivars mainly differing in flower colour. A wide range of tender tropical species are cultivated as houseplants or outdoor annuals in temperate locations, including Tradescantia zebrina, T. fluminensis, T. spathacea, T. sillamontana, and T. pallida. They are typically grown for their foliage, and many have colourful variegated patterns of silver, purple, green, pink, and gold. Cultivars The following cultivars have gained the Royal Horticultural Society's Award of Garden Merit: T. (Andersoniana Group) 'Concord Grape' T. cerinthoides 'Nanouk' T. cerinthoides 'Variegata' T. fluminensis 'Aurea' T. fluminensis 'Quicksilver' T. pallida 'Purpurea' T. spathacea 'Rainbow' T. zebrina 'Purpusii' T.
zebrina 'Quadricolor' The International Society for Horticultural Science appointed Tradescantia Hub as an International Cultivar Registration Authority (ICRA) for Tradescantia in 2022. As an ICR authority, the Hub is responsible for recording and maintaining a checklist of the correct names for all cultivars in the genus. Weeds Due to its ready propagation from stem fragments and its domination of the ground layer in many forest environments, T. fluminensis has become a major environmental weed in Australia, New Zealand and the southern United States. Other species considered invasive weeds in certain places include T. pallida, T. spathacea, and T. zebrina. Toxicity Some members of the genus Tradescantia may cause allergic reactions in pets (especially cats and dogs) characterised by red, itchy skin. Notable culprits include T. albiflora (scurvy weed), T. spathacea (Moses in the cradle), and T. pallida (purple heart). Uses Native Americans used T. virginiana to treat a number of conditions, including stomachache. It was also used as a food source. The cells of the stamen hairs of some Tradescantia are colored blue, but when exposed to sources of ionizing radiation such as gamma rays or pollutants like sulphur dioxide from industries, the cells mutate and change color to pink; they are one of the few tissues known to serve as an effective bioassay for ambient radiation levels. Gallery
Biology and health sciences
Commelinales
Plants
61692
https://en.wikipedia.org/wiki/PC%20Card
PC Card
PC Card is a parallel peripheral interface for laptop computers and PDAs. The PCMCIA originally introduced the 16-bit ISA-based PCMCIA Card in 1990, but renamed it to PC Card in March 1995 to avoid confusion with the name of the organization. The CardBus PC Card was introduced as a 32-bit version of the original PC Card, based on the PCI specification. CardBus slots are backwards compatible, but older slots are not forward compatible with CardBus cards. Although originally designed as a standard for memory-expansion cards for computer storage, the existence of a usable general standard for notebook peripherals led to the development of many kinds of devices including network cards, modems, and hard disks. The PC Card port has been superseded by the ExpressCard interface since 2003, which was also initially developed by the PCMCIA. The organization dissolved in 2009, with its assets merged into the USB Implementers Forum. Applications Many notebooks in the 1990s had two adjacent type-II slots, which allowed installation of two type-II cards or one, double-thickness, type-III card. The cards were also used in early digital SLR cameras, such as the Kodak DCS 300 series. However, their original use as storage expansion is no longer common. Some manufacturers such as Dell continued to offer them into 2012 on their ruggedized XFR notebooks. Mercedes-Benz used a PCMCIA card reader in the W221 S-Class for model years 2006–2009. It was used for reading media files such as MP3 audio files to play through the COMAND infotainment system. After 2009, it was replaced with a standard SD card reader. Some vehicles from Honda equipped with a navigation system continued to include a PC Card reader integrated into the audio system. Some Japanese-brand consumer entertainment devices such as TV sets include a PC Card slot for playback of media. Adapters for PC Cards to Personal Computer ISA slots were available when these technologies were current. CardBus adapters for PCI slots have also been made. These adapters were sometimes used to fit wireless (802.11) PCMCIA cards into desktop computers with PCI slots. History Before the introduction of the PCMCIA card, the parallel port was commonly used for portable peripherals. The PCMCIA 1.0 card standard was published by the Personal Computer Memory Card International Association in November 1990 and was soon adopted by more than eighty vendors. It corresponds with the Japanese JEIDA memory card 4.0 standard. It was originally developed to support memory cards. Intel authored the Exchangeable Card Architecture (ExCA) specification, but later merged this into the PCMCIA. SanDisk (operating at the time as "SunDisk") launched its PCMCIA card in October 1992. The company was the first to introduce a writeable flash RAM card for the HP 95LX (an early MS-DOS pocket computer). These cards conformed to a supplemental PCMCIA-ATA standard that allowed them to appear as more conventional IDE hard drives to the 95LX or a PC. This had the advantage of raising the upper limit on capacity to the full 32 MB available under DOS 3.22 on the 95LX. New Media Corporation was one of the first companies established for the express purpose of manufacturing PC Cards; it became a major OEM for laptop manufacturers such as Toshiba and Compaq for PC Card products. It soon became clear that the PCMCIA card standard needed expansion to support "smart" I/O cards to address the emerging need for fax, modem, LAN, hard disk and floppy disk cards.
It also needed interrupt facilities and hot plugging, which required the definition of new BIOS and operating system interfaces. This led to the introduction of release 2.0 of the PCMCIA standard and JEIDA 4.1 in September 1991, which saw corrections and expansion with Card Services (CS) in the PCMCIA 2.1 standard in November 1992. To recognize increased scope beyond memory, and to aid in marketing, the association acquired the rights to the simpler term "PC Card" from IBM. This was the name of the standard from version 2 of the specification onwards. These cards were used for wireless networks, modems, and other functions in notebook PCs. After the release of PCIe-based ExpressCard in 2003, laptop manufacturers started to fit ExpressCard slots to new laptops instead of PC Card slots. Form factors All PC Card devices use a similarly sized package which is 85.6 mm long and 54.0 mm wide, the same size as a credit card. Type I Cards designed to the original specification (PCMCIA 1.0) are type I and have a 16-bit interface. They are 3.3 mm thick and have a dual row of 34 holes (68 in total) along a short edge as a connecting interface. Type-I PC Card devices are typically used for memory devices such as RAM, flash memory, OTP (One-Time Programmable), and SRAM cards. Type II Type-II cards were introduced with version 2.0 of the standard. Type-II and above PC Card devices use two rows of 34 sockets, and have a 16- or 32-bit interface. They are 5.0 mm thick. Type-II cards introduced I/O support, allowing devices to attach an array of peripherals or to provide connectors/slots to interfaces for which the host computer had no built-in support. For example, many modem, network, and TV cards accept this configuration. Due to their thinness, most Type II interface cards have miniature interface connectors on the card connecting to a dongle, a short cable that adapts from the card's miniature connector to an external full-size connector. Some cards instead have a lump on the end with the connectors. This is more robust and convenient than a separate adapter but can block the other slot where slots are present in a pair. Some Type II cards, most notably network interface and modem cards, have a retractable jack, which can be pushed into the card and will pop out when needed, allowing insertion of a cable from above. When use of the card is no longer needed, the jack can be pushed back into the card and locked in place, protecting it from damage. Most network cards have their jack on one side, while most modems have their jack on the other side, allowing the use of both at the same time as they do not interfere with each other. Wireless Type II cards often had a plastic shroud that jutted out from the end of the card to house the antenna. In the mid-1990s, PC Card Type II hard disk drive cards became available; previously, PC Card hard disk drives were only available in Type III. Type III Type-III cards were introduced with version 2.01 of the standard in 1992. Type-III PC Card devices are 16-bit or 32-bit. These cards are 10.5 mm thick, allowing them to accommodate devices with components that would not fit type I or type II height. Examples are hard disk drive cards, and interface cards with full-size connectors that do not require dongles (as is commonly required with type II interface cards). Type IV Type-IV cards, introduced by Toshiba, were not officially standardized or sanctioned by the PCMCIA. These cards are 16 mm thick.
Bus Original The original standard was defined for both 5 V and 3.3 V cards, with 3.3 V cards having a key on the side to prevent them from being inserted fully into a 5 V-only slot. Some cards and some slots operate at both voltages as needed. The original standard was built around an 'enhanced' 16-bit ISA bus platform. A newer version of the PCMCIA standard is CardBus (see below), a 32-bit version of the original standard. In addition to supporting a wider bus of 32 bits (instead of the original 16), CardBus also supports bus mastering and operation speeds up to 33 MHz. CardBus CardBus cards are PCMCIA 5.0 or later (JEIDA 4.2 or later) 32-bit PCMCIA devices, introduced in 1995 and present in laptops from late 1997 onward. CardBus is effectively a 32-bit, 33 MHz PCI bus in the PC Card design. CardBus supports bus mastering, which allows a controller on the bus to talk to other devices or memory without going through the CPU. Many chipsets, such as those that support Wi-Fi, are available for both PCI and CardBus. The notch on the left-hand front of the device is slightly shallower on a CardBus device so, by design, a 32-bit device cannot be plugged into earlier equipment supporting only 16-bit devices. Most new slots accept both CardBus and the original 16-bit PC Card devices. CardBus cards can be distinguished from older cards by the presence of a gold band with eight small studs on the top of the card next to the pin sockets. The speed of CardBus interfaces in 32-bit burst mode depends on the transfer type: in byte mode, transfer is 33 MB/s; in word mode it is 66 MB/s; and in dword (double-word) mode 132 MB/s. CardBay CardBay is a variant of the PCMCIA specification, introduced in 2001. It was intended to add some forward compatibility with USB and IEEE 1394, but was not universally adopted and only some notebooks have PC Card controllers with CardBay features. This is an implementation of Microsoft and Intel's joint Drive Bay initiative. Design The card information structure (CIS) is metadata stored on a PC Card that contains information about the formatting and organization of the data on the card. The CIS also contains information such as: the type of card, supported power supply options, supported power-saving capabilities, the manufacturer, and the model number. When a card is unrecognized, it is frequently because the CIS information is either lost or damaged. Descendants and variants ExpressCard ExpressCard is a later specification from the PCMCIA, intended as a replacement for PC Card, built around the PCI Express and USB 2.0 standards. The PC Card standard is closed to further development, and PCMCIA strongly encourages future product designs to utilize the ExpressCard interface. From about 2006, ExpressCard slots replaced PCMCIA slots in laptop computers, with a few laptops having both in the transition period. ExpressCard and CardBus sockets are physically and electrically incompatible. ExpressCard-to-CardBus and CardBus-to-ExpressCard adapters are available that connect a CardBus card to an ExpressCard slot, or vice versa, and carry out the required electrical interfacing. These adapters do not handle older non-CardBus PCMCIA cards. PC Card devices can be plugged into an ExpressCard adaptor, which provides a PCI-to-PCIe bridge. Despite being much faster in speed/bandwidth, ExpressCard was not as popular as PC Card, due in part to the ubiquity of USB ports on modern computers. Most functionality provided by PC Card or ExpressCard devices is now available as an external USB device.
These USB devices have the advantage of being compatible with desktop computers as well as portable devices. (Desktop computers were rarely fitted with a PC Card or ExpressCard slot.) This reduced the requirement for internal expansion slots; by 2011, many laptops had none. Some IBM ThinkPad laptops took their onboard RAM (in sizes ranging from 4 to 16 MB) in the form factor of an IC-DRAM Card. While very similar in form factor, these cards did not go into a standard PC Card slot, often being installed under the keyboard, for example. They were also not pin-compatible, as they had 88 pins but in two staggered rows, as opposed to the even rows of PC Cards. These correspond to versions 1 and 2 of the JEIDA memory card standard. Others The shape is also used by the Common Interface form of conditional-access modules for DVB, and by Panasonic for their professional "P2" video acquisition memory cards. A CableCARD conditional-access module is a Type II PC Card intended to be plugged into a cable set-top box or digital cable-ready television. The interface has spawned a generation of flash memory cards that set out to improve on the size and features of Type I cards: CompactFlash, MiniCard, P2 Card and SmartMedia. For example, the PC Card electrical specification is also used for CompactFlash, so a PC Card CompactFlash adapter can be a passive physical adapter rather than requiring additional circuitry. CompactFlash is a smaller-dimensioned, 50-pin subset of the 68-pin PC Card interface. It requires a setting for the interface mode of either "memory" or "ATA storage". The EOMA68 open-source hardware standard uses the same 68-pin PC Card connectors and corresponds to the PC Card form factor in many other ways.
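As a rough illustration of the CardBus burst-mode figures quoted above (33, 66 and 132 MB/s at 33 MHz), the following sketch simply multiplies the clock rate by the number of bytes moved per cycle; it ignores real-world protocol and arbitration overhead, and the variable names are illustrative rather than taken from any specification.

```python
# Illustrative only: peak CardBus burst rates as clock rate x bytes per cycle.
CLOCK_HZ = 33_000_000  # 33 MHz CardBus/PCI-style clock

for mode, bytes_per_cycle in [("byte", 1), ("word", 2), ("dword", 4)]:
    mb_per_s = CLOCK_HZ * bytes_per_cycle / 1_000_000
    print(f"{mode:>5} mode: {mb_per_s:.0f} MB/s")
# Prints 33, 66 and 132 MB/s, matching the rates given in the text.
```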
Technology
Computer hardware
null
61697
https://en.wikipedia.org/wiki/Aye-aye
Aye-aye
The aye-aye (Daubentonia madagascariensis) is a long-fingered lemur, a strepsirrhine primate native to Madagascar with rodent-like teeth that perpetually grow and a special thin middle finger that it can use to catch grubs and larvae out of tree trunks. It is the world's largest nocturnal primate. It is characterized by its unusual method of finding food: it taps on trees to find grubs, then gnaws holes in the wood using its forward-slanting incisors to create a small hole into which it inserts its narrow middle finger to pull the grubs out. This foraging method is called percussive foraging, and takes up 5–41% of foraging time. The only other living mammal species known to find food in this way are the striped possum and trioks (genus Dactylopsila) of northern Australia and New Guinea, which are marsupials. From an ecological point of view, the aye-aye fills the niche of a woodpecker, as it is capable of penetrating wood to extract the invertebrates within. The aye-aye is the only extant member of the genus Daubentonia and family Daubentoniidae. It is currently classified as Endangered by the IUCN. A second species, Daubentonia robusta, appears to have become extinct at some point within the last 1000 years, and is known from subfossil finds. Etymology The genus Daubentonia was named after the French naturalist Louis-Jean-Marie Daubenton by his student, Étienne Geoffroy Saint-Hilaire, in 1795. Initially, Geoffroy considered using the Greek name Scolecophagus ("worm-eater") in reference to its eating habits, but he decided against it because he was uncertain about the aye-aye's habits and whether other related species might eventually be discovered. In 1863, British zoologist John Edward Gray coined the family name Daubentoniidae. The French naturalist Pierre Sonnerat was the first to use the vernacular name "aye-aye" in 1782 when he described and illustrated the lemur, though it was also called the "long-fingered lemur" by English zoologist George Shaw in 1800—a name that did not stick. According to Sonnerat, the name "aye-aye" was a cry of exclamation and astonishment. However, American paleoanthropologist Ian Tattersall noted in 1982 that the name resembles the Malagasy name "hai hai" or "hay hay" (also ahay or haihay), which refers to the animal and is used around the island. According to Dunkel et al. (2012), the widespread use of the Malagasy name indicates that the name could not have come from Sonnerat. Another hypothesis proposed by Simons and Meyers (2001) is that it derives from "heh heh", which is Malagasy for "I don't know". If correct, then the name might have originated from Malagasy people saying "heh heh" to avoid saying the name of a feared, magical animal. Evolutionary history and taxonomy Because of its derived morphological features, the classification of the aye-aye was debated following its discovery. The possession of continually growing incisors (front teeth) parallels those of rodents, leading early naturalists to mistakenly classify the aye-aye within the mammalian order Rodentia and as a squirrel, due to its toes, hair coloring, and tail. However, the aye-aye is also similar to felines in its head shape, eyes, ears and nostrils. The aye-aye's classification with the order Primates has been just as uncertain. It has been considered a highly derived member of the family Indridae, a basally diverging branch of the strepsirrhine suborder, and of indeterminate relation to all living primates.
In 1931, Anthony and Coupin classified the aye-aye under infraorder Chiromyiformes, a sister group to the other strepsirrhines. Colin Groves upheld this classification in 2005 because he was not entirely convinced the aye-aye formed a clade with the rest of the Malagasy lemurs. However, molecular results have consistently placed Daubentonia as the most basally diverging of lemurs. The most parsimonious explanation for this is that all lemurs are derived from a single ancestor that rafted from Africa to Madagascar during the Paleogene. Similarities in dentition between aye-ayes and several African primate fossils (Plesiopithecus and Propotto) have led to the alternate theory that the ancestors of aye-ayes colonized Madagascar separately from other lemurs. In 2008, Russell Mittermeier, Colin Groves, and others ignored addressing higher-level taxonomy by defining lemurs as monophyletic and containing five living families, including Daubentoniidae. Further evidence indicating that the aye-aye belongs in the superfamily Lemuroidea can be inferred from the presence of petrosal bullae encasing the ossicles of the ear. The aye-ayes are also similar to lemurs in their shorter back legs. Anatomy and morphology A full-grown aye-aye is typically about long with a tail longer than its body. The species has an average head and body length of plus a tail of , and weighs around . Young aye-ayes typically are silver colored on their front and have a stripe down their back. However, as the aye-ayes begin to reach maturity, their bodies will be completely covered in thick fur and are typically not one solid color. On the head and back, the ends of the hair are typically tipped with white while the rest of the body will ordinarily be a yellow and/or brown color. Among the aye-aye's signature traits are its fingers. The third finger, which is much thinner than the others, is used for extracting grubs and insects out of trees, using the hooked nail. The finger is unique in the animal kingdom in that it possesses a ball-and-socket metacarpophalangeal joint, can reach the throat through a nostril and is used for picking one's nose and eating mucus (mucophagy) so harvested from inside the nose. The aye-aye has also evolved a sixth digit, a pseudothumb, to aid in gripping. The complex geometry of ridges on the inner surface of aye-aye ears helps to sharply focus not only echolocation signals from the tapping of its finger, but also to passively listen for any other sound produced by the prey. These ridges can be regarded as the acoustic equivalent of a Fresnel lens, and may be seen in a large variety of unrelated animals, such as lesser galago, bat-eared fox, mouse lemur, and others. Females have two nipples located in the region of the groin. The male's genitalia are similar to those of canids, with a large prostate and long baculum. Behaviour and lifestyle The aye-aye is a nocturnal and arboreal animal meaning that it spends most of its life high in the trees. Although they are known to come down to the ground on occasion, aye-ayes sleep, eat, travel and mate in the trees and are most commonly found close to the canopy where there is plenty of cover from the dense foliage. During the day, aye-ayes sleep in spherical nests in the forks of tree branches that are constructed out of leaves, branches and vines before emerging after dark to begin their hunt for food. Aye-aye are solitary animals that mark their large home range with scent. 
The smaller territories of females often overlap those of at least a couple of males. Male aye-ayes tend to share their territories with other males and are even known to share the same nests (although not at the same time), and can seemingly tolerate each other until they hear the call of a female that is looking for a mate. Mating season extends throughout the year, with females typically starting to breed at the age of three or four. They give birth to one offspring every two to three years. During the period of parenting, a female becomes the dominant figure over males, likely to secure better access to food while caring for her young. The infant remains in a nest for up to two months before venturing out, but it takes another seven months before the young aye-aye can maneuver the canopy as skillfully as an adult. Diet and foraging The aye-aye is an omnivore and commonly eats seeds, nuts, fruits, nectar, plant exudates and fungi, but also xylophagous, or wood boring, insect larvae (especially cerambycid beetle larvae) and honey. Aye-ayes tap on the trunks and branches of trees at a rate of up to eight times per second, and listen to the echo produced to find hollow chambers. Studies have suggested that the acoustic properties associated with the foraging cavity have no effect on excavation behavior. Once a chamber is found, they chew a hole into the wood and get grubs out of that hole with their highly adapted narrow and bony middle fingers. The aye-aye begins foraging between 30 minutes before and three hours after sunset. Up to 80% of the night is spent foraging in the canopy, separated by occasional rest periods. It climbs trees by making successive vertical leaps, much like a squirrel. Horizontal movement is more difficult, but the aye-aye rarely descends to jump to another tree, and can often travel up to a night. Though foraging is usually solitary, they occasionally forage in groups. Individual movements within the group are coordinated using both vocalisations and scent signals. Social systems The aye-aye is classically considered 'solitary' as they have not been observed to groom each other. However, recent research suggests that it is more social than once thought. It usually sticks to foraging in its own personal home range, or territory. The home ranges of males often overlap, and the males can be very social with each other. Female home ranges never overlap, though a male's home range often overlaps that of several females. The male aye-ayes live in large areas up to , while females have smaller living spaces that go up to . It is difficult for the males to defend a singular female because of the large home range. They are seen exhibiting polygyny because of this. Regular scent marking with their cheeks and neck is how aye-ayes let others know of their presence and repel intruders from their territory. Like many other prosimians, the female aye-aye is dominant to the male. They are not typically monogamous, and will often challenge each other for mates. Male aye-ayes are very assertive in this way, and sometimes even pull other males away from a female during mating. Males are normally locked to females during mating in sessions that may last up to an hour. Outside of mating, males and females interact only occasionally, usually while foraging. The aye-aye is thought to be the only primate which uses echolocation to find its prey. Distribution and habitat The aye-aye lives primarily on the east coast of Madagascar. 
Its natural habitat is rainforest or dry deciduous forest, but many live in cultivated areas due to deforestation. Rainforest aye-ayes, the most common, dwell in canopy areas, and are usually sighted above 70 meters altitude. They sleep during the day in nests built from interwoven twigs and dead leaves up in the canopy among the vines and branches. Conservation The aye-aye was thought to be extinct in 1933, but was rediscovered in 1957. In 1966, nine individuals were transported to Nosy Mangabe, an island near Maroantsetra off eastern Madagascar. Recent research shows the aye-aye is more widespread than was previously thought, but its conservation status was changed to endangered in 2014. This is for four main reasons: the aye-aye is considered evil by local cultures, and is killed as such. The forests of Madagascar are declining in range due to deforestation. Local farmers will kill aye-ayes to protect their crops; aye-aye poaching is another major issue. However, there is no direct evidence to suggest aye-ayes pose any legitimate threat to crops and therefore are killed based on superstition. As many as 50 aye-ayes can be found in zoological facilities worldwide. Folk belief The aye-aye is often viewed as a harbinger of evil and death and killed on sight. Others believe, if one points its narrowest finger at someone, they are marked for death. Some say that the appearance of an aye-aye in a village predicts the death of a villager, and the only way to prevent this is to kill it. The Sakalava people go so far as to claim aye-ayes sneak into houses through the thatched roofs and murder the sleeping occupants by using their middle fingers to puncture their victims' aorta. Captive breeding The conservation of this species has been aided by captive breeding, primarily at the Duke Lemur Center in Durham, North Carolina. This center has been influential in keeping, researching and breeding aye-ayes and other lemurs. They have sent multiple teams to capture lemurs in Madagascar and have since created captive breeding groups for their lemurs. Specifically, they were responsible for the first aye-aye born into captivity and studied how he and the other aye-aye infants born at the center develop through infancy. They have also revolutionized the understanding of the aye-aye diet.
Biology and health sciences
Strepsirrhini
Animals
61700
https://en.wikipedia.org/wiki/Sahelanthropus
Sahelanthropus
Sahelanthropus is an extinct genus of hominid dated to about 7 million years ago, during the Late Miocene. The type species, Sahelanthropus tchadensis, was first announced in 2002, based mainly on a partial cranium, nicknamed Toumaï, discovered in northern Chad. The definitive phylogenetic position of Sahelanthropus within hominids is uncertain. It was initially described as a possible hominin ancestral to both humans and chimpanzees, but subsequent interpretations suggest that it could be an early member of the tribe Gorillini or a stem-hominid outside the hominins. Examinations of the postcranial skeleton of Sahelanthropus also indicated that this taxon was not a habitual biped. Taxonomy Discovery Four employees of the Centre National d'Appui à la Recherche (CNAR, National Research Support Center) of the Ministry of Higher Education of the Republic of Chad, three Chadians (Ahounta Djimdoumalbaye, Fanoné Gongdibé and Mahamat Adoum) and one Frenchman (Alain Beauvilain), collected and identified the first remains in the Toros-Menalla area (TM 266 locality) in the Djurab Desert of northern Chad on July 19, 2001. By the time Michel Brunet and colleagues formally described the remains in 2002, a total of six specimens had been recovered: a nearly complete but heavily deformed skull, a fragment of the midline of the jaw with the tooth sockets for an incisor and canine, a right third molar, a right first incisor, a right jawbone with the last premolar to last molar, and a right canine. With the skull as the holotype specimen, they were grouped into a new genus and species as Sahelanthropus tchadensis, the genus name referring to the Sahel, and the species name to Chad. These, along with Australopithecus bahrelghazali, were the first discoveries of any fossil African great ape (outside the genus Homo) made beyond eastern and southern Africa. By 2005, a third premolar had been recovered from the TM 266 locality, a lower jaw missing the region behind the second molar from the TM 292 locality, and a lower left jaw preserving the sockets for premolars and molars from the TM 247 locality. The skull was nicknamed Toumaï by the then-president of the Republic of Chad, Idriss Déby, not only because it is a name in the local Daza language meaning "hope of life", given to infants born just before the dry season who therefore have fairly limited chances of survival, but also to celebrate the memory of one of his comrades-in-arms, living in the north of the country where the fossil was discovered, and killed fighting to overthrow President Hissène Habré, who was supported by France. Toumaï also became a source of national pride, and Brunet announced the discovery before the Ministry of Foreign Affairs and a television audience in the capital of N'Djamena: "l'ancêtre de l'humanité est Tchadien...Le berceau de l'humanité se trouve au Tchad. Toumaï est votre ancêtre" ("The ancestor of humanity is Chadian...The cradle of humanity is in Chad. Toumaï is your ancestor."). Toumaï had been found with a femur, but this was stored with animal bones and shipped to the University of Poitiers in 2003, where it was stumbled upon by graduate student Aude Bergeret the next year. She took the bone to the head of the Department of Geosciences, Roberto Macchiarelli, who considered it to be inconsistent with bipedalism, contrary to what Brunet et al. had earlier stated in their description analysing only the distorted skull.
This was conspicuous because Brunet and his team had already explicitly stated that Toumaï was associated with no limb bones, which could have proven or disproven their conclusions about locomotion. Because Brunet had declined to comment on the subject, Macchiarelli and Bergeret petitioned to present their preliminary findings during an annual conference organised by the Anthropological Society of Paris, which would be held at Poitiers that year. This was rejected as they had not formally published their findings yet. They were able to publish a full description in 2020, and concluded Sahelanthropus was not bipedal. In 2022, French primatologist Franck Guy and colleagues reported that a hominin left femur (TM 266-01-063), and a right (TM 266-01-358) and a left (TM 266-01-050) ulna (forearm bone), were also discovered at the site in 2001, but were originally excluded from Sahelanthropus because they could not be reliably associated with the skull. They decided to include them because Sahelanthropus is the only hominin known from the site, and they concluded that the material is consistent with obligate bipedalism, the earliest evidence of such. In 2023, Meyer and colleagues suggested that its phylogenetic position and its status as a hominin still remain equivocal. All Sahelanthropus specimens, representing six to nine different adults, have been recovered within the area. Taphonomy Upon description, Brunet and colleagues were able to constrain the TM 266 locality to 7 or 6 million years ago (near the end of the Late Miocene) based on the animal assemblage, which made Sahelanthropus the earliest African ape known at the time. In 2008, Anne-Elisabeth Lebatard and colleagues (including Brunet) attempted to radiometrically date the sediments Toumaï was found near (dubbed the "anthracotheriid unit" after the commonplace Libycosaurus petrochii) using the 10Be/9Be ratio. Averaging the ages of 28 samples, they reported an approximate date of 7.2–6.8 million years ago. Their methods were soon challenged by Beauvilain, who clarified that Toumaï was found on loose sediments at the surface rather than being "unearthed", and had probably been exposed to the harsh sun and wind for some time, considering it was encrusted in an iron shell and desert varnish. This would mean it is unsafe to assume that the skull and nearby sediments were deposited at the same time, making such radiometric dating impossible. Further, the Sahelanthropus fossils lack the white siliceous cement which is present on every other fossil in the site, which would mean they date to different time periods. Because the large mammal fossils were scattered across the area instead of concentrated like the Sahelanthropus fossils, the discoverers originally believed the Sahelanthropus fossils had been dumped there by a palaeontologist or geologist, but later dismissed this because the skull was too complete to have been thrown away like that. In 2009, Alain Beauvilain and a co-author argued that Toumaï was purposefully buried in a "grave", because the skull was also found with two parallel rows of large mammal fossils, seemingly forming a box.
Because the "grave" is orientated in a northeast–southwest direction towards Mecca, and all sides of the skull were exposed to the wind and were eroded (meaning the skull had somehow turned), they argued that Toumaï was first buried by nomads who identified the skull as human and collected nearby limb fossils (believing them to belong with the skull) and buried them, and was reburied again sometime after the 11th century by Muslims who reorientated the grave towards Mecca when the fossils were re-exposed. Classification When describing the species in 2002, Brunet et al. noted the combination of features that would be considered archaic or derived for a species on the human line (the subtribe Hominina), the latter being bipedal locomotion and reduced canine teeth, which they interpreted as evidence of its position near the chimpanzee–human last common ancestor (CHLCA). This classification made Sahelanthropus the oldest Hominina, shifting the centre of origin for the clade away from East Africa. They also suggested that Sahelanthropus could be a sister group to the 5.5-to-4.5-million-year-old Ardipithecus and later Hominina. The classification of Sahelanthropus in Hominina, as well as Ardipithecus and the 6-million-year-old Orrorin, was at odds with molecular analyses of the time, which had placed the CHLCA between 6 and 4 million years ago based on a high mutation rate of about 70 mutations per generation. All these genera were anatomically too derived to represent a basal hominin (the group containing chimps and humans), so molecular data would only permit their classification into more ancient and now-extinct lineages. This was overturned in 2012 by geneticists Aylwyn Scally and Richard Durbin, who studied the genomes of children and their parents and found the mutation rate was actually half that, placing the CHLCA anywhere from 14 to 7 million years ago, though most geneticists and palaeoanthropologists use 8 to 7 million years ago. A recent phylogenetic analysis classified Orrorin as a hominin, but placed Sahelanthropus as a stem-hominid outside hominins, though dental metric analysis supports its position as a hominin. A further possibility is that Toumaï is not ancestral to either humans or chimpanzees at all, but rather an early representative of the Gorillini lineage. Brigitte Senut and Martin Pickford, the discoverers of Orrorin tugenensis, suggested that the features of S. tchadensis are consistent with a female proto-gorilla. Even if this claim is upheld the find would lose none of its significance, because at present very few chimpanzee or gorilla ancestors have been found anywhere in Africa. Thus, if S. tchadensis is an ancestral relative of the chimpanzees or gorillas, then it represents the earliest known member of their lineage. S. tchadensis does indicate that the last common ancestor of humans and chimpanzees is unlikely to closely resemble extant chimpanzees, as had been previously supposed by some paleontologists. Additionally, with the significant sexual dimorphism known to have existed in early hominins, the difference between Ardipithecus and Sahelanthropus may not be large enough to warrant a separate genus for the latter. Anatomy Cranium Existing fossils include a relatively small cranium, five pieces of jaw, and some teeth, making up a head that has a mixture of derived and primitive features. 
A virtual reconstruction of the interior of the braincase indicated a cranial capacity of 378 cm³, similar to that of extant chimpanzees and approximately a third the size of modern human brains. The teeth, brow ridges, and facial structure differ markedly from those found in modern humans. Cranial features show a flatter face, U-shaped tooth rows, small canines, an anterior foramen magnum, and heavy brow ridges. The only known skull suffered a large amount of distortion during the time of fossilisation and discovery, as the cranium is dorsoventrally flattened, and the right side is depressed. Locomotion In the original description in 2002, Brunet et al. said it "would not be unreasonable" to speculate that Sahelanthropus was capable of maintaining an upright posture while walking bipedally. Because they had not reported any limb bones or other post-cranial material (anything other than the skull), this was based on the reconstructed original orientation of the foramen magnum (where the skull connects to the spine), and their classification of Sahelanthropus into Hominina based on facial comparisons (one of the diagnostic characteristics of Hominina is bipedalism). This was soon disputed because the orientation of the foramen magnum is not an entirely conclusive piece of evidence in regard to the question of habitual posture, and the features used to classify Sahelanthropus into Hominina are not entirely unique to Hominina. In 2020, the femur was formally described, and the study concluded it was not consistent with habitual bipedalism. But, in 2022, Daver and colleagues suggested that the ulnar and femoral morphologies do show characteristics consistent with habitual bipedalism. In 2023, however, Meyer and colleagues examined its ulna shaft and argued that Sahelanthropus was not an obligate biped, based on a mathematical analysis of its locomotor behavior which indicated that its forelimbs had different functions compared to those of modern humans and hominins, and that it probably walked on its knuckles like modern gorillas and chimpanzees; more examination is therefore required to truly identify its locomotor behavior (i.e. whether it exhibited facultative bipedalism) and its phylogenetic position as a hominin in the evolution of humans. A 2024 study re-examined the 2022 study's postcranial evidence, and concluded that it is not sufficient to determine whether Sahelanthropus was a habitual biped, since none of the features are uniquely consistent with bipedal hominins, being shared instead with non-hominin apes or even non-primates.
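For readers unfamiliar with the beryllium-isotope dating mentioned under Taphonomy, the sketch below applies the standard radioactive-decay relation to an authigenic 10Be/9Be ratio. The half-life constant and the sample ratios are assumptions for illustration only; they are not values taken from the Lebatard et al. study.

```python
import math

BE10_HALF_LIFE_MYR = 1.387  # commonly cited 10Be half-life, assumed here

def be_ratio_age_myr(initial_ratio: float, measured_ratio: float) -> float:
    """Age in Myr from decay of the authigenic 10Be/9Be ratio:
    R(t) = R0 * exp(-ln 2 * t / T_half), so t = (T_half / ln 2) * ln(R0 / R)."""
    return BE10_HALF_LIFE_MYR / math.log(2) * math.log(initial_ratio / measured_ratio)

# Purely illustrative: a ratio that has fallen to ~3% of its initial value
# gives an age of roughly 7 Myr, the order of magnitude reported for TM 266.
print(round(be_ratio_age_myr(1.0, 0.03), 1))
```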
Biology and health sciences
Australopithecines
Biology
61701
https://en.wikipedia.org/wiki/Venn%20diagram
Venn diagram
A Venn diagram is a widely used diagram style that shows the logical relation between sets, popularized by John Venn (1834–1923) in the 1880s. The diagrams are used to teach elementary set theory, and to illustrate simple set relationships in probability, logic, statistics, linguistics and computer science. A Venn diagram uses simple closed curves drawn on a plane to represent sets. Very often, these curves are circles or ellipses. Similar ideas had been proposed before Venn, such as by Christian Weise in 1712 (Nucleus Logicae Weisianae) and Leonhard Euler (Letters to a German Princess) in 1768. The idea was popularised by Venn in Symbolic Logic, Chapter V "Diagrammatic Representation", published in 1881. Details A Venn diagram, also called a set diagram or logic diagram, shows all possible logical relations between a finite collection of different sets. These diagrams depict elements as points in the plane, and sets as regions inside closed curves. A Venn diagram consists of multiple overlapping closed curves, usually circles, each representing a set. The points inside a curve labelled S represent elements of the set S, while points outside the boundary represent elements not in the set S. This lends itself to intuitive visualizations; for example, the set of all elements that are members of both sets S and T, denoted S ∩ T and read "the intersection of S and T", is represented visually by the area of overlap of the regions S and T. In Venn diagrams, the curves are overlapped in every possible way, showing all possible relations between the sets. They are thus a special case of Euler diagrams, which do not necessarily show all relations. Venn diagrams were conceived around 1880 by John Venn. They are used to teach elementary set theory, as well as illustrate simple set relationships in probability, logic, statistics, linguistics, and computer science. A Venn diagram in which the area of each shape is proportional to the number of elements it contains is called an area-proportional (or scaled) Venn diagram. Example This example involves two sets of creatures, represented here as colored circles. The orange circle represents all types of creatures that have two legs. The blue circle represents creatures that can fly. Each separate type of creature can be imagined as a point somewhere in the diagram. Living creatures that have two legs and can fly—for example, parrots—are then in both sets, so they correspond to points in the region where the blue and orange circles overlap. This overlapping region would only contain those elements (in this example, creatures) that are members of both the orange set (two-legged creatures) and the blue set (flying creatures). Humans and penguins are bipedal, and so are in the orange circle, but since they cannot fly, they appear in the left part of the orange circle, where it does not overlap with the blue circle. Mosquitoes can fly, but have six, not two, legs, so the point for mosquitoes is in the part of the blue circle that does not overlap with the orange one. Creatures that are neither two-legged nor able to fly (for example, whales and spiders) would all be represented by points outside both circles. The combined region of the two sets is called their union, denoted by A ∪ B, where A is the orange circle and B the blue. The union in this case contains all living creatures that either are two-legged or can fly (or both). The region included in both A and B, where the two sets overlap, is called the intersection of A and B, denoted by A ∩ B.
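The two-circle example above maps directly onto basic set operations. A minimal sketch using Python's built-in set type follows; the creature names are just illustrative stand-ins for the article's examples.

```python
# The orange circle (two-legged creatures) and the blue circle (creatures that fly).
two_legged = {"parrot", "human", "penguin"}
can_fly = {"parrot", "mosquito"}

print(two_legged & can_fly)   # intersection: {'parrot'}, the overlapping region
print(two_legged | can_fly)   # union: everything inside either circle
print(two_legged - can_fly)   # two-legged but flightless: {'human', 'penguin'}
```

Creatures outside both sets, such as whales and spiders, simply appear in neither collection.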
History Venn diagrams were introduced in 1880 by John Venn in a paper entitled "On the Diagrammatic and Mechanical Representation of Propositions and Reasonings" in the Philosophical Magazine and Journal of Science, about the different ways to represent propositions by diagrams. The use of these types of diagrams in formal logic, according to Frank Ruskey and Mark Weston, predates Venn, but they are "rightly associated" with him, as he "comprehensively surveyed and formalized their usage, and was the first to generalize them". Diagrams of overlapping circles representing unions and intersections were introduced by Catalan philosopher Ramon Llull (c. 1232–1315/1316) in the 13th century, who used them to illustrate combinations of basic principles. Gottfried Wilhelm Leibniz (1646–1716) produced similar diagrams in the 17th century (though much of this work was unpublished), as did Johann Christian Lange in a work from 1712 describing Christian Weise's contributions to logic. Euler diagrams, which are similar to Venn diagrams but don't necessarily contain all possible unions and intersections, were first made prominent by mathematician Leonhard Euler in the 18th century. Venn did not use the term "Venn diagram" and referred to the concept as "Eulerian Circles". He became acquainted with Euler diagrams in 1862 and wrote that Venn diagrams did not occur to him "till much later", while attempting to adapt Euler diagrams to Boolean logic. In the opening sentence of his 1880 article, Venn wrote that Euler diagrams were the only diagrammatic representation of logic to gain "any general acceptance". Venn viewed his diagrams as a pedagogical tool, analogous to verification of physical concepts through experiment. As an example of their applications, he noted that a three-set diagram could show the syllogism: 'All A is some B. No B is any C. Hence, no A is any C.' Charles L. Dodgson (Lewis Carroll) includes "Venn's Method of Diagrams" as well as "Euler's Method of Diagrams" in an "Appendix, Addressed to Teachers" of his book Symbolic Logic (4th edition published in 1896). The term "Venn diagram" was later used by Clarence Irving Lewis in 1918, in his book A Survey of Symbolic Logic. In the 20th century, Venn diagrams were further developed. David Wilson Henderson showed, in 1963, that the existence of an n-Venn diagram with n-fold rotational symmetry implied that n was a prime number. He also showed that such symmetric Venn diagrams exist when n is five or seven. In 2002, Peter Hamburger found symmetric Venn diagrams for n = 11, and in 2003, Griggs, Killian, and Savage showed that symmetric Venn diagrams exist for all other primes. These combined results show that rotationally symmetric Venn diagrams exist if and only if n is a prime number. Venn diagrams and Euler diagrams were incorporated as part of instruction in set theory, as part of the new math movement in the 1960s. Since then, they have also been adopted in the curriculum of other fields such as reading. Popular culture Venn diagrams have been commonly used in memes. At least one politician has been mocked for misusing Venn diagrams. Overview A Venn diagram is constructed with a collection of simple closed curves drawn in a plane. According to Lewis, the "principle of these diagrams is that classes [or sets] be represented by regions in such relation to one another that all the possible logical relations of these classes can be indicated in the same diagram. 
That is, the diagram initially leaves room for any possible relation of the classes, and the actual or given relation, can then be specified by indicating that some particular region is null or is not-null". Venn diagrams normally comprise overlapping circles. The interior of the circle symbolically represents the elements of the set, while the exterior represents elements that are not members of the set. For instance, in a two-set Venn diagram, one circle may represent the group of all wooden objects, while the other circle may represent the set of all tables. The overlapping region, or intersection, would then represent the set of all wooden tables. Shapes other than circles can be employed as shown below by Venn's own higher set diagrams. Venn diagrams do not generally contain information on the relative or absolute sizes (cardinality) of sets. That is, they are schematic diagrams generally not drawn to scale. Venn diagrams are similar to Euler diagrams. However, a Venn diagram for n component sets must contain all 2^n hypothetically possible zones, which correspond to some combination of inclusion or exclusion in each of the component sets. Euler diagrams contain only the actually possible zones in a given context. In Venn diagrams, a shaded zone may represent an empty zone, whereas in an Euler diagram, the corresponding zone is missing from the diagram. For example, if one set represents dairy products and another cheeses, the Venn diagram contains a zone for cheeses that are not dairy products. Assuming that in the context cheese means some type of dairy product, the Euler diagram has the cheese zone entirely contained within the dairy-product zone—there is no zone for (non-existent) non-dairy cheese. This means that as the number of contours increases, Euler diagrams are typically less visually complex than the equivalent Venn diagram, particularly if the number of non-empty intersections is small. The difference between Euler and Venn diagrams can be seen in the following example. Take the three sets: The Euler and the Venn diagram of those sets are: Extensions to higher numbers of sets Venn diagrams typically represent two or three sets, but there are forms that allow for higher numbers. Shown below, four intersecting spheres form the highest order Venn diagram that has the symmetry of a simplex and can be visually represented. The 16 intersections correspond to the vertices of a tesseract (or the cells of a 16-cell, respectively). For higher numbers of sets, some loss of symmetry in the diagrams is unavoidable. Venn was keen to find "symmetrical figures ... elegant in themselves," that represented higher numbers of sets, and he devised an elegant four-set diagram using ellipses (see below). He also gave a construction for Venn diagrams for any number of sets, where each successive curve that delimits a set interleaves with previous curves, starting with the three-circle diagram. Edwards–Venn diagrams Anthony William Fairbank Edwards constructed a series of Venn diagrams for higher numbers of sets by segmenting the surface of a sphere, which became known as Edwards–Venn diagrams. For example, three sets can be easily represented by taking three hemispheres of the sphere at right angles (x = 0, y = 0 and z = 0). A fourth set can be added to the representation, by taking a curve similar to the seam on a tennis ball, which winds up and down around the equator, and so on. 
The resulting sets can then be projected back to a plane, to give cogwheel diagrams with increasing numbers of teeth—as shown here. These diagrams were devised while designing a stained-glass window in memory of Venn. Other diagrams Edwards–Venn diagrams are topologically equivalent to diagrams devised by Branko Grünbaum, which were based around intersecting polygons with increasing numbers of sides. They are also two-dimensional representations of hypercubes. Henry John Stephen Smith devised similar n-set diagrams using sine curves with a series of equations. Charles Lutwidge Dodgson (also known as Lewis Carroll) devised a five-set diagram known as Carroll's square. Joaquin and Boyles, on the other hand, proposed supplemental rules for the standard Venn diagram, in order to account for certain problem cases. For instance, regarding the issue of representing singular statements, they suggest considering the Venn diagram circle as a representation of a set of things, and using first-order logic and set theory to treat categorical statements as statements about sets. Additionally, they propose to treat singular statements as statements about set membership. So, for example, to represent the statement "a is F" in this retooled Venn diagram, a small letter "a" may be placed inside the circle that represents the set F. Related concepts Venn diagrams correspond to truth tables for the propositions x ∈ A, x ∈ B, etc., in the sense that each region of the Venn diagram corresponds to one row of the truth table. This type of diagram is also known as a Johnston diagram. Another way of representing sets is with John F. Randolph's R-diagrams.
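The correspondence between Venn-diagram regions and truth-table rows noted above can be made concrete with a short sketch; the function name is illustrative, and each generated row records, for one region, which of the named sets a point in that region belongs to.

```python
from itertools import product

def venn_regions(set_names):
    """Yield one membership pattern (truth-table row) per region of an
    n-set Venn diagram: True/False for membership in each named set."""
    for row in product([False, True], repeat=len(set_names)):
        yield dict(zip(set_names, row))

regions = list(venn_regions(["A", "B", "C"]))
print(len(regions))   # 2^3 = 8 regions, one per row of the truth table
print(regions[0])     # {'A': False, 'B': False, 'C': False}: outside all three sets
```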
Mathematics
Discrete mathematics
null
61708
https://en.wikipedia.org/wiki/Shrub
Shrub
A shrub or bush is a small-to-medium-sized perennial woody plant. Unlike herbaceous plants, shrubs have persistent woody stems above the ground. Shrubs can be either deciduous or evergreen. They are distinguished from trees by their multiple stems and shorter height, less than tall. Small shrubs, less than 2 m (6.6 ft) tall are sometimes termed as subshrubs. Many botanical groups have species that are shrubs, and others that are trees and herbaceous plants instead. Some define a shrub as less than and a tree as over 6 m. Others use as the cutoff point for classification. Many trees do not reach this mature height because of hostile, less than ideal growing conditions, and resemble shrub-sized plants. Others in such species have the potential to grow taller in ideal conditions. For longevity, most shrubs are classified between perennials and trees. Some only last about five years in good conditions. Others, usually larger and more woody, live beyond 70. On average, they die after eight years. Shrubland is the natural landscape dominated by various shrubs; there are many distinct types around the world, including fynbos, maquis, shrub-steppe, shrub swamp and moorland. In gardens and parks, an area largely dedicated to shrubs (now somewhat less fashionable than a century ago) is called a shrubbery, shrub border or shrub garden. There are many garden cultivars of shrubs, bred for flowering, for example rhododendrons, and sometimes even leaf colour or shape. Compared to trees and herbaceous plants, a small number of shrubs have culinary usage. Apart from the several berry-bearing species (using the culinary rather than botanical definition), few are eaten directly, and they are generally too small for much timber use unlike trees. Those that are used include several perfumed species such as lavender and rose, and a wide range of plants with medicinal uses. Tea and coffee are on the tree-shrub boundary; they are normally harvested from shrub-sized plants, but these would be large enough to become small trees if left to grow instead. Definition Shrubs are perennial woody plants, and therefore have persistent woody stems above ground (compare with succulent stems of herbaceous plants). Usually, shrubs are distinguished from trees by their height and multiple stems. Some shrubs are deciduous (e.g. hawthorn) and others evergreen (e.g. holly). Ancient Greek philosopher Theophrastus divided the plant world into trees, shrubs and herbs. Small, low shrubs, generally less than tall, such as lavender, periwinkle and most small garden varieties of rose, are often termed as subshrubs. Most definitions characterize shrubs as possessing multiple stems with no main trunk below. This is because the stems have branched below ground level. There are exceptions to this, with some shrubs having main trunks, but these tend to be very short and divide into multiple stems close to ground level without a reasonable length beforehand. Many trees can grow in multiple stemmed forms also while being tall enough to be trees, such as oak or ash. Use in gardens and parks An area of cultivated shrubs in a park or a garden is known as a shrubbery. When clipped as topiary, suitable species or varieties of shrubs develop dense foliage and many small leafy branches growing close together. Many shrubs respond well to renewal pruning, in which hard cutting back to a "stool", removes everything but vital parts of the plant, resulting in long new stems known as "canes". 
Other shrubs respond better to selective pruning to dead or unhealthy, or otherwise unattractive parts to reveal their structure and character. Shrubs in common garden practice are generally considered broad-leaved plants, though some smaller conifers such as mountain pine and common juniper are also shrubby in structure. Species that grow into a shrubby habit may be either deciduous or evergreen. Botanical structure In botany and ecology, a shrub is more specifically used to describe the particular physical canopy structure or plant life-form of woody plants which are less than high and usually multiple stems arising at or near the surface of the ground. For example, a descriptive system widely adopted in Australia is based on structural characteristics based on life-form, plus the height and amount of foliage cover of the tallest layer or dominant species. For shrubs that are high, the following structural forms are categorized: dense foliage cover (70–100%) — closed-shrubs mid-dense foliage cover (30–70%) — open-shrubs sparse foliage cover (10–30%) — tall shrubland very sparse foliage cover (<10%) — tall open shrubland For shrubs less than high, the following structural forms are categorized: dense foliage cover (70–100%) — closed-heath or closed low shrubland—(North America) mid-dense foliage cover (30–70%) — open-heath or mid-dense low shrubland—(North America) sparse foliage cover (10–30%) — low shrubland very sparse foliage cover (<10%) — low open shrubland List Those marked with * can also develop into tree form if in ideal conditions. A Abelia (Abelia) Acer (Maple) * Actinidia (Actinidia) Aloe (Aloe) Aralia (Angelica Tree, Hercules' Club) * Arctostaphylos (Bearberry, Manzanita) * Aronia (Chokeberry) Artemisia (Sagebrush) Aucuba (Aucuba) B Berberis (Barberry) Bougainvillea (Bougainvillea) Brugmansia (Angel's trumpet) Buddleja (Butterfly bush) Buxus (Box) * C Calia (Mescalbean) Callicarpa (Beautyberry) * Callistemon (Bottlebrush) * Calluna (Heather) Calycanthus (Sweetshrub) Camellia (Camellia, Tea) * Caragana (Pea-tree) * Carpenteria (Carpenteria) Caryopteris (Blue Spiraea) Cassiope (Moss-heather) Ceanothus (Ceanothus) * Celastrus (Staff vine) * Ceratostigma (Hardy Plumbago) Cercocarpus (Mountain-mahogany) * Chaenomeles (Japanese Quince) Chamaebatiaria (Fernbush) Chamaedaphne (Leatherleaf) Chimonanthus (Wintersweet) Chionanthus (Fringe-tree) * Choisya (Mexican-orange Blossom) * Cistus (Rockrose) Clerodendrum (Clerodendrum) Clethra (Summersweet, Pepperbush) * Clianthus (Glory Pea) Colletia (Colletia) Colutea (Bladder Senna) Comptonia (Sweetfern) Cornus (Dogwood) * Corylopsis (Winter-hazel) * Cotinus (Smoketree) * Cotoneaster (Cotoneaster) * Cowania (Cliffrose) Crataegus (Hawthorn) * Crinodendron (Crinodendron) * Cytisus and allied genera (Broom) * D Daboecia (Heath) Danae (Alexandrian laurel) Daphne (Daphne) Decaisnea (Decaisnea) Dasiphora (Shrubby Cinquefoil) Dendromecon (Tree poppy) Desfontainea (Desfontainea) Deutzia (Deutzia) Diervilla (Bush honeysuckle) Dipelta (Dipelta) Dirca (Leatherwood) Dracaena (Dragon tree) * Drimys (Winter's Bark) * Dryas (Mountain Avens) E Edgeworthia (Paper Bush) * Elaeagnus (Elaeagnus) * Embothrium (Chilean Firebush) * Empetrum (Crowberry) Enkianthus (Pagoda Bush) Ephedra (Ephedra) Epigaea (Trailing Arbutus) Erica (Heath) Eriobotrya (Loquat) * Escallonia (Escallonia) Eucryphia (Eucryphia) * Euonymus (Spindle) * Exochorda (Pearl Bush) F Fabiana (Fabiana) Fallugia (Apache Plume) Fatsia (Fatsia) Forsythia (Forsythia) Fothergilla (Fothergilla) Franklinia 
(Franklinia) * Fremontodendron (Flannelbush) Fuchsia (Fuchsia) * G Garrya (Silk-tassel) * Gaultheria (Salal) Gaylussacia (Huckleberry) Genista (Broom) * Gordonia (Loblolly-bay) * Grevillea (Grevillea) Griselinia (Griselinia) * H Hakea (Hakea) * Halesia (Silverbell) * Halimium (Rockrose) Hamamelis (Witch-hazel) * Hebe (Hebe) Hedera (Ivy) Helianthemum (Rockrose) Hibiscus (Hibiscus) * Hippophae (Sea-buckthorn) * Hoheria (Lacebark) * Holodiscus (Creambush) Hudsonia (Hudsonia) Hydrangea (Hydrangea) Hypericum (Rose of Sharon) Hyssopus (Hyssop) I Ilex (Holly) * Illicium (Star Anise) * Indigofera (Indigo) Itea (Sweetspire) J Jamesia (Cliffbush) Jasminum (Jasmine) Juniperus (Juniper) * K Kalmia (Mountain-laurel) Kerria (Kerria) Kolkwitzia (Beauty-bush) L Lagerstroemia (Crape-myrtle) * Lapageria (Copihue) Lantana (Lantana) Lavandula (Lavender) Lavatera (Tree Mallow) Ledum (Ledum) Leitneria (Corkwood) * Lespedeza (Bush Clover) * Leptospermum (Manuka) * Leucothoe (Doghobble) Leycesteria (Leycesteria) Ligustrum (Privet) * Lindera (Spicebush) * Linnaea (Twinflower) Lonicera (Honeysuckle) Lupinus (Tree Lupin) Lycium (Boxthorn) M Magnolia (Magnolia) Mahonia (Mahonia) Malpighia (Acerola) Menispermum (Moonseed) Menziesia (Menziesia) Mespilus (Medlar) * Microcachrys (Microcachrys) Myrica (Bayberry) * Myricaria (Myricaria) Myrtus and allied genera (Myrtle) * N Neillia (Neillia) Nerium (Oleander) O Olearia (Daisy bush) * Osmanthus (Osmanthus) P Pachysandra (Pachysandra) Paeonia (Tree-peony) Persoonia (Geebungs) Philadelphus (Mock orange) * Phlomis (Jerusalem Sage) Photinia (Photinia) * Physocarpus (Ninebark) * Pieris (Pieris) Pistacia (Pistachio, Mastic) * Pittosporum (Pittosporum) * Plumbago (Leadwort) Polygala (Milkwort) Poncirus * Prunus (Cherry) * Purshia (Antelope Bush) Pyracantha (Firethorn) Q Quassia (Quassia) * Quercus (Oak) * Quillaja (Quillay) Quintinia (Tawheowheo) * R Rhamnus (Buckthorn) * Rhododendron (Rhododendron, Azalea) * Rhus (Sumac) * Ribes (Currant, Gooseberry) Romneya (Tree poppy) Rosa (Rose) Rosmarinus (Rosemary) Rubus (Bramble, Raspberry, Salmonberry, Wineberry) Ruta (Rue) S Sabia * Salix (Willow) * Salvia (Sage) Sambucus (Elder) * Santolina (Lavender Cotton) Sapindus (Soapberry) * Senecio (Senecio) Simmondsia (Jojoba) Skimmia (Skimmia) Smilax (Smilax) Sophora (Kōwhai) * Sorbaria (Sorbaria) Spartium (Spanish Broom) Spiraea (Spiraea) * Staphylea (Bladdernut) * Stephanandra (Stephanandra) Styrax * Symphoricarpos (Snowberry) Syringa (Lilac) * T Tamarix (Tamarix) * Taxus (Yew) * Telopea (Waratah) * Thuja cvs. (Arborvitae) * Thymelaea Thymus (Thyme) Trochodendron * U Ulex (Gorse) Ulmus pumila celer (Turkestan elm – Wonder Hedge) Ungnadia (Mexican Buckeye) V Vaccinium (Bilberry, Blueberry, Cranberry) Verbesina centroboyacana Verbena (Vervain) Viburnum (Viburnum) * Vinca (Periwinkle) Viscum (Mistletoe) W Weigela (Weigela) X Xanthoceras Xanthorhiza (Yellowroot) Xylosma Y Yucca (Yucca, Joshua tree) * Z Zanthoxylum * Zauschneria Zenobia Ziziphus *
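The Australian structural classification summarized above maps a height class and a percentage of foliage cover onto a named structural form. Below is a minimal illustrative sketch of that mapping in Python. The exact height thresholds separating the taller from the lower shrub class are not given in the text above, so a boolean `tall` parameter is left to the caller, and the exact behaviour at the boundary percentages (e.g. exactly 70%) is an arbitrary choice for the sketch.

```python
def shrubland_form(foliage_cover_pct: float, tall: bool) -> str:
    """Return the structural form name for a shrub layer.

    foliage_cover_pct: foliage cover of the tallest or dominant layer, in percent.
    tall: True for the taller shrub class, False for the lower class
          (the height cutoff itself is not specified here and is assumed
          to be decided by the caller).
    """
    if not 0 <= foliage_cover_pct <= 100:
        raise ValueError("foliage cover must be between 0 and 100 percent")

    if foliage_cover_pct >= 70:          # dense (70-100%)
        return "closed-shrubs" if tall else "closed-heath (closed low shrubland)"
    elif foliage_cover_pct >= 30:        # mid-dense (30-70%)
        return "open-shrubs" if tall else "open-heath (mid-dense low shrubland)"
    elif foliage_cover_pct >= 10:        # sparse (10-30%)
        return "tall shrubland" if tall else "low shrubland"
    else:                                # very sparse (<10%)
        return "tall open shrubland" if tall else "low open shrubland"


# Example: a low shrub layer with 20% foliage cover.
print(shrubland_form(20, tall=False))   # -> "low shrubland"
```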
Biology and health sciences
Plant: General
null
61762
https://en.wikipedia.org/wiki/Class%20%28biology%29
Class (biology)
In biological classification, class is a taxonomic rank, as well as a taxonomic unit, a taxon, in that rank. It is a group of related taxonomic orders. Other well-known ranks in descending order of size are life, domain, kingdom, phylum, order, family, genus, and species, with class ranking between phylum and order.

History
The class as a distinct rank of biological classification having its own distinctive name – and not just called a top-level genus (genus summum) – was first introduced by the French botanist Joseph Pitton de Tournefort in the classification of plants that appeared in his Éléments de botanique of 1694. Insofar as a general definition of a class is available, it has historically been conceived as embracing taxa that combine a distinct grade of organization—i.e. a 'level of complexity', measured in terms of how differentiated their organ systems are into distinct regions or sub-organs—with a distinct type of construction, which is to say a particular layout of organ systems. This said, the composition of each class is ultimately determined by the subjective judgment of taxonomists. In the first edition of his Systema Naturae (1735), Carl Linnaeus divided all three of his kingdoms of nature (minerals, plants, and animals) into classes. Only in the animal kingdom are Linnaeus's classes similar to the classes used today; his classes and orders of plants were never intended to represent natural groups, but rather to provide a convenient "artificial key" according to his Systema Sexuale, largely based on the arrangement of flowers. In botany, classes are now rarely discussed. Since the first publication of the APG system in 1998, which proposed a taxonomy of the flowering plants up to the level of orders, many sources have preferred to treat ranks higher than orders as informal clades. Where formal ranks have been assigned, the ranks have been reduced to a very much lower level, e.g. class Equisetopsida for the land plants, with the major divisions within the class assigned to subclasses and superorders. The class was considered the highest level of the taxonomic hierarchy until George Cuvier's embranchements, first called phyla by Ernst Haeckel, were introduced in the early nineteenth century.
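As a small illustration of the rank hierarchy described above, the sketch below encodes the major ranks in descending order and checks where one rank sits relative to another. It is not part of the article; the rank names are exactly those listed in the opening sentence.

```python
# Major taxonomic ranks in descending order of inclusiveness,
# as listed above (class sits between phylum and order).
RANKS = ["life", "domain", "kingdom", "phylum", "class",
         "order", "family", "genus", "species"]

def is_higher_rank(a: str, b: str) -> bool:
    """Return True if rank `a` is more inclusive (higher) than rank `b`."""
    return RANKS.index(a) < RANKS.index(b)

assert is_higher_rank("phylum", "class")   # a class is contained in a phylum
assert is_higher_rank("class", "order")    # an order is contained in a class
```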
Biology and health sciences
Taxonomic rank
Biology
61763
https://en.wikipedia.org/wiki/Order%20%28biology%29
Order (biology)
Order is one of the eight major hierarchical taxonomic ranks in Linnaean taxonomy. It is classified between family and class. In biological classification, the order is a taxonomic rank used in the classification of organisms and recognized by the nomenclature codes. An immediately higher rank, superorder, is sometimes added directly above order, with suborder directly beneath order. An order can also be defined as a group of related families. What does and does not belong to each order is determined by a taxonomist, as is whether a particular order should be recognized at all. Often there is no exact agreement, with different taxonomists each taking a different position. There are no hard rules that a taxonomist needs to follow in describing or recognizing an order. Some taxa are accepted almost universally, while others are recognized only rarely. The name of an order is usually written with a capital letter. For some groups of organisms, their orders may follow consistent naming schemes. Orders of plants, fungi, and algae use the suffix -ales (e.g. Dictyotales). Orders of birds and fishes use the Latin suffix -iformes, meaning 'having the form of' (e.g. Passeriformes), but orders of mammals and invertebrates are not so consistent (e.g. Artiodactyla, Actiniaria, Primates).

Hierarchy of ranks

Zoology
For some clades covered by the International Code of Zoological Nomenclature, several additional classifications are sometimes used, although not all of these are officially recognized. In their 1997 classification of mammals, McKenna and Bell used two extra levels between superorder and order: grandorder and mirorder. Michael Novacek (1986) inserted them at the same position. Michael Benton (2005) inserted them between superorder and magnorder instead. This position was adopted by Systema Naturae 2000 and others.

Botany
In botany, the ranks of subclass and suborder are secondary ranks pre-defined as respectively above and below the rank of order. Any number of further ranks can be used as long as they are clearly defined. The superorder rank is commonly used, with the ending -anae that was initiated by Armen Takhtajan's publications from 1966 onwards.

History
The order as a distinct rank of biological classification having its own distinctive name (and not just called a higher genus) was first introduced by the German botanist Augustus Quirinus Rivinus in his classification of plants that appeared in a series of treatises in the 1690s. Carl Linnaeus was the first to apply it consistently to the division of all three kingdoms of nature (then minerals, plants, and animals) in his Systema Naturae (1735, 1st. Ed.).

Botany
For plants, Linnaeus' orders in the Systema Naturae and the Species Plantarum were strictly artificial, introduced to subdivide the artificial classes into more comprehensible smaller groups. When the word was first consistently used for natural units of plants, in 19th-century works such as the Prodromus Systematis Naturalis Regni Vegetabilis of Augustin Pyramus de Candolle and the Genera Plantarum of Bentham & Hooker, it indicated taxa that are now given the rank of family (see ordo naturalis, 'natural order'). In French botanical publications, from Michel Adanson's Familles des plantes (1763) until the end of the 19th century, the word famille (plural: familles) was used as a French equivalent for this Latin ordo. This equivalence was explicitly stated in the Lois de la nomenclature botanique (1868), the precursor of the currently used International Code of Nomenclature for algae, fungi, and plants.
In the first international Rules of botanical nomenclature from the International Botanical Congress of 1905, the word family (familia) was assigned to the rank indicated by the French famille, while order (ordo) was reserved for a higher rank, for what in the 19th century had often been named a cohort (cohors, plural cohortes). Some of the plant families still retain the names of Linnaean "natural orders" or even the names of pre-Linnaean natural groups recognized by Linnaeus as orders in his natural classification (e.g. Palmae or Labiatae). Such names are known as descriptive family names.

Zoology
In the field of zoology, the Linnaean orders were used more consistently. That is, the orders in the zoology part of the Systema Naturae refer to natural groups. Some of his ordinal names are still in use, e.g. Lepidoptera (moths and butterflies) and Diptera (flies, mosquitoes, midges, and gnats).

Virology
In virology, the International Committee on Taxonomy of Viruses' virus classification includes fifteen taxonomic ranks to be applied for viruses, viroids and satellite nucleic acids: realm, subrealm, kingdom, subkingdom, phylum, subphylum, class, subclass, order, suborder, family, subfamily, genus, subgenus, and species. There are currently fourteen viral orders, each ending in the suffix -virales.
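The naming conventions above (plant, fungal, and algal orders ending in -ales, bird and fish orders in -iformes, viral orders in -virales) lend themselves to a trivial suffix check. The sketch below is an illustrative Python helper, not a taxonomic tool; as noted above, mammal and invertebrate order names follow no consistent pattern, so they fall through to the default.

```python
def guess_group_from_order_name(order_name: str) -> str:
    """Guess the broad group an order name belongs to from its suffix.

    Reflects the naming conventions described above; orders of mammals and
    invertebrates are not consistently named, so anything unmatched is
    reported as indeterminate.
    """
    name = order_name.strip().lower()
    if name.endswith("virales"):
        return "virus order"
    if name.endswith("formes"):
        return "bird or fish order"
    if name.endswith("ales"):
        return "plant, fungus, or alga order"
    return "indeterminate (e.g. many mammal and invertebrate orders)"

print(guess_group_from_order_name("Dictyotales"))    # plant, fungus, or alga order
print(guess_group_from_order_name("Passeriformes"))  # bird or fish order
print(guess_group_from_order_name("Primates"))       # indeterminate ...
```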
Biology and health sciences
Taxonomic rank
Biology
61836
https://en.wikipedia.org/wiki/Treponema%20pallidum
Treponema pallidum
Treponema pallidum, formerly known as Spirochaeta pallida, is a microaerophilic, gram-negative, spirochaete bacterium with subspecies that cause the diseases syphilis, bejel (also known as endemic syphilis), and yaws. It is known to be transmitted only among humans and baboons. T. pallidum can enter the host through mucosal membranes or open lesions in the skin and is primarily spread through sexual contact. It is a helically coiled microorganism usually 6–15 μm long and 0.1–0.2 μm wide. T. pallidum's lack of both a tricarboxylic acid cycle and processes for oxidative phosphorylation results in minimal metabolic activity. As a chemoorganoheterotroph, Treponema pallidum is an obligate parasite that acquires its glucose carbon source from its host. Glucose can be used not only as a primary carbon source but also in glycolytic mechanisms to generate ATP needed to power the bacterium given its minimal genome. The treponemes have cytoplasmic and outer membranes. Under light microscopy, treponemes are visible only with dark-field illumination. T. pallidum consists of three subspecies, T. p. pallidum, T. p. endemicum, and T. p. pertenue, each of which has a distinct related disorder. The ability of T. pallidum to avoid host immune defenses has allowed for stealth pathogenicity. The unique outer membrane structure and minimal expression of surface proteins of T. pallidum have made vaccine development difficult. Treponema pallidum can be treated with high efficacy by antibiotics that inhibit bacterial cell wall synthesis, such as the beta-lactam antimicrobial penicillin-G.

Subspecies
Three subspecies of T. pallidum are known:
Treponema pallidum pallidum, which causes syphilis
T. p. endemicum, which causes bejel or endemic syphilis
T. p. pertenue, which causes yaws
The three subspecies causing yaws, bejel, and syphilis are morphologically and serologically indistinguishable. The three subspecies can be distinguished by genetics, using restriction fragment length polymorphism (RFLP), which utilizes techniques such as PCR, restriction digest and gel electrophoresis. Genes TprC, TprI, and the 5' flanking region of tpp15 can be used to differentiate between the three subspecies based on DNA fragment lengths and location of bands in gel electrophoresis. These bacteria were originally classified as members of separate species, but DNA hybridization analysis indicates they are members of the same species. Treponema carateum, the cause of pinta, remains a separate species because no isolate is available for DNA analysis. Disease transmission by the subspecies T. p. endemicum and T. p. pertenue is considered non-venereal. T. p. pallidum is the most invasive pathogenic subspecies, while T. carateum is the least invasive of the species. T. p. endemicum and T. p. pertenue are intermediately invasive.

Laboratory identification
Treponema pallidum was first microscopically identified in syphilitic chancres by Fritz Schaudinn and Erich Hoffmann at the Charité in Berlin in 1905. Historically, this bacterium was identified in the clinical laboratory through visualization in dark-field microscopy. This bacterium can be detected with special stains, such as the Dieterle stain. T. pallidum is also detected by serology, including nontreponemal VDRL, rapid plasma reagin, treponemal antibody tests (FTA-ABS), T. pallidum immobilization reaction, and syphilis TPHA test.
Microbiology

Physiology
Treponema pallidum is a helically shaped bacterium with high motility consisting of an outer membrane, peptidoglycan layer, inner membrane, protoplasmic cylinder, and periplasmic space. It is often described as gram-negative, but its outer membrane lacks lipopolysaccharide, which is found in the outer membrane of other gram-negative bacteria. It has an endoflagellum (periplasmic flagellum) consisting of four main polypeptides, a core structure, and a sheath. The flagellum is located within the periplasmic space and wraps around the protoplasmic cylinder. The peptidoglycan layer interacts with the endoflagellum, which may aid in motility. T. pallidum's outer membrane has the most contact with host cells and contains few transmembrane proteins, limiting antigenicity, while its cytoplasmic membrane is covered in lipoproteins. The main function of the outer membrane's treponemal ligands is attachment to host cells, with functional and antigenic relatedness between ligands. The genus Treponema has ribbons of cytoskeletal cytoplasmic filaments that run the length of the cell just underneath the cytoplasmic membrane.

Outer membrane
The outer membrane (OM) of T. pallidum has several features that have made it historically difficult to research. These include its low protein content, its fragility, and the limited relatedness of its gene sequences to those of other gram-negative outer membranes. Progress has been made using genomic sequencing and advanced computational models. The treponemal outer membrane proteins are key factors in the bacterium's pathogenesis, persistence, and immune evasion strategies. The relatively low protein content prevents antigen recognition by the immune system, and the proteins that do exist protrude out of the OM, enabling its interaction with the host. Treponema's reputation as a "stealth pathogen" is primarily due to this unique OM structure, which serves to evade immune detection.

TP0126
The TP0126 protein has been linked to the outer membrane protein (OMP) family. This protein sits in the outer membrane like a porin, which is supported by circular dichroism studies of recombinant TP0126, and is thought to increase virulence. Researchers have classified the TP0126 protein in this class due to the homology between the protein and the porins of the OMPs. This protein is encoded by the tp0126 gene, which is conserved over all strains of T. pallidum.

TP0326
TP0326 is an ortholog of the β-barrel assembly machine BamA. The BamA apparatus inserts newly synthesized and exported outer membrane proteins into the outer membrane.

TP0453
TP0453 is a 287 amino acid protein associated with the inner leaflet of the microbe's outer membrane. This protein lacks the extensive beta sheet structure that is characteristic of other membrane proteins, and does not traverse the outer membrane. This protein's function has been hypothesized to involve control of nutrient uptake.

TP0624
Outer Membrane Protein A (OmpA) domain-containing proteins are necessary for maintaining structural integrity in gram-negative bacteria. These domains contain peptidoglycan binding sites which create a "structural bridge between the peptidoglycan layer and the outer membrane." The protein TP0624 found in T. pallidum has been proposed to facilitate this structural link, as well as interactions between outer membrane proteins and corresponding domains on the thin peptidoglycan layer.

TP0751
The TP0751 protein is unique to T.
pallidum, and it is thought to aid in attachment to the host's extracellular matrix. Since this protein aids in attachment to the host, it sits on the surface of the cell; in 2005, it was discovered that TP0751 attaches to the laminin component of the host's extracellular matrix. It is therefore thought that the TP0751 protein plays a key role in dissemination within the host.

TP0965
TP0965 is a protein critical for membrane fusion in T. pallidum and is located in the periplasm. TP0965 causes endothelial barrier dysfunction, a hallmark of late-stage pathogenesis of syphilis. It does this by reducing the expression of tight junction proteins, which in turn increases the expression of adhesion molecules and endothelial cell permeability, eventually leading to disruption of the endothelial layer.

Treponema repeat family of proteins
The Treponema repeat family of proteins (Tpr) are proteins expressed during the infection process. Tprs are formed by a conserved N-terminal domain, an amino-terminal stretch of about 50 amino acids, a central variable region, and a conserved C-terminal domain. The many different types of Tpr include TprA, TprB, TprC, TprD, and TprE, but variability of TprK is the most relevant due to the immune escape characteristics it allows. Antigen variation in TprK is regulated by gene conversion. In this way, fragments from the seven variable regions (V1–V7) present in TprK and from the 53 donor sites of TprD can be combined, by nonreciprocal recombination, to produce newly structured sequences. TprK antigen variation can help T. pallidum to evade a strong host immune reaction and can also allow the reinfection of individuals. This is possible because the newly structured proteins can avoid antibody-specific recognition. It is also suspected that the genes that encode the TprK protein are essential to pathogenesis during syphilis infection. To introduce more phenotypic diversity, T. pallidum may undergo phase variation. This process mainly happens in TprF, TprI, TprG, TprJ, and TprL, and it consists of a reversible expansion or contraction of polymeric repeats. These size variations can help the bacterium to quickly adapt to its microenvironment, evade the immune response, or even increase affinity for its host.

Culture
In the century since its initial discovery, culturing the bacterium in vitro has been difficult. Without the ability to grow and maintain the bacterium in a laboratory setting, discoveries regarding its metabolism and antimicrobial sensitivity were greatly impaired. However, successful long-term cultivation of T. pallidum in vitro was reported in 2017. This was achieved using Sf1Ep epithelial cells from rabbits, which proved necessary for the continued multiplication and survival of the culture system. The medium TpCM-2 was used, an alteration of simpler media that previously yielded only a few weeks of culture growth. This success was the result of replacing minimal essential medium (MEM) with CMRL 1066, a complex tissue culture medium. With further development, new discoveries about T. pallidum's requirements for growth and gene expression may follow and, in turn, yield research beneficial to the treatment and prevention of syphilis outside of a host. However, continued efforts to grow T. pallidum in axenic culture have been unsuccessful, meaning that it does not fully satisfy Koch's postulates.
The challenge likely stems from the organism's strong adaptation to residing in mammalian tissue, resulting in a reduced genome and significant impairments in metabolic and biosynthetic functions.

Genome
The genome of T. pallidum was first sequenced in 1998. The bacterium itself is characterized by its helical, corkscrew-like shape. T. pallidum is not obtainable in pure culture, meaning that this sequencing played an important role in filling gaps in understanding of the microbe's functions. The DNA sequences of T. pallidum species are more than 99.7% identical, and PCR-based assays are effective at differentiating these species. About 92.9% of the DNA was determined to be open reading frames, 55% of which had predicted biological functions. T. pallidum was found to rely on its host for many molecules typically provided by biosynthetic pathways, and it is missing genes responsible for encoding key enzymes in oxidative phosphorylation and the tricarboxylic acid cycle. The reduced genome of the T. pallidum group is likely the result of various adaptations: the bacterium no longer has the ability to synthesize fatty acids, nucleic acids, and amino acids, and instead relies on its mammalian hosts for these materials. The recent sequencing of the genomes of several spirochetes permits a thorough analysis of the similarities and differences within this bacterial phylum and within the species. The chromosomes of the T. pallidum species are small, about 1.14 Mbp. It has one of the smallest bacterial genomes and has limited metabolic capabilities, reflecting its adaptation through genome reduction to the rich environment of mammalian tissue. It conserves almost 99.8% of its small genome, and uses its constantly mutating protein TprK to avoid immune response from its host. To avoid antibodies attacking it, the cell has few proteins exposed on the outer membrane sheath. Its chromosome is about 1000 kilobase pairs and is circular with a 52.8% G + C average. Sequencing has revealed that a bundle of 12 proteins and some putative hemolysins are potential virulence factors of T. pallidum. These virulence factors are thought to contribute to the bacterium's ability to evade the immune system and cause disease.

Clinical significance
The clinical features of syphilis, yaws, and bejel occur in multiple stages that affect the skin. The skin lesions observed in the early stage last for weeks or months. The skin lesions are highly infectious, and the spirochetes in the lesions are transmitted by direct contact. The lesions regress as the immune response develops against T. pallidum. The latent stage that results can last a lifetime in many cases. In a few cases, the disease exits latency and enters a tertiary phase, in which destructive lesions of skin, bone, and cartilage ensue. Unlike yaws and bejel, syphilis in its tertiary stage often affects the heart, eyes, and nervous system as well.

Syphilis
Treponema pallidum pallidum is a motile spirochete that is generally acquired by close sexual contact, entering the host via breaches in squamous or columnar epithelium. The organism can also be transmitted to a fetus by transplacental passage during the later stages of pregnancy, giving rise to congenital syphilis. The helical structure of T. p. pallidum allows it to move in a corkscrew motion through mucous membranes or enter minuscule breaks in the skin. In women, the initial lesion is usually on the labia, the walls of the vagina, or the cervix; in men, it is on the shaft or glans of the penis.
It gains access to the host's blood and lymph systems through tissue and mucous membranes. In more severe cases, it may spread within the host by infecting the skeletal bones and central nervous system. The incubation period for a T. p. pallidum infection is usually around 21 days, but can range from 10 to 90 days.

Yaws
The causative agent of yaws is Treponema pallidum pertenue, which is transmissible by direct physical contact between infected people. Yaws is not sexually transmitted, and occurs in tropical, humid environments of Africa, the Pacific Islands, Asia and South America. Unlike syphilis, which displays vertical transmission, yaws cannot be spread from mother to offspring; one strain of T. p. pertenue studied was not vertically transmissible in a guinea pig model. Yaws appears as skin lesions, usually papules, commonly on the lower extremities, but present in other areas such as the arms, trunk and hands. Three stages of yaws disease have been documented: primary yaws, which presents as inflamed sores on the lower body; secondary yaws, which presents as a variety of skin abnormalities along with bone inflammation; and tertiary yaws, also referred to as latent yaws, which occurs when T. p. pertenue is serologically detected in the host but no clinical signs are displayed until relapse, which often occurs years later. Yaws is treated with antibiotics such as azithromycin and benzathine penicillin-G.

Bejel
Bejel is caused by Treponema pallidum endemicum and is a disease that is endemic in hot, dry climates. The transmission path has not been fully mapped; however, infections are thought to be transmitted via direct contact with lesion secretions or fomites rather than by sexual transmission. Bejel typically causes skin lesions, which first appear as small ulcers in the mouth, and secondary lesions that form in the oropharynx or around the nipples of nursing women. Bejel can be treated with benzathine penicillin-G.

Treatment
During the early 1940s, rabbit models in combination with the drug penicillin allowed for long-term drug treatment. These experiments established the groundwork that modern scientists use for syphilis therapy. Penicillin can inhibit T. pallidum in 6–8 hours, though the cells still remain in lymph nodes and regenerate. Penicillin is not the only drug that can be used to inhibit T. pallidum; any β-lactam antibiotic or macrolide can be used. The T. pallidum strain 14 has built-in resistance to some macrolides, including erythromycin and azithromycin. Resistance to macrolides in T. pallidum strain 14 is believed to derive from a single point mutation that increased the organism's viability. Many of the syphilis treatment therapies lead only to bacteriostatic results, unless larger concentrations of penicillin are used for bactericidal effects. Overall, penicillin is the antibiotic most recommended by the Centers for Disease Control and Prevention, as it shows the best results with prolonged use. It can inhibit and may even kill T. pallidum at low to high doses, with each increase in concentration being more effective. The Guideline Development Group has recommended the development of a new treatment: a short-course treatment that is administered orally and can cross the placental barrier in pregnant women.

Vaccine
No vaccine for syphilis is available as of 2024, but doxycycline postexposure prophylaxis can be used to prevent infections. The outer membrane of T. pallidum has too few surface proteins for an antibody to be effective.
Efforts to develop a safe and effective syphilis vaccine have been hindered by uncertainty about the relative importance of humoral and cellular mechanisms to protective immunity, and because T. pallidum outer membrane proteins have not been unambiguously identified. In contrast, some of the known antigens are intracellular, and antibodies against them are ineffective at clearing the infection. Over the last century, several prototype vaccines have been developed; while none provided protection from infection, some prevented the bacteria from disseminating to distal organs and promoted accelerated healing.
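The Genome section above quotes summary statistics such as a 52.8% average G + C content for the roughly 1,000-kilobase chromosome. As a simple illustration of what that figure means, here is a minimal Python sketch that computes G + C content from a DNA sequence string; the short example sequence is made up purely for demonstration and is not real T. pallidum sequence.

```python
def gc_content(sequence: str) -> float:
    """Return the G + C content of a DNA sequence as a percentage."""
    seq = sequence.upper()
    if not seq:
        raise ValueError("empty sequence")
    gc = sum(1 for base in seq if base in "GC")
    return 100.0 * gc / len(seq)

# Toy example: 4 of 10 bases are G or C.
print(gc_content("ATGCGCATTA"))  # -> 40.0
```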
Biology and health sciences
Spirochaetes
Plants
61837
https://en.wikipedia.org/wiki/Neisseria%20gonorrhoeae
Neisseria gonorrhoeae
Neisseria gonorrhoeae, also known as gonococcus (singular) or gonococci (plural), is a species of Gram-negative diplococci bacteria first isolated by Albert Neisser in 1879. An obligate human pathogen, it primarily colonizes the mucosal lining of the urogenital tract; however, it is also capable of adhering to the mucosa of the nose, pharynx, rectum, and conjunctiva. It causes the sexually transmitted genitourinary infection gonorrhea as well as other forms of gonococcal disease including disseminated gonococcemia, septic arthritis, and gonococcal ophthalmia neonatorum. N. gonorrhoeae is oxidase positive and a microaerophile that is capable of surviving phagocytosis and growing inside neutrophils. Culturing it requires carbon dioxide supplementation and enriched agar (chocolate agar) with various antibiotics (Thayer–Martin). It exhibits antigenic variation through genetic recombination of its pili and surface proteins that interact with the immune system. Sexual transmission is through vaginal, anal, or oral sex. Sexual transmission may be prevented through the use of barrier protection. Perinatal transmission may occur during childbirth, though it is preventable through antibiotic treatment of the mother before birth and application of antibiotic eye gel on the eyes of the newborn. Gonococcal infections do not result in protective immunity; therefore, individuals may be infected multiple times. Reinfection is possible due to N. gonorrhoeae's ability to evade the immune system by varying its surface proteins. Asymptomatic infection is common in both males and females. Untreated infection may spread to the rest of the body (disseminated gonorrhea infection), especially the joints (septic arthritis). Untreated infection in women may cause pelvic inflammatory disease and possible infertility due to the resulting scarring. Gonorrhoea is diagnosed through cultures, Gram staining, or nucleic acid tests (i.e. polymerase chain reaction) of urine samples, urethral swabs, or cervical swabs. Chlamydia co-testing and testing for other STIs is recommended due to high rates of co-infection. Antibiotic resistance in N. gonorrhoeae is a growing public health concern, especially given its propensity to develop resistance easily. This ability of N. gonorrhoeae to rapidly adapt to novel antimicrobial treatments has been seen several times since the 1930s, making numerous treatment plans obsolete. Some strains have exhibited resistance to the current ceftriaxone treatments.

Microbiology
Neisseria species are fastidious, Gram-negative cocci (though some species are rod-shaped and occur in pairs or short chains) that require nutrient supplementation to grow in laboratory cultures. They are facultative intracellular pathogens, meaning they are able to persist and colonize within host cells but can also multiply outside the host cellular environment. They typically appear in pairs (diplococci), resembling the shape of coffee beans. Members of this genus do not form endospores and are nonmotile, with the exception of pathogenic species, which are capable of moving using twitching motility; most are also obligate aerobes. Of the 17 species that colonize humans, only two are pathogenic: N. gonorrhoeae, which causes gonorrhea, and N. meningitidis, a leading cause of bacterial meningitis.

Culture and identification
N. gonorrhoeae can be isolated on Thayer–Martin (or VPN) agar in an atmosphere enriched with 3–7% carbon dioxide.
Thayer–Martin agar is a chocolate agar plate (heated blood agar) containing nutrients and antimicrobials (vancomycin, colistin, nystatin, and trimethoprim). This agar preparation facilitates the growth of Neisseria species while inhibiting the growth of contaminating bacteria and fungi. Martin–Lewis agar and New York City agar are other types of selective chocolate agar commonly used for Neisseria growth. N. gonorrhoeae is oxidase positive (possessing cytochrome c oxidase) and catalase positive (able to convert hydrogen peroxide to oxygen). When incubated with the carbohydrates lactose, maltose, sucrose, and glucose, N. gonorrhoeae will oxidize only the glucose.

Metabolism

Carbon
Unlike other Neisseria species that can also metabolize maltose, N. gonorrhoeae is capable of using only glucose, pyruvate, and lactate as central carbon sources. Glucose is catabolized via both the Entner–Doudoroff (ED) and pentose phosphate (PP) pathways, with the ED pathway being the primary oxidative route. Use of these pathways is necessary as N. gonorrhoeae is incapable of glucose catabolism via the Embden–Meyerhof–Parnas (EMP) pathway due to its lack of the phosphofructokinase (PFK) gene; however, the fructose 1,6-bisphosphatase enzyme is present to allow for gluconeogenesis to occur. Glucose is first metabolized through the ED pathway to produce pyruvate and glyceraldehyde 3-phosphate, the latter of which can then be further metabolized by enzymes of the EMP pathway to yield another molecule of pyruvate. The resultant pyruvate molecules are then converted into acetyl-CoA, which can then be incorporated as a substrate for the citric acid cycle (CAC) to yield high-energy electron carriers that will be used by the electron transport chain (ETC) for ATP production; however, the CAC is largely used for generating biosynthetic precursors rather than for catabolic purposes. This is due in part to inhibited expression of several CAC enzymes in the presence of glucose, pyruvate, or lactate. These enzymes, namely citrate synthase, aconitase, and isocitrate dehydrogenase, are needed for the incorporation of acetate. Instead, a partial CAC has been observed, where α-ketoglutarate is formed by glutamate dehydrogenase or transamination of oxaloacetate and glutamate by aspartate aminotransferase (yielding aspartate and α-ketoglutarate). The CAC then continues from there to yield oxaloacetate, which is an important precursor molecule for a number of biosynthetic pathways. Another differentiating aspect of the gonococcal CAC is the lack of malate dehydrogenase, which is instead replaced by a membrane-bound malate:quinone oxidoreductase that operates independently of NAD+ by directly transferring electrons to ubiquinone. Conversely, acetyl-CoA that does not enter the CAC can enter the phosphotransacetylase–acetate kinase (PTA-AckA) pathway, where it is converted into acetate by phosphorylation (to form acetyl phosphate and release coenzyme A) and dephosphorylation to form ATP. While this acetate can enter the CAC for further oxidation, this does not occur so long as other carbon sources such as glucose or lactate are present, in which case it is excreted from the cell or incorporated for lipid synthesis. N. gonorrhoeae lacks the glyoxylate shunt, preventing it from using acetate to form CAC intermediates to replenish the cycle. A significant portion of the glyceraldehyde 3-phosphate formed in gonococci is recycled via the gluconeogenic pathway to reform glucose 6-phosphate, as well as the intermediate fructose 6-phosphate.
Both of these can then be used for pentose synthesis in the PP pathway via the oxidative and non-oxidative pathways, respectively, for subsequent nucleotide formation as well as energy production. N. gonorrhoeae, like other pathogenic members of the genus Neisseria, is capnophilic, meaning it requires higher-than-normal concentrations of carbon dioxide (CO2) to grow, either in the form of CO2 or bicarbonate (HCO3−), depending on the bacterial strain. This requirement must be met exogenously during the lag and stationary growth phases, though it appears to be met through high metabolic CO2 production in the exponential phase. Assimilation of this CO2 in Neisseria species is done by carbonic anhydrase and phosphoenolpyruvate enzymes in the periplasmic space and the cytoplasm, respectively. Lactate catabolism is also of particular importance for gonococci, both for pathogenicity and for growth. External lactate is transported into the cell via lactate permease (LctP). The N. gonorrhoeae genome encodes three lactate dehydrogenase (LDH) enzymes that allow for metabolism of both L-lactate and D-lactate: a cytoplasmic NAD+-dependent D-lactate dehydrogenase (LdhA) and two membrane-bound LDHs, one specific to L-lactate (LldD) and the other specific to D-lactate (LdhD). The membrane-bound LDHs have been determined to be flavoprotein-containing respiratory enzymes that directly oxidize lactate to reduce ubiquinone. While these enzymes do not directly pump protons (H+ ions) into the periplasmic space, it is proposed that the reduction of ubiquinone by these enzymes is capable of feeding into the larger ETC.

Electron transport chain and oxidative phosphorylation
As an obligate human pathogen and a facultative anaerobic capnophile, Neisseria gonorrhoeae typically colonizes mucosal surfaces in microaerobic environments, such as those in the genitourinary tract. Growth in areas where oxygen concentrations are limited requires a terminal oxidase with a high affinity for oxygen; in gonococci, oxygen reduction is performed by a ccb3-type cytochrome oxidase. In addition to aerobic respiration, gonococci can also perform anaerobic respiration via the reduction of nitrite (NO2) to nitric oxide (NO) as well as reduction of NO to nitrous oxide (N2O). There are several enzymes that contribute electrons to the intramembranous ubiquinone pool, the first step in the ETC. These include the membrane-bound LDHs (LldD and LdhD), NADH:ubiquinone oxidoreductase (also known as NADH dehydrogenase; Nuo complex I), Na+-translocating NADH dehydrogenase (Nqr), succinate dehydrogenase (SDH), and the membrane-bound NAD+-independent malate:quinone oxidoreductase (MqR). Following the initial transfer of electrons to ubiquinone, proposed schematics for the organization of the gonococcal ETC suggest the electrons can be further passed down the chain by reduction of the cytochrome bc1 complex or can be directly transferred to NO as a terminal electron acceptor by NO reductase (NorB). In the case of the former, electrons can then be passed from the bc1 complex along two alternative pathways via the reduction of either cytochrome c4 or c5. Both of these cytochromes transfer electrons to the terminal cytochrome ccb3 oxidase for the reduction of O2 to form H2O under aerobic conditions.
Gonococci also reduce NO2 via an inducible outer membrane-attached copper-containing nitrite reductase (AniA, a member of the NirK protein family) under anaerobic conditions, though this process has also been noted in microaerobic conditions as a means of supplementing growth. This leads to the formation of NO that is subsequently reduced to N2O in a partial denitrification pathway. The ccb3 oxidase of N. gonorrhoeae, unlike that of other members of the Neisseria genus, is a tri-heme protein that can transfer electrons not only to O2 (conserved across Neisseria species) but also to AniA for NO2 reduction. This is in addition to the typical process of receiving electrons transferred from cytochrome c5. The general purpose of the ETC is the formation of the electrochemical gradient of hydrogen ions (H+ or protons), resulting from concentration differences across the plasma membrane, needed to power ATP production in a process known as oxidative phosphorylation. In gonococci, movement of protons into the periplasmic space is accomplished by the Nuo complex I, the cytochrome bc1 complex, and cytochrome ccb3. Subsequently, ATP synthesis is performed by the F1F0 ATP synthase, a two-part protein complex present in gonococci as well as numerous other species across phylogenetic domains. This complex couples proton translocation back into the cytoplasm along its gradient with mechanical rotation to generate ATP.

Iron
In the human host, most iron is sequestered by carrier proteins such as transferrin and lactoferrin. To acquire the necessary iron, gonococci produce TonB-dependent transporters (TDTs) on the surface of their outer membrane that are able to directly extract iron, along with other metals, from their respective carrier proteins. Some of these include transferrin binding proteins A (TbpA) and B (TbpB), lactoferrin-binding proteins A (LbpA) and B (LbpB), and hemoglobin/hemoglobin-haptoglobin binding proteins HpuB and HpuA. In addition to these proteins, gonococci are also capable of using siderophores, compounds capable of chelating iron in the environment, that are produced by other bacteria; however, gonococcal cells are incapable of synthesizing siderophores themselves. These xenosiderophores are taken up by the TDT FetA through the outer membrane and then brought into the cell by the fetBCDEF transporter system. Along with the sequestration defence, which can be further upregulated by host inflammation, humans also produce siderocalins that are able to chelate siderophores as a further method of inhibiting pathogenic bacterial growth. These are sometimes ineffective against N. gonorrhoeae, which is able to colonize intracellularly, particularly in phagocytic cells such as macrophages and neutrophils.
Increases in host intracellular iron also downregulate some of the intracellular pathogen-killing mechanisms; coincidentally, pathogenic Neisseria are able to alter several host cell mechanisms in ways that ultimately allow the pathogen to take most of the available iron away from the host immune cell.

Surface molecules
On its surface, N. gonorrhoeae bears hair-like pili, surface proteins with various functions, and sugar chains called lipooligosaccharide. The pili mediate adherence, movement, and DNA exchange. The opacity-associated (Opa) proteins interact with the immune system, as do the porins. Lipooligosaccharide is an endotoxin that provokes an immune response. All of these are antigenic and exhibit antigenic variation. The pili, Opa proteins, porins, and even the lipooligosaccharide have mechanisms to inhibit the immune response, making asymptomatic infection possible.

Opa proteins
Phase-variable opacity-associated (Opa) adhesin proteins are used by N. gonorrhoeae as part of evading the immune response of the host. At least 12 Opa proteins are known, and the many variations of surface proteins make recognizing N. gonorrhoeae and mounting a defense by immune cells more difficult. Opa proteins are in the outer membrane and facilitate a response when the bacterium interacts with a variety of host cells. These proteins bind to various epithelial cells and allow N. gonorrhoeae to prolong infection as well as increase invasion of other host cells.

Type IV pili
Dynamic polymeric protein filaments called type IV pili allow N. gonorrhoeae to carry out many processes, including adhesion to surfaces, transformation competence, twitching motility, and immune evasion. To enter the host, the bacterium uses the pili to adhere to and penetrate mucosal surfaces. The pili are a pivotal virulence factor for N. gonorrhoeae; without them, the bacterium is unable to promote colonization. For motility, individual bacteria use their pili in a manner that resembles a grappling hook: first, they are extended from the cell surface and attach to a substrate. Subsequent pilus retraction drags the cell forward. The resulting movement is referred to as twitching motility. N. gonorrhoeae is able to pull 100,000 times its own weight, and the pili used to do so are amongst the strongest biological motors known to date, exerting one nanonewton. The PilF and PilT ATPase proteins are responsible for powering the extension and retraction of the type IV pilus, respectively. The adhesive functions of the gonococcal pilus play a role in microcolony aggregation and biofilm formation. The pili also help the bacterium avoid immune responses from the cell it is invading through antigenic variation of the type IV pili. The sequence encoding the main pilus filament is very frequently replaced with variable DNA sequences. By doing this rapidly, the bacteria are able to create a diversity of pili on their surface and evade the host cell's immune response.

Lipooligosaccharide
Lipooligosaccharide is a low-weight version of the lipopolysaccharide present on the surfaces of most other Gram-negative bacteria. It is a sugar (saccharide) side chain attached to lipid A (thus "lipo-") in the outer membrane coating the cell wall of the bacterium. The root "oligo" refers to the fact that it is a few sugars shorter than the typical lipopolysaccharide. As an endotoxin, it provokes inflammation. The shedding of lipooligosaccharide by the bacteria is sometimes responsible for issues associated with pelvic inflammatory disease.
Although it functions primarily as an endotoxin, lipooligosaccharide may disguise itself with host sialic acid and block initiation of the complement cascade.

Antigenic variation
N. gonorrhoeae evades the immune system through a process called antigenic variation. This process allows N. gonorrhoeae to recombine its genes and alter the antigenic determinants that adorn its surface, such as the type IV pili. Simply stated, the chemical composition of these molecules is changed because of changes at the genetic level. N. gonorrhoeae is able to vary the composition of its pili and lipooligosaccharide. Of these, the pili exhibit the most antigenic variation due to chromosomal rearrangement. The pilS gene is an example of this ability to rearrange, as its combination with the pilE gene is estimated to produce over 100 variants of the PilE protein. These changes allow for adjustment to local environmental differences at the site of infection and evasion of recognition by targeted antibodies, and they hinder the development of an effective vaccine. In addition to gene rearrangement, it is also naturally competent, meaning it can acquire extracellular DNA from the environment via its type IV pilus, specifically proteins PilQ and PilT. These processes allow N. gonorrhoeae to acquire and spread new genes, disguise itself with different surface proteins, and prevent the development of immunological memory – an ability which has contributed to antibiotic resistance and impeded vaccine development.

Phase variation
Phase variation is similar to antigenic variation, but instead of changes at the genetic level altering the composition of molecules, these genetic changes result in the activation or deactivation of a gene. Phase variation most often arises from a frameshift in the expressed gene. The Opa proteins of N. gonorrhoeae rely strictly on phase variation. Every time the bacteria replicate, they may switch multiple Opa proteins on or off through slipped-strand mispairing. That is, the bacteria introduce frameshift mutations that bring genes in or out of frame. The result is that different Opa genes are translated every time. Pili undergo antigenic variation, but also phase variation. Frameshifts occur in both the pilE and pilC genes, effectively turning off the expression of pili in situations when they are not needed, such as during intracellular colonization as opposed to extracellular mucosal cell surface adhesion.

Survival of gonococci
After gonococci invade and transcytose the host epithelial cells, they land in the submucosa, where neutrophils promptly consume them. The pili and Opa proteins on the surface may interfere with phagocytosis, but most gonococci end up in neutrophils. The exudates from infected individuals contain many neutrophils with ingested gonococci. Neutrophils release an oxidative burst of reactive oxygen species in their phagosomes to kill the gonococci. However, a significant fraction of the gonococci can resist killing through the action of their catalase, which breaks down reactive oxygen species, and are able to reproduce within the neutrophil phagosomes. The bacterial RecA protein, which mediates repair of DNA damage, plays an important role in gonococcal survival. N. gonorrhoeae may replace DNA damaged in neutrophil phagosomes with DNA from neighboring gonococci. The process in which recipient gonococci integrate DNA from neighboring gonococci into their genome is called transformation.

Genome
The genomes of several strains of N. gonorrhoeae have been sequenced.
Most of them are about 2.1 Mb in size and encode 2,100 to 2,600 proteins (although most seem to be in the lower range). For instance, strain NCCP11945 consists of one circular chromosome (2,232,025 bp) encoding 2,662 predicted open reading frames (ORFs) and one plasmid (4,153 bp) encoding 12 predicted ORFs. The estimated coding density over the entire genome is 87%, and the average G+C content is 52.4%, values that are similar to those of strain FA1090. The NCCP11945 genome encodes 54 tRNAs and four copies of 16S-23S-5S rRNA operons.

Horizontal gene transfer
Horizontal gene transfer, also termed lateral gene transfer, is the sharing of genetic information amongst living organisms. This transmission of information is a driving force of antibiotic resistance in N. gonorrhoeae. Studies have shown that N. gonorrhoeae has acquired antimicrobial resistance mechanisms by way of horizontal gene transfer from other Neisseria species, including N. lactamica, N. macacae, and N. mucosa. Transformation in N. gonorrhoeae is performed by the type IV pilus, where the DNA is bound and brought into the cell, followed by processing and homologous recombination. Found in some genomes of Neisseria gonorrhoeae, the gonococcal genetic island (GGI), a genomic island (GI) specific to gonococci, has been identified as a mobile genetic element that is horizontally acquired. The GGI is involved in antimicrobial resistance, transmission of genetic information, and iron acquisition. The genes within the gonococcal genetic island encode the type IV secretion system (T4SS), which is responsible for DNA secretion and is essential for biofilm formation. In 2011, researchers at Northwestern University found evidence of a human DNA fragment in a N. gonorrhoeae genome, the first example of horizontal gene transfer from humans to a bacterial pathogen.

Disease

Symptoms
Symptoms of infection with N. gonorrhoeae differ depending on the site of infection, and many infections are asymptomatic regardless of sex. Depending on the route of transmission, N. gonorrhoeae may cause infection of the throat (pharyngitis) or infection of the anus/rectum (proctitis). Disseminated gonococcal infections can occur when N. gonorrhoeae enters the bloodstream, often spreading to the joints and causing a rash (dermatitis-arthritis syndrome). Dermatitis-arthritis syndrome results in joint pain (arthritis), tendon inflammation (tenosynovitis), and painless non-pruritic (non-itchy) dermatitis. Disseminated infection and pelvic inflammatory disease in women tend to begin after menses due to reflux during menses, facilitating spread. In rare cases, disseminated infection may cause infection of the meninges of the brain and spinal cord (meningitis) or infection of the heart valves (endocarditis).

Male
In symptomatic men, the primary symptom of genitourinary infection is urethritis – burning with urination (dysuria), increased urge to urinate, and a pus-like (purulent) discharge from the penis. The discharge may be foul smelling. If untreated, scarring of the urethra may result in difficulty urinating. Infection may spread from the urethra in the penis to nearby structures, including the testicles (epididymitis/orchitis), or to the prostate (prostatitis).

Female
In symptomatic women, the primary symptoms of genitourinary infection are increased vaginal discharge, burning with urination (dysuria), increased urge to urinate, pain with intercourse, or menstrual abnormalities. Pelvic inflammatory disease results if N.
gonorrhoeae ascends into the pelvic peritoneum (via the cervix, endometrium, and fallopian tubes). The resulting inflammation and scarring of the fallopian tubes can lead to infertility and increased risk of ectopic pregnancy. Pelvic inflammatory disease develops in 10 to 20% of the females infected with N. gonorrhoeae.

Neonates (perinatal infection)
In perinatal infection, the primary manifestation is infection of the eye (neonatal conjunctivitis or ophthalmia neonatorum) when the newborn is exposed to N. gonorrhoeae in the birth canal. The eye infection can lead to corneal scarring or perforation, ultimately resulting in blindness. If the newborn is exposed during birth, conjunctivitis occurs within 2–5 days after birth and is severe. Gonococcal ophthalmia neonatorum, once common in newborns, is prevented by the application of erythromycin (antibiotic) gel to the eyes of babies at birth as a public health measure. Silver nitrate is no longer used in the United States.

Transmission
N. gonorrhoeae is most often transmitted through vaginal, oral, or anal sex; nonsexual transmission is unlikely in adult infection. It can also be transmitted to a newborn during passage through the birth canal if the mother has an untreated genitourinary infection. Given the high rate of asymptomatic infection, it is recommended that pregnant women be tested for gonococcal infection prior to birth. Communal baths, shared towels or fabrics, rectal thermometers, and improper hand hygiene by caregivers have been identified as potential means of transmission in pediatric settings. Traditionally, the bacterium was thought to move attached to spermatozoa, but this hypothesis did not explain female-to-male transmission of the disease. A recent study suggests that rather than "surf" on wiggling sperm, N. gonorrhoeae bacteria use pili to anchor onto proteins in the sperm and move through coital liquid.

Infection
Successful transmission is followed by adherence to the epithelial cells found at the infected mucosal site by the bacterium's type IV pili. The pili's ability to attach and subsequently retract pulls N. gonorrhoeae towards the epithelial membrane at the surface of the mucosal cell. After attachment, N. gonorrhoeae replicates its genome and divides to form microcolonies. Gonococcal infection is sometimes aided by the membrane cofactor protein CD46, as it has been known to act as a receptor for the gonococcal pilus. Additionally, interaction with pili has been shown to cause cytoskeletal rearrangement of the host cell, further demonstrating that gonococcal pilus engagement disrupts the response of the host cell and increases the likelihood of successful infection. During growth and colonization, N. gonorrhoeae stimulates the release of pro-inflammatory cytokines and chemokines from host immune cells that result in the recruitment of neutrophils to the area. These phagocytic cells typically take in foreign pathogens and destroy them; however, N. gonorrhoeae's ability to manipulate the host cell response allows the pathogen to survive within these immune cells and evade elimination.

Laboratory diagnosis
The primary detection methods for Neisseria gonorrhoeae are nucleic acid amplification tests, which are the most sensitive techniques available. Other methods of detection include microscopy and culture.

Prevention
Transmission is reduced by using latex barriers (e.g. condoms or dental dams) during sex and by limiting sexual partners. Condoms and dental dams should be used during oral and anal sex as well.
Spermicides, vaginal foams, and douches are not effective methods for transmission prevention.

Vaccine
A vaccine against N. gonorrhoeae is becoming more necessary due to the growing incidence of cases, increasing antimicrobial resistance, and its impact on reproductive health. Several problems have hampered vaccine development, including the absence of immunity post-infection, the exclusively human host, and antigenic and phase variation of potential vaccine targets. Currently, several N. gonorrhoeae vaccines are in development, including outer membrane vesicle vaccines; the NGoXIM, native OMV, and Bexsero/4CMenB candidates are all in the late clinical stages of development. The creation of a vaccine for N. gonorrhoeae has several potential public health impacts. In one estimate, a vaccine given to the heterosexual population before the onset of sexual activity could reduce the prevalence of N. gonorrhoeae by up to 90% after 20 years.

Treatment
Currently, the CDC recommends a single dose of the injectable cephalosporin ceftriaxone as the first line of defense against gonococcal infections. Individuals weighing less than 150 kg are typically prescribed a 500 mg dose of ceftriaxone, while individuals who weigh over 150 kg are typically prescribed a dose of 1 g. Although ceftriaxone is not the only cephalosporin that has been effective at treating gonorrhea, it is the most advantageous. In the event of a cephalosporin allergy, the CDC recommends a dual treatment of gentamicin and azithromycin. Each drug should be administered as a single dose, with the gentamicin given intramuscularly at a dose of 240 mg, along with 2 g of azithromycin taken orally. If an individual is not allergic to cephalosporins but ceftriaxone is unavailable, an alternative treatment is a single dose of 800 mg cefixime taken orally. In all of these cases, combination therapy and co-treatment for chlamydia is recommended, as simultaneous infections are common.

Antibiotic resistance
Antibiotic resistance in gonorrhea was first identified in the 1940s. Gonorrhea was treated with penicillin, but doses had to be progressively increased to remain effective. By the 1970s, penicillin- and tetracycline-resistant gonorrhea emerged in the Pacific Basin. These resistant strains then spread to Hawaii, California, the rest of the United States, Australia and Europe. Fluoroquinolones were the next line of defense, but soon resistance to this class of antibiotics emerged as well. Since 2007, standard treatment has been third-generation cephalosporins, such as ceftriaxone, which are considered the "last line of defense". Recently, a high-level ceftriaxone-resistant strain of gonorrhea called H041 was discovered in Japan. Lab tests found it to be resistant to high concentrations of ceftriaxone, as well as most of the other antibiotics tested. Within N. gonorrhoeae, genes exist that confer resistance to every single antibiotic used to cure gonorrhea, but thus far they do not coexist within a single gonococcus. However, because of N. gonorrhoeae's high affinity for horizontal gene transfer, antibiotic-resistant gonorrhea is seen as an emerging public health threat. Prior to 2007, fluoroquinolones were a common treatment recommendation for gonorrhea. The CDC stopped recommending these antibacterial agents once a resistant strain of N. gonorrhoeae emerged in the United States.
The removal of fluoroquinolones as a potential treatment left cephalosporins as the only viable antimicrobial option for gonorrhea treatment. Wary of further gonococcal resistance, the CDC's recommendations shifted in 2010 to a dual therapy strategy—cephalosporin with either azithromycin or doxycycline. Despite these efforts, resistant N. gonorrhoeae had been reported in five continents by 2011, further limiting treatment options and recommendations. Antimicrobial resistance is not universal and N. gonorrhoeae strains in the United States continue to respond to a combination regimen of ceftriaxone and azithromycin. Serum resistance As a Gram negative bacterium, N. gonorrhoeae requires defense mechanisms to protect itself against the complement system (or complement cascade), whose components are found with human serum. There are three different pathways that activate this system however, they all result in the activation of complement protein 3 (C3). A cleaved portion of this protein, C3b, is deposited on pathogenic surfaces and results in opsonization as well as the downstream activation of the membrane attack complex. N. gonorrhoeae has several mechanisms to avoid this action. As a whole, these mechanisms are referred to as serum resistance. History Name origin Neisseria gonorrhoeae is named for Albert Neisser, who isolated it as the causative agent of the disease gonorrhea in 1878. Galen (130 AD) coined the term "gonorrhea" from the Greek gonos which means "seed" and rhoe which means "flow". Thus, gonorrhea means "flow of seed", a description referring to the white penile discharge, assumed to be semen, seen in male infection. Discovery In 1878, Albert Neisser isolated and visualized N. gonorrhoeae diplococci in samples of pus from 35 men and women with the classic symptoms of genitourinary infection with gonorrhea – two of whom also had infections of the eyes. In 1882, Leistikow and Loeffler were able to grow the organism in culture. Then in 1883, Max Bockhart proved conclusively that the bacterium isolated by Albert Neisser was the causative agent of the disease known as gonorrhea by inoculating the penis of a healthy man with the bacteria. The man developed the classic symptoms of gonorrhea days after, satisfying the last of Koch's postulates. Until this point, researchers debated whether syphilis and gonorrhea were manifestations of the same disease or two distinct entities. One such 18th-century researcher, John Hunter, tried to settle the debate in 1767 by inoculating a man with pus taken from a patient with gonorrhea. He erroneously concluded that syphilis and gonorrhea were indeed the same disease when the man developed the copper-colored rash that is classic for syphilis. Although many sources repeat that Hunter inoculated himself, others have argued that it was in fact another man. After Hunter's experiment other scientists sought to disprove his conclusions by inoculating other male physicians, medical students, and incarcerated men with gonorrheal pus, who all developed the burning and discharge of gonorrhea. One researcher, Ricord, took the initiative to perform 667 inoculations of gonorrheal pus on patients of a mental hospital, with zero cases of syphilis. Notably, the advent of penicillin in the 1940s made effective treatments for gonorrhea available.
Biology and health sciences
Gram-negative bacteria
Plants
61899
https://en.wikipedia.org/wiki/Phloem
Phloem
Phloem is the living tissue in vascular plants that transports the soluble organic compounds made during photosynthesis and known as photosynthates, in particular the sugar sucrose, to the rest of the plant. This transport process is called translocation. In trees, the phloem is the innermost layer of the bark, hence the name, derived from the Ancient Greek word phloiós, meaning "bark". The term was introduced by Carl Nägeli in 1858. Different types of phloem can be distinguished. The early phloem formed in the growth apices is called protophloem. Protophloem eventually becomes obliterated once it connects to the durable phloem of mature organs, the metaphloem. Further, secondary phloem is formed during the thickening of stem structures. Structure Phloem tissue consists of conducting cells, generally called sieve elements; parenchyma cells, including both specialized companion cells or albuminous cells and unspecialized cells; and supportive cells, such as fibres and sclereids. Conducting cells (sieve elements) Sieve tube elements are the cells responsible for transporting sugars throughout the plant. At maturity they lack a nucleus and have very few organelles, so they rely on companion cells or albuminous cells for most of their metabolic needs. Sieve tube cells do contain vacuoles and other organelles, such as ribosomes, before they mature, but these generally migrate to the cell wall and dissolve at maturity; this ensures there is little to impede the movement of fluids. One of the few organelles they do contain at maturity is the rough endoplasmic reticulum, which can be found at the plasma membrane, often near the plasmodesmata that connect them to their companion or albuminous cells. All sieve cells have groups of pores at their ends that grow from modified and enlarged plasmodesmata, called sieve areas. The pores are reinforced by platelets of a polysaccharide called callose. Parenchyma cells Other parenchyma cells within the phloem are generally undifferentiated and used for food storage. Companion cells The metabolic functioning of sieve-tube members depends on a close association with the companion cells, a specialized form of parenchyma cell. All of the cellular functions of a sieve-tube element are carried out by the (much smaller) companion cell, a typical nucleate plant cell, except that the companion cell usually has a larger number of ribosomes and mitochondria. The dense cytoplasm of a companion cell is connected to the sieve-tube element by plasmodesmata. The common sidewall shared by a sieve tube element and a companion cell has large numbers of plasmodesmata. There are three types of companion cells. Ordinary companion cells, which have smooth walls and few or no plasmodesmatal connections to cells other than the sieve tube. Transfer cells, which have much-folded walls that are adjacent to non-sieve cells, allowing for larger areas of transfer. They are specialized in scavenging solutes from the cell walls; this uptake is active and requires energy. Intermediary cells, which possess many vacuoles and plasmodesmata and synthesize raffinose family oligosaccharides. Albuminous cells Albuminous cells have a similar role to companion cells, but are associated with sieve cells only and are hence found only in seedless vascular plants and gymnosperms. Supportive cells Although its primary function is the transport of sugars, phloem may also contain cells that have a mechanical support function.
These are sclerenchyma cells which generally fall into two categories: fibres and sclereids. Both cell types have a secondary cell wall and are dead at maturity. The secondary cell wall increases their rigidity and tensile strength, especially because they contain lignin. Fibres Bast fibres are the long, narrow supportive cells that provide tension strength without limiting flexibility. They are also found in xylem, and are the main component of many textiles such as paper, linen, and cotton. Sclereids Sclereids are irregularly shaped cells that add compression strength but may reduce flexibility to some extent. They also serve as anti-herbivory structures, as their irregular shape and hardness will increase wear on teeth as the herbivores chew. For example, they are responsible for the gritty texture in pears, and in winter pears. Function Unlike xylem (which is composed primarily of dead cells), the phloem is composed of still-living cells that transport sap. The sap is a water-based solution, but rich in sugars made by photosynthesis. These sugars are transported to non-photosynthetic parts of the plant, such as the roots, or into storage structures, such as tubers or bulbs. During the plant's growth period, usually during the spring, storage organs such as the roots are sugar sources, and the plant's many growing areas are sugar sinks. The movement in phloem is multidirectional, whereas, in xylem cells, it is unidirectional (upward). After the growth period, when the meristems are dormant, the leaves are sources, and storage organs are sinks. Developing seed-bearing organs (such as fruit) are always sinks. Because of this multi-directional flow, coupled with the fact that sap cannot move with ease between adjacent sieve-tubes, it is not unusual for sap in adjacent sieve-tubes to be flowing in opposite directions. While movement of water and minerals through the xylem is driven by negative pressures (tension) most of the time, movement through the phloem is driven by positive hydrostatic pressures. This process is termed translocation, and is accomplished by a process called phloem loading and unloading. Phloem sap is also thought to play a role in sending informational signals throughout vascular plants. "Loading and unloading patterns are largely determined by the conductivity and number of plasmodesmata and the position-dependent function of solute-specific, plasma membrane transport proteins. Recent evidence indicates that mobile proteins and RNA are part of the plant's long-distance communication signaling system. Evidence also exists for the directed transport and sorting of macromolecules as they pass through plasmodesmata." Organic molecules such as sugars, amino acids, certain phytohormones, and even messenger RNAs are transported in the phloem through sieve tube elements. Phloem is also used as a popular site for oviposition and breeding of insects belonging to the order Diptera, including the fruit fly Drosophila montana. Girdling Because phloem tubes are located outside the xylem in most plants, a tree or other plant can be killed by stripping away the bark in a ring on the trunk or stem. With the phloem destroyed, nutrients cannot reach the roots, and the tree/plant will die. Trees located in areas with animals such as beavers are vulnerable since beavers chew off the bark at a fairly precise height. This process is known as girdling, or ring-barking, and can be used for agricultural purposes. 
For example, enormous fruits and vegetables seen at fairs and carnivals are produced via girdling. A farmer would place a girdle at the base of a large branch, and remove all but one fruit/vegetable from that branch. Thus, all the sugars manufactured by leaves on that branch have no sinks to go to but the one fruit/vegetable, which thus expands to many times its normal size. Origin When the plant is an embryo, vascular tissue emerges from procambium tissue, which is at the center of the embryo. Protophloem itself appears in the mid-vein extending into the cotyledonary node, which constitutes the first appearance of a leaf in angiosperms, where it forms continuous strands. The hormone auxin, transported by the protein PIN1 is responsible for the growth of those protophloem strands, signaling the final identity of those tissues. SHORTROOT (SHR), and microRNA165/166 also participate in that process, while Callose Synthase 3 inhibits the locations where SHR, and microRNA165 can go. Additionally, the expression of NAC45/86 genes during phloem differentiation functions to enucleate specific cells in the plants to produce the sieve elements. In the embryo, root phloem develops independently in the upper hypocotyl, which lies between the embryonic root, and the cotyledon. In an adult, the phloem originates, and grows outwards from, meristematic cells in the vascular cambium. Phloem is produced in phases. Primary phloem is laid down by the apical meristem and develops from the procambium. Secondary phloem is laid down by the vascular cambium to the inside of the established layer(s) of phloem. The molecular control of phloem development from stem cell to mature sieve element is best understood for the primary root of the model plant Arabidopsis thaliana. In some eudicot families (Apocynaceae, Convolvulaceae, Cucurbitaceae, Solanaceae, Myrtaceae, Asteraceae, Thymelaeaceae), phloem also develops on the inner side of the vascular cambium; in this case, a distinction between external and internal or intraxylary phloem is made. Internal phloem is mostly primary, and begins differentiation later than the external phloem and protoxylem, though it is not without exceptions. In some other families (Amaranthaceae, Nyctaginaceae, Salvadoraceae), the cambium also periodically forms inward strands or layers of phloem, embedded in the xylem: Such phloem strands are called included or interxylary phloem. Nutritional use Phloem of pine trees has been used in Finland and Scandinavia as a substitute food in times of famine and even in good years in the northeast. Supplies of phloem from previous years helped stave off starvation in the great famine of the 1860s which hit both Finland and Sweden. Phloem is dried and milled to flour (pettu in Finnish) and mixed with rye to form a hard dark bread, bark bread. The least appreciated was silkko, a bread made only from buttermilk and pettu without any real rye or cereal flour. Recently, pettu has again become available as a curiosity, and some have made claims of health benefits. Phloem from silver birch has been also used to make flour in the past.
Biology and health sciences
Plant tissues
Biology
61962
https://en.wikipedia.org/wiki/Mikoyan-Gurevich%20MiG-21
Mikoyan-Gurevich MiG-21
The Mikoyan-Gurevich MiG-21 (; NATO reporting name: Fishbed) is a supersonic jet fighter and interceptor aircraft, designed by the Mikoyan-Gurevich Design Bureau in the Soviet Union. Its nicknames include: "Balalaika", because its planform resembles the stringed musical instrument of the same name; "Ołówek", Polish for "pencil", due to the shape of its fuselage, and "Én Bạc", meaning "silver swallow", in Vietnamese. Approximately 60 countries across four continents have flown the MiG-21, and it still serves many nations seven decades after its maiden flight. It set aviation records, becoming the most-produced supersonic jet aircraft in aviation history, the most-produced combat aircraft since the Korean War and, previously, the longest production run of any combat aircraft. Development Origins The MiG-21 jet fighter was a continuation of Soviet jet fighters, starting with the subsonic MiG-15 and MiG-17, and the supersonic MiG-19. A number of experimental Mach 2 Soviet designs were based on nose intakes with either swept-back wings, such as the Sukhoi Su-7, or tailed deltas, of which the MiG-21 would be the most successful. Development of what would become the MiG-21 began in the early 1950s when Mikoyan OKB finished a preliminary design study for a prototype designated Ye-1 in 1954. This project was very quickly reworked when it was determined that the planned engine was underpowered; the redesign led to the second prototype, the Ye-2. Both these and other early prototypes featured swept wings. The first prototype with the delta wings found on production variants was the Ye-4. It made its maiden flight on 16 June 1955 and its first public appearance during the Soviet Aviation Day display at Moscow's Tushino airfield in July 1956. In the West, due to the lack of available information, early details of the MiG-21 often were confused with those of similar Soviet fighters of the era. In one instance, Jane's All the World's Aircraft 1960–1961 listed the "Fishbed" as a Sukhoi design and used an illustration of the Su-9 'Fishpot'. Design The MiG-21 was the first successful Soviet aircraft combining fighter and interceptor characteristics in a single aircraft. It was a lightweight fighter, achieving Mach 2 with a relatively low-powered afterburning turbojet, and is thus comparable to the American Lockheed F-104 Starfighter and Northrop F-5 Freedom Fighter and the French Dassault Mirage III. Its basic layout was used for numerous other Soviet designs; delta-winged aircraft included the Su-9 interceptor and fast E-150 prototype from the MiG bureau, while the successful mass-produced frontline fighter Su-7 and Mikoyan's I-75 experimental interceptor combined a similar fuselage shape with swept-back wings. However, the characteristic layout with the shock cone and front air intake did not see widespread use outside the USSR and ultimately proved to have limited development potential, mainly due to the small available space for the radar. Like many aircraft designed as interceptors, the MiG-21 had a short range. This was exacerbated by the poor placement of the internal fuel tanks ahead of the centre of gravity. As the internal fuel was consumed, the center of gravity would shift rearward beyond acceptable parameters. This had the effect of making the plane statically unstable to the point of being difficult to control, resulting in an endurance of only 45 minutes in clean condition. This can be somewhat countered by carrying fuel in external tanks closer to the center of gravity. 
The Chinese variants somewhat improved the internal fuel tank layout (as did the second generation of Soviet variants), and also carried significantly larger external fuel tanks to counter this issue. Additionally, when more than half the fuel was used up, violent maneuvers prevented fuel from flowing into the engine, thereby causing it to shut down in flight. This increased the risk of tank implosions (the MiG-21's tanks were pressurized with air from the engine's compressor), a problem inherited from the MiG-15, MiG-17 and MiG-19. The short endurance and low fuel capacity of the MiG-21F, PF, PFM, S/SM and M/MF variants—though each had a somewhat greater fuel capacity than its predecessor—led to the development of the MT and SMT variants. These had an increased range compared to the MiG-21SM, but at the cost of worsening all other performance figures, such as a lower service ceiling and slower time to altitude. The delta wing, while excellent for a fast-climbing interceptor, meant any form of turning combat led to a rapid loss of speed. However, the light loading of the aircraft could mean that a climb rate of 235 m/s (46,250 ft/min) was possible with a combat-loaded MiG-21bis, not far short of the performance of the later F-16A. A special feature of the MiG-21's Tumansky R-25 jet engine was the addition of a second fuel pump in the afterburning stage. Activating the ЧР booster feature (Russian: чрезвычайный режим, "emergency mode"; known in India as the Emergency Power Rating, EPR) allows the engine to develop 97.4 kilonewtons (21,896 lbf) of thrust below 2,000 meters (6,600 ft) of altitude. The rpm of the engine would increase by 2.5% and the compression ratio would thus increase, with a rise in exhaust temperature. The limit of operation is 2 minutes for both practice and actual wartime use, as further use causes the engine to overheat. Fuel consumption increased by 50% over the rate in full afterburner. Use of this temporary power gave the MiG-21bis a slightly better than 1:1 thrust-to-weight ratio and a climb rate of 254 meters per second, equalling the F-16's nominal capabilities in a close-quarters dogfight. The use of WEP thrust was limited to 2 minutes to reduce stress on the engine's 750-flight-hour (250+250+250) service life, since every second of super-afterburner counted as several minutes of regular power run due to extreme thermal stress. With WEP on, the MiG-21bis's R-25 engine produced a huge 10–12 meter long blowtorch exhaust, with six or seven brightly glowing rhomboid "shock diamonds" visible inside the exhaust. The Russians nicknamed the emergency power setting the "diamond regime", a name never used in India. Given a skilled pilot and capable missiles, the MiG-21 could give a good account of itself against contemporary fighters. Its G-limits were increased from +7 G in initial variants to +8.5 G in the latest variants. It was replaced by the newer variable-geometry MiG-23 and MiG-27 for ground support duties. However, not until the MiG-29 would the Soviet Union ultimately replace the MiG-21 as a maneuvering dogfighter to counter new American air superiority types. The MiG-21 was exported widely and remains in use. The aircraft's simple controls, engine, weapons, and avionics were typical of Soviet-era military designs. The use of a tail with the delta wing aids stability and control at the extremes of the flight envelope, enhancing safety for lower-skilled pilots; this, in turn, enhanced its marketability in exports to developing countries with limited training programs and restricted pilot pools.
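As a rough illustration of the better-than-1:1 thrust-to-weight figure quoted above (and assuming, purely for the sake of the arithmetic, a loaded weight of about 9,500 kg, a figure not given in this article), the ratio works out to T/W = 97.4 kN / (9,500 kg × 9.81 m/s²) ≈ 97.4 kN / 93.2 kN ≈ 1.05. Under those assumptions the emergency thrust exceeds the aircraft's weight by roughly five percent; at the lighter weights of a partially fuelled aircraft the margin would be correspondingly larger.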
While technologically inferior to the more advanced fighters it often faced, low production and maintenance costs made it a favorite of nations buying Eastern Bloc military hardware. Several Russian, Israeli and Romanian firms have begun to offer upgrade packages to MiG-21 operators, designed to bring the aircraft up to a modern standard, with greatly upgraded avionics and armaments. Production A total of 10,645 aircraft were built in the USSR. They were produced in three factories: AZ 30 (3,203 aircraft) in Moscow (also known as MMZ Znamya Truda), GAZ 21 (5,765 aircraft) in Gorky, and TAZ 31 (1,678 aircraft) in Tbilisi. Generally, Gorky built single-seaters for the Soviet forces. Moscow constructed single-seaters for export, and Tbilisi manufactured two-seaters both for export and the USSR, though there were exceptions. The MiG-21R and MiG-21bis for export and for the USSR were built in Gorky, 17 single-seaters were built in Tbilisi (MiG-21 and MiG-21F), the MiG-21MF was first constructed in Moscow and then Gorky, and the MiG-21U was built in Moscow as well as in Tbilisi. A total of 194 MiG-21F-13s were built under licence in Czechoslovakia, and Hindustan Aeronautics Ltd. of India built 657 MiG-21FL, MiG-21M and MiG-21bis (of which 225 were bis) Cost Due to the mass production, the aircraft was very cheap: the MiG-21MF, for example, was cheaper than the BMP-1. The F-4 Phantom's cost was several times higher than MiG-21. Design The MiG-21 has a delta wing. The sweep angle on the leading edge is 57° with a TsAGI S-12 airfoil. The angle of incidence is 0° while the dihedral angle is −2°. On the trailing edge there are ailerons with an area of 1.18 m2, and flaps with an area of 1.87 m2. In front of the ailerons there are small wing fences. The fuselage is semi-monocoque with an elliptical profile and a maximum width of . The air flow to the engine is regulated by an inlet cone in the air intake. On early model MiG-21s, the cone has three positions. For speeds up to Mach 1.5, the cone is fully retracted to the maximum aft position. For speeds between Mach 1.5 and Mach 1.9 the cone moves to the middle position. For speeds higher than Mach 1.9 the cone moves to the maximum forward position. On the later model MiG-21PF, the intake cone moves to a position based on the actual speed. The cone position for a given speed is calculated by the UVD-2M system using air pressures from in front and behind the compressor of the engine. On both sides of the nose, there are gills to supply the engine with more air while on the ground and during takeoff. In the first variant of the MiG-21, the pitot tube is attached to the bottom of the nose. After the MiG-21P variant, this tube is attached to the top of the air intake. Later versions shifted the pitot tube attachment point 15 degrees to the right, as seen from the cockpit, and had an emergency pitot head on the right side, just ahead of the canopy and below the pilot's eyeline. The cabin is pressurized and air-conditioned. On variants prior to the MiG-21PFM, the cabin canopy is hinged at the front. When ejecting, the SK-1 ejection seat connects with the canopy to provide a windbreak from the high-speed airflow encountered during high-speed ejections. After ejection, the canopy opens to allow the pilot to parachute to the ground. However, ejecting at low altitudes can cause the canopy to take too long to separate, sometimes resulting in pilot death. The minimum height for ejection in level flight was 110 m. 
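The three-position inlet-cone scheduling described earlier in this Design section amounts to a simple threshold rule on Mach number. The following minimal Python sketch illustrates that rule only; it is not the actual control law, and the later UVD-2M system instead computed a continuously variable cone position from air pressures measured in front of and behind the engine compressor.
def early_mig21_inlet_cone_position(mach):
    """Return the inlet-cone position of an early-model MiG-21 for a given
    Mach number, following the three fixed positions described above."""
    if mach <= 1.5:
        # Up to Mach 1.5 the cone stays fully retracted (maximum aft position).
        return "fully retracted (maximum aft)"
    elif mach <= 1.9:
        # Between Mach 1.5 and 1.9 the cone moves to the middle position.
        return "middle"
    else:
        # Above Mach 1.9 the cone moves to the maximum forward position.
        return "maximum forward"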
Starting with the MiG-21PFM, a new ejection seat proved to be very reliable and did not need the canopy to protect the pilot which had never been fully satisfactory. The canopy is hinged on the right side of the cockpit. On the underside of the aircraft, there are three air brakes, two at the front and one at the rear. The front air brakes have an area of 0.76 m2, and a deflection angle of 35°. The rear air brake has an area of 0.47 m2 and a deflection angle of 40°. The rear air brake is blocked if the airplane carries an external fuel tank. Behind the air brakes are the bays for the main landing gear. On the underside of the airplane, just behind the trailing edge of the wing are attachment points for two JATO rockets. The front section of the fuselage ends at former #28. The rear section of the fuselage starts at former #28a and is removable for engine maintenance. The empennage of the MiG-21 consists of a vertical stabilizer, a stabilator and a small fin on the bottom of the tail to improve yaw control. The vertical stabilizer has a sweep angle of 60° and an area of 5.32 m2 (on earlier version 3.8 m2) and a rudder. The stabilator has a sweep angle of 57°, an area of 3.94 m2 and a span of 2.6 m. The MiG-21 uses a tricycle type undercarriage. On most variants, the main landing gear uses tires that are 800 mm in diameter and 200 mm in width. Only the MiG-21F variants use tires with the size 660×200 mm. The wheels of the main landing gear retract into the fuselage after rotating 87° and the shock absorbers retract into the wing. The nose gear retracts forward into the fuselage under the radar. The nose wheel can be lowered manually by simply unlocking its hatch from inside the cockpit. Thus, landing with undercarriage locked in the up position due to an internal failure was not a major issue, with a number of such successful landings on the nosewheel and ventral fuel tank or the airbrake. Operational history India Overview India is the largest operator of MiG-21s. In 1961, the Indian Air Force (IAF) opted to purchase the MiG-21 over several other Western competitors. As part of the deal, the Soviet Union offered India full transfer of technology and rights for local assembly. In 1964, the MiG-21 became the first supersonic fighter jet to enter service with the IAF. Due to limited induction numbers and lack of pilot training, the IAF MiG-21 played a limited role in the Indo-Pakistani War of 1965. However, the IAF gained valuable experience while operating the MiG-21 for defensive sorties during the war. The positive feedback from IAF pilots during the 1965 war prompted India to place more orders for the fighter jet and also invest heavily in building the MiG-21's maintenance infrastructure and pilot training programs. Since 1963, India inducted more than 1,200 MiG-21s into its air force. As of 2024, around 40 MiG-21s are known to be in operation with the IAF. At its peak, IAF operated 400 MiG-21s in 19 squadrons. In 2023, the IAF announced that it would replace its MiG-21 Bisons with indigenously built Tejas fighter jet. Safety record The plane has been plagued by safety problems. Since 1970 more than 170 Indian pilots and 40 civilians have been killed in MiG-21 accidents, thus the unofficial nickname "flying coffin". Over half of the 840 aircraft built between 1966 and 1984 were lost to crashes. At least 14 MiG-21s crashed between 2010 and 2013. Poor maintenance and quality of replacement parts has been considered to be a factor in this phenomenon. 
When in afterburner, the engine operates very close to its surge line and the ingestion of even a small bird can lead to an engine surge/seizure and flame out. Future In view of the several incidents that have occurred after the 1999 Kargil War, the modernized MiG-21 Bison seems to have at present the role of an interceptor and possibly a limited role of a fighter aircraft. On 11 December 2013, India's second-generation supersonic jet fighter, MiG-21FL was decommissioned after being in service for 50 years. The Indian Air Force plans to decommission all MiG-21s by 2025. 1971 Indo-Pakistan War The expansion of the IAF MiG-21 fleet marked a developing India-Soviet Union military partnership, which enabled India to field a formidable air force to counter Chinese and Pakistani threats. The capabilities of the MiG-21 were put to the test during the Bangladesh Liberation War. During the war, the MiG-21s played a crucial role in giving the IAF air superiority over vital points and areas in the western theater of the conflict. The 1971 war witnessed the first supersonic air combat in the subcontinent when an Indian MiG-21FL claimed a PAF F-104A Starfighter with its GSh-23 twin-barrelled 23 mm cannon. By the time the hostilities came to an end, the IAF MiG-21FLs had claimed four PAF F-104As, two PAF Shenyang F-6s, one PAF North American F-86 Sabre and one PAF Lockheed C-130 Hercules. Only two kills were confirmed (both F-104As). Two more F-104s were critically damaged by MiG-21 fighters. Pakistan decommissioned all F-104s shortly after the end of the war. According to one Western military analyst, the MiG-21FLs had clearly "won" the much anticipated air combat between the MiG-21FL and the F-104A Starfighter. Because of the performance of India's MiG-21s, several nations, including Iraq, approached India for MiG-21 pilot training. By the early 1970s, more than 120 Iraqi pilots were being trained by the Indian Air Force. Kargil War One MiG-21 was shot down by a Pakistani soldier using a shoulder-fired MANPADS missile during the Kargil war. Other clashes On 10 August 1999, two MiG-21FLs of the Indian Air Force intercepted and shot down a Pakistani Bréguet 1150 Atlantic maritime patrol aircraft with an R-60 missile after it allegedly entered Indian airspace for surveillance, killing all on board. During the 2019 Jammu and Kashmir airstrikes, the Pakistan Air Force shot down an Indian MiG-21 and captured its pilot. The MiG-21's debris had fallen in Pakistani-administered Kashmir. The pilot was later returned to India. Indonesia The Indonesian Air Force purchased 22 MiG-21s. In 1962, 20 MiG-21F-13s and MiG-21Us were received during Operation Trikora in the Western New Guinea conflict. Indonesian MiG-21s never fought in any dogfights. Right after the U.S.-backed anti-communist forces took over the government, 13 Indonesian MiG-21s were delivered to the U.S. in exchange for T-33, UH-34D, and later, F-5 and OV-10 aircraft. All remaining MiG-21s were grounded and retired due to a lack of spare parts and the withdrawal of Soviet maintenance support. The MiGs were added to the 4477th Test and Evaluation Squadron ("Red Eagles"), a USAF aggressor squadron at Tonopah Test Range. Vietnam The MiG-21 was designed for very short ground-controlled interception (GCI) missions. It became renowned for this type of mission in the skies over North Vietnam. The first MiG-21s arrived directly from the Soviet Union by ship in April 1966. 
After being unloaded and assembled they were given to the Vietnam People's Air Force's (VPAF) oldest fighter unit, the 921st Fighter Regiment (921st FR), which had been created on 3 February 1964 as a MiG-17 unit. Because the VPAF's 923rd FR was newer and less experienced, they continued to operate MiG-17s, while the arrival of the MiG-19s (J-6 versions) from China in 1969 led to North Vietnam's only MiG-19 unit, the 925th FR. On 3 February 1972, North Vietnam commissioned its fourth and last fighter regiment created during the war with South Vietnam, the MiG-21PFM (Type 94)-equipped 927th FR. Former MiG-17 pilot Nguyen Nhat Chieu and his wingman Tran Ngoc Siu intercepted USAF F-105Ds while on CAP duty over Phuc Yen Airbase (a.k.a. Noi Bai Airbase) on 7 July 1966, shooting down one piloted by Capt. Tomes with a salvo from Tran's UB-16-57/S-5M unguided rocket-equipped MiG-21, while flight leader Nguyen was unable to establish a lock on another, wildly-evading F-105 with his R-3S AAM; this was the first instance of a VPAF MiG-21 shooting down a piloted enemy aircraft in the Vietnam War. Although 13 of North Vietnam's flying aces attained their status while flying the MiG-21 (cf. three in the MiG-17), many VPAF pilots preferred the MiG-17 because the high wing loading of the MiG-21 made it relatively less maneuverable and the lighter framed canopy of the MiG-17 gave better visibility. However, this is not the impression British author Roger Boniface got when he interviewed Pham Ngoc Lan and ace Nguyễn Nhật Chiêu (who scored victories flying both the MiG-17 and MiG-21). Pham Ngoc Lan told Boniface that "The MiG-21 was much faster, and it had two Atoll missiles which were very accurate and reliable when fired between 1,000 and 1,200 yards." And Chiêu asserted that "... for me personally, I preferred the MiG-21 because it was superior in all specifications in climb, speed and armament. The Atoll missile was very accurate and I scored four kills with the Atoll. ... In general combat conditions, I was always confident of a kill over an F-4 Phantom when flying a MiG-21." Although the MiG-21 lacked the long-range radar, missiles, and heavy bomb load of its contemporary multi-mission U.S. fighters, its RP-21 Sapfir radar helped make it a challenging adversary in the hands of experienced pilots, especially when used in high-speed hit-and-run attacks under GCI control. MiG-21 intercepts of Republic F-105 Thunderchief strike groups were effective in downing US aircraft or forcing them to jettison their bomb loads. Aerial combat victories 1966–1972 The VPAF flew their interceptors with guidance from ground controllers, who positioned the MiGs in ambush battle stations to make "one pass, then haul ass" attacks. The MiGs made fast and often accurate attacks against US formations from several directions (usually the MiG-17s performed head-on attacks and the MiG-21s attacked from the rear). After shooting down a few American planes and forcing some of the F-105s to drop their bombs prematurely, the MiGs did not wait for retaliation but disengaged rapidly. These "guerrilla warfare in the air" tactics generally proved successful during the war. In December 1966, the MiG-21 pilots of the 921st FR downed 14 F-105 Thunderchiefs without any losses. The USAF and the US Navy had high expectations of the F-4 Phantom, assuming that their massive firepower, best available on-board radar, highest speed and acceleration properties, coupled with new tactics, would provide an advantage over the MiGs. 
But in confrontations with the lighter MiG-21, F-4s began to suffer losses. From May to December 1966, the USAF lost 47 aircraft, destroying only 12 VPAF fighters in return. From April 1965 to November 1968, over 268 air battles occurred over the skies of North Vietnam. North Vietnam claimed 244 downed U.S. aircraft while admitting to the loss of 85 MiGs. Of 46 air battles between F-4s and MiG-21s, losses amounted to 27 F-4 Phantoms and 20 MiG-21s. After a million sorties and nearly 1,000 US aircraft losses, Operation Rolling Thunder came to an end on 1 November 1968. A poor air-to-air combat loss-exchange ratio against the smaller, more agile enemy MiGs during the early part of the war eventually led the US Navy to create their Navy Fighter Weapons School, also known as "TOPGUN", at Naval Air Station Miramar, California, on 3 March 1969. The USAF quickly followed with its own version, called the Dissimilar Air Combat Training (sometimes referred to as Red Flag) program at Nellis Air Force Base, Nevada. These two programs employed the subsonic Douglas A-4 Skyhawk and supersonic F-5 Tiger II, as well as the Mach 2.4-capable USAF Convair F-106 Delta Dart, to mimick the MiG-21. The culmination of the air struggle over Vietnam in early 1972 was 10 May, when VPAF aircraft completed 64 sorties, resulting in 15 air battles. The VPAF claimed 7 F-4s were shot down (the U.S. confirmed five F-4s were lost.) The F-4s, in turn, managed to destroy two MiG-21s, three MiG-17s and one MiG-19. On 11 May, two MiG-21s, playing the "bait", brought four F-4s to 2 MiG-21s circling at low altitude. The MiGs quickly stormed the Phantoms and 3 missiles shot down two F-4s. On 13 May, a MiG-21 unit intercepted a group of F-4s and a second pair of MiGs made a missile attack before being hit by two F-4s. On 18 May, VPAF aircraft made 26 sorties, eight of which resulted in combat, downing four F-4s without any VPAF losses. Over the course of the air war, between 3 April 1965 and 8 January 1973, each side would ultimately claim favorable kill ratios. In 1972, the number of air battles between American and Vietnamese planes stood at 201. The VPAF lost 54 MiGs (including 36 MiG-21s and one MiG-21US) and claimed 90 U.S. aircraft shot down, including 74 F-4 fighters and two RF-4C reconnaissance jets (MiG-21s shot down 67 enemy aircraft while MiG-17s shot down 11 and MiG-19s downed another 12). One MiG-21 was shot down on 21 February 1972 by a USAF F-4 Phantom based at Udorn RTAFB, Thailand and piloted by Major Lodge with 1st Lt Roger Locher as his weapon systems officer (WSO). This was claimed as the first-ever USAF MiG kill at night, and the first in four years at that time. Two MiG-21s were claimed shot down by USAF Boeing B-52 Stratofortress tail gunners; the only confirmed air-to-air kills ever made by the B-52. The first aerial victory was scored on 18 December 1972 by tail gunner Staff Sgt Samuel Turner, who was awarded the Silver Star. The second took place on 24 December 1972, when A1C Albert E. Moore downed a MiG-21 over the Thai Nguyen railroad yards. Both actions occurred during Operation Linebacker II, also known as the Christmas Bombings. These air-to-air kills were not confirmed by VPAF. The biggest threat to North Vietnam during the war had always been the Strategic Air Command's B-52 bombers. Hanoi's MiG-17 and MiG-19 interceptors could not deal with the B-52s at their flying altitude. 
In the summer of 1972, the VPAF was directed to train 12 MiG-21 pilots for the specific mission of shooting down B-52 bombers, with two-thirds of the pilots specifically trained in night attacks. On 26 December 1972, just two days after tail gunner Albert Moore downed a MiG-21, a VPAF MiG-21MF (number 5121) from the 921st Fighter Regiment, flown by Major Phạm Tuân over Hanoi, claimed the first aerial combat kill of a B-52. The B-52 had been over Hanoi when Major Tuân launched two Atoll missiles from 2 kilometres away and claimed to have destroyed one of the bombers flying in the three-plane formation. Other sources argue that the Atoll missiles failed to hit their mark, but that as he was disengaging, a B-52 from a three-bomber cell in front of his target took a hit from a surface-to-air missile (SAM), exploding in mid-air: this may have caused Tuân to think his missiles destroyed the target he had been aiming for. The Vietnamese claimed another kill on 28 December 1972 by a MiG-21 from the 921st FR, this time flown by Vu Xuan Thieu. Thieu is said to have perished in the explosion of a B-52 hit by his own missiles, having approached the target too closely. In this case, the Vietnamese version appears to be erroneous: while one MiG-21 kill was claimed by Phantoms that night (this may have been Thieu's MiG), no B-52s were lost for any reason on the date of the claimed kill. Year-by-year kill claims involving MiG-21s 1966: U.S. claimed six MiG-21s destroyed; North Vietnam claimed seven F-4 Phantom IIs and 11 F-105 Thunderchiefs shot down by MiG-21s. 1967: U.S. claimed 21 MiG-21s destroyed; North Vietnam claimed 17 F-105 Thunderchiefs, 11 F-4 Phantom IIs, two RF-101 Voodoos, one A-4 Skyhawk, one Vought F-8 Crusader, one EB-66 Destroyer and three unidentified types shot down by MiG-21s. 1968: U.S. claimed nine MiG-21s destroyed; North Vietnam claimed 17 US aircraft shot down by MiG-21s. 1969: U.S. destroyed three MiG-21s; one Ryan Firebee UAV destroyed by a MiG-21. 1970: U.S. destroyed two MiG-21s; North Vietnam claimed one F-4 Phantom and one CH-53 Sea Stallion helicopter shot down by MiG-21s. 1972: U.S. claimed 51 MiG-21s destroyed; North Vietnam claimed 53 US aircraft shot down by MiG-21s, including two B-52 Stratofortress bombers. Soviet General Fesenko, the main Soviet adviser to the North Vietnamese Air Force in 1972, recorded 34 MiG-21s destroyed in 1972. According to the VPAF, in 1972 it lost 29 MiG-21s, 5 MiG-19s and 16 MiG-17s in air combat. On 3 January 1968, a single MiG-21 pilot, Ha Van Chuc, entered battle with 36 American planes and claimed one F-105 Thunderchief. During the war, the VPAF claimed 103 F-4 Phantoms were shot down by MiG-21s, and that it lost 60 MiG-21s in air combat (54 by Phantoms). According to Russian data, the VPAF MiG-21s claimed 165 air victories, with the loss of 65 aircraft (including a few to accident or friendly fire) and 16 pilots. MiG-21 pilot losses were the lowest of any aircraft type. Arab–Israeli conflicts The MiG-21 was also used extensively in Middle Eastern conflicts of the 1960s, 1970s and 1980s by the Egyptian Air Force, Syrian Air Force and Iraqi Air Force. The MiG-21 first encountered Israeli Mirage IIICJs on 14 November 1964, but it was not until 14 July 1966 that the first MiG-21 was shot down. Another six Syrian MiG-21s were shot down by Israeli Mirages on 7 April 1967.
MiG-21s also faced McDonnell Douglas F-4 Phantom IIs and Douglas A-4 Skyhawks, but were later outclassed by the more modern McDonnell Douglas F-15 Eagle and General Dynamics F-16 Fighting Falcon, both acquired by Israel starting in the mid-1970s. During this period, Syrian pilots flying MiG-21s also independently discovered the Cobra maneuver, which became a standard defensive maneuver under the name "zero speed maneuver" (Arabic: مناورة السرعة صفر). During the opening attacks of the 1967 Six-Day War, the Israeli Air Force (IAF) struck Arab air forces in four attack waves. In the first wave, Israeli pilots claimed to have destroyed eight Egyptian aircraft in air-to-air combat, of which seven were MiG-21s; Egypt claimed five kills scored by MiG-21PFs. During the second wave, Israel claimed four more MiG-21s downed in air-to-air combat, with the third wave resulting in claimed air victories over two Syrian and one Iraqi MiG-21. The fourth wave destroyed many more Syrian MiG-21s on the ground. Overall, Egypt lost around 100 of the roughly 110 MiG-21s it had, almost all on the ground; Syria lost 35 of its 60 MiG-21F-13s and MiG-21PFs in the air and on the ground. Between the end of the Six-Day War and the start of the War of Attrition, IAF Mirage fighters scored six confirmed kills of Egyptian MiG-21s, and Egyptian MiG-21s scored two confirmed and three probable kills against Israeli aircraft. From the end of the Six-Day War to the end of the War of Attrition, Israel claimed a total of 25 destroyed Syrian MiG-21s; the Syrians claimed three confirmed and four probable kills of Israeli aircraft, although Israel denied these. High losses to Israeli aircraft and continuous bombing during the War of Attrition caused Egypt to ask the Soviet Union for help. In March 1970, Soviet pilots and SAM crews arrived with their equipment. On 13 April, during an air battle over the Red Sea coast, Soviet MiG-21MFs, according to some data, shot down two Israeli F-4 fighters. On 18 April, an Israeli RF-4E "Phantom" reconnaissance aircraft was damaged by a Soviet MiG-21MF. On 16 May, an Israeli aircraft was shot down in air combat, probably by a Soviet MiG-21. On 22 June 1970, a Soviet pilot flying a MiG-21MF shot down an Israeli A-4E. After that, several more successful intercepts were carried out by Soviet pilots and another Israeli A-4 was shot down on 25 June. In response, Israel planned an ambush, calling it Operation Rimon 20. On 30 July, Israeli F-4s lured Soviet MiG-21s into an area where they were ambushed by Israeli Mirages. Asher Snir, flying a Mirage IIICJ, destroyed a Soviet MiG-21; Avihu Ben-Nun and Aviam Sela, both piloting F-4Es, each got a kill, and an unidentified pilot in another Mirage scored a fourth kill against a Soviet-flown MiG-21; the IAF suffered only a damaged Mirage. Three Soviet pilots were killed and the Soviet Union was alarmed by the losses. Yet though it was a morale-boosting achievement, Rimon 20 did not change the course of the war. After the operation, other IAF aircraft were lost to Soviet MiG-21s and SAMs. A week later, on 7 August, the Soviets responded by deploying more aircraft to Egypt and luring Israeli fighter jets into an ambush of their own, "Operation Kavkaz", downing two Israeli Mirage IIICJs. In all, between March and August 1970, Soviet MiG-21 pilots and SAM crews destroyed 21 Israeli aircraft (eight by SA-3 missile systems and 13 by MiG-21s) at a cost of five MiG-21s shot down by the IAF, helping to convince the Israelis to sign a ceasefire.
In September 1973, a large air battle erupted between Syria and Israel; Israel claimed a total of 12 Syrian MiG-21s destroyed, while Syria claimed eight kills scored by MiG-21s and admitted five losses. During the Yom Kippur War, Israel claimed 73 kills against Egyptian MiG-21s (65 confirmed). Egypt claimed 27 confirmed kills and eight probables against Israeli aircraft by its MiG-21s. However, according to most Israeli sources, these were exaggerated claims, as Israeli air-to-air combat losses for the entire war did not exceed fifteen. On the Syrian front, 6 October 1973 saw a flight of Syrian MiG-21MFs shoot down an Israeli A-4E and Mirage IIICJ, losing three of their own to Israeli IAI Neshers. On 7 October, Syrian MiG-21MFs downed two Israeli F-4Es, three Mirage IIICJs and an A-4E while losing two of their MiGs to Neshers and one to an F-4E, as well as two to friendly SAM fire. Iraqi MiG-21PFs also operated on this front, and on that same day destroyed two A-4Es while losing one MiG. On 8 October 1973, Syrian MiG-21PFMs downed three F-4Es, but six of their MiG-21s were lost. By the end of the war, Syrian MiG-21s claimed a total of 30 confirmed kills against Israeli aircraft; 29 MiG-21s were claimed (26 confirmed) as destroyed by the IDF. Later, on 26 April 1974, an unusual incident involving Pakistani fighter pilot Flight Lieutenant Sattar Alvi took place while he was on deputation to the No. 67A Squadron of the Syrian Air Force. Alvi, flying a Syrian MiG-21F-13 (Serial No. 1863) out of Syria's Al-Dumayr Air Base with a fellow PAF pilot, was on aerial patrol near the Golan Heights when he spotted two Israeli Mirage-IIICJs intruding into Syrian airspace. According to modern Pakistani sources, Alvi and his flight leader engaged them, and after a brief dogfight, shot down one of the Mirages, flown by Captain M. Lutz. The Israeli pilot later succumbed to wounds he sustained during ejection. However, no major sources from the time reported on such an incident, and there is no mention of "Captain Lutz" in Israel's Ministry of Defense's record of Israel's casualties of war. Between the end of the Yom Kippur War and the start of the 1982 Lebanon War, Israel received modern F-15s and F-16s, which were far superior to the old Syrian MiG-21MFs. According to the IDF, these new aircraft shot down 24 Syrian MiG-21s over this period, though Syria did claim five IAF kills by MiG-21s armed with outdated K-13 missiles; Israel denied that it had suffered any losses. The 1982 Lebanon War began on 6 June 1982, and during the conflict the IAF claimed to have destroyed about 45 Syrian MiG-21MFs. Syria confirmed the loss of 37 MiG-21s, including 24 MiG-21bis and 10 MiG-21MF downed and 2 MiG-21bis and 1 MiG-21MF written off. Syria claimed two confirmed and 15 probable kills of Israeli aircraft. Two Israeli F-15s and one F-4 were damaged in combat with MiG-21s. In the largest air battle since the Korean War, one Israeli F-15 was heavily damaged by a Syrian MiG-21 firing an R-60 missile, but was able to make it back to base for repairs. Syrian civil war Beginning in July 2012, at which point the Syrian civil war had lasted a year without aerial action, the Syrian Air Force started operations against Syrian insurgents. MiG-21s were among the first combat-ready aircraft used in bombings, rocket attacks and strafing runs, with numerous videos showing the attacks. The rebels had access to heavy machine guns, various anti-aircraft guns and Russian and Chinese MANPADS, up to modern designs such as the FN-6.
The first loss of a MiG-21 during the Syrian civil war was recorded on 30 August 2012. The MiG, registration number 2271, was likely downed by heavy machine gun fire on takeoff or landing at Abu al-Duhur Military Airbase, then under siege by rebels. A few days later, on 4 September 2012, another MiG-21 (registration number 2280) was shot down in similar circumstances at the same base, also likely on takeoff or landing, by rebels using KPV 14.5 mm machine guns; the downing was recorded on video. On 10 November 2014, Syrian Air Force MiG-21bis number 2204 was shot down, and its pilot killed, by rebels using either a MANPADS or anti-aircraft guns, near the town of Sabburah, 45 km east of Hama Airbase where it was likely based. Video and photo evidence of the crash site later emerged. Four months after a MiG-23 was shot down, during which time the Syrian Air Force suffered no losses to enemy fire, one of its MiG-21s was shot down on 12 March 2016 by the Jaysh al-Nasr faction over Hama near Kafr Nabudah. While the Syrian Observatory for Human Rights, as suggested by video evidence, reported that the warplane had been downed by two MANPADS, Jaysh al-Nasr militants claimed to have shot it down with anti-aircraft guns. The pilot appeared to have bailed out of the stricken MiG, but died from ground fire or other causes. On 4 March 2017, a Syrian MiG-21bis from No. 679 squadron, operating out of Hama Airbase and piloted by Col. Mohammad Sawfan, was shot down by Ahrar al-Sham rebels, crashing in Turkish territory near the border. Col. Sawfan successfully ejected but was arrested and taken to a hospital in Antakya, Turkey. A recording of communications between the pilot and the ground controller clearly showed Sawfan's disorientation due to a malfunctioning compass, followed by a failure of the entire navigation system. He could not find his way back to base as ordered and inadvertently flew within range of rebel anti-aircraft guns. After being suspended for a number of years, Sawfan was allowed to return to service. Libyan–Egyptian War Egypt received American Sidewinder missiles, fitting them to its MiG-21s and successfully using them in combat against Libyan Mirages and MiG-23s during the brief Egyptian–Libyan War of July 1977. Iran–Iraq War During the Iran–Iraq War, 23 Iraqi MiG-21s were shot down by Iranian F-14s, as confirmed by Iranian, Western and Iraqi sources, and another 29 Iraqi MiG-21s were downed by F-4s. However, from 1980 to 1988, Iraqi MiG-21s shot down 43 Iranian fighter aircraft. Libya Libyan Civil War (2011) Libyan MiG-21s saw limited service during the 2011 Libyan civil war. On 15 March 2011, one MiG-21bis and one MiG-21UM flown by defecting Libyan Air Force pilots flew from Ghardabiya Airbase near Sirte to Benina Airport to join the rebellion's Free Libyan Air Force. On 17 March 2011, the MiG-21UM experienced a technical fault and crashed after taking off from Benina. Libyan Civil War (2014–2020) In the Second Libyan Civil War (2014–2020), the Libyan National Army, under the command of Khalifa Haftar, is loyal to the legislative body in Tobruk, the Libyan House of Representatives, which was internationally recognised until October 2015. It fights against the now internationally recognized Government of National Accord and the Shura Council of Benghazi Revolutionaries, as well as the Islamic State in Libya, which is a common enemy of both the Government of National Accord and the Libyan National Army. Both the Libyan National Army and the Government of National Accord field small air forces.
As such, a number of former Libyan Arab Air Force (LARAF) MiG-21s were returned to service with the Tobruk-based Libyan National Army, thanks to spare parts and technical assistance from Egypt and Russia, while a number of former Egyptian Air Force MiG-21s were pressed into service as well. MiG-21s under the control of the Libyan House of Representatives have been used extensively to bomb forces loyal to the rival General National Congress in Benghazi during the 2014 Libyan Civil War. On 29 August 2014, an LNA MiG-21bis, serial number 208, after a bombing mission over Derna, crashed in Bayda according to an official statement as a result of a technical failure of the plane, while Islamist Shura Council of Benghazi Revolutionaries claimed it was shot down. The pilot did not eject and died in the crash. On 2 September 2014 an LNA MiG-21bis, serial number 800, crashed in a city block of Tobruk, due to pilot error during a pull-up maneuver. It is unclear whether the pilot had been on a bombing mission on the way to Derna, further East, or had been performing an aerial ceremony for the MiG-21 pilot who died a few days earlier. Part of the 2019 Western Libya offensive, on 9 April 2019, a Libyan National Army MiG-21 made a low altitude diving rocket attack, probably firing S-24 rockets on Mitiga airport in Tripoli, making limited damages to one of the runways. On 14 April 2019, a Libyan National Army MiG-21MF was shot down by a surface-to-air missile, probably a MANPADS fired by the forces of the Libyan Government of National Accord (GNA) south of Tripoli. Video evidence confirmed the MiG-21 came under fire from anti-aircraft guns, small arms and two SAMs, one of which apparently hit the target. The pilot, Colonel Jamal Ben Amer ejected safely and recovered to LNA-held territory by a Mi-35 helicopter. LNA sources confirmed the loss but blamed a technical problem. Horn of Africa During the Ogaden War of 1977–78, Ethiopian Air Force F-5As engaged Somali Air Force MiG-21MFs in combat on several occasions. In one lopsided incident, two F-5As piloted by Israeli advisers or mercenaries engaged four MiG-21MFs. The MiGs were handled incompetently by the Somali pilots, and the F-5As destroyed two while the surviving pilots collided with each other avoiding an AIM-9. Ethiopia claimed to have shot down 10 Somali MiG-21MFs; while Somalia also claimed to have destroyed several Ethiopian MiG-21MFs, three F-5Es, one Canberra bomber and three Douglas DC-3s. Ethiopian MiG-21s were deployed largely in the ground attack role, and proved instrumental during the final offensive against Somali ground forces. Ethiopian pilots who had flown both the F-5E and the MiG-21 and received training in both the US and the USSR considered the F-5 to be the superior fighter because of its manoeuvrability at low to medium speeds, its superior instrumentation and the fact that it was far easier to fly, allowing the pilot to focus on combat rather than controlling his airplane. This effect was enhanced by the poor quality of pilot training provided by the Soviets, which provided limited flight time and focussed exclusively on taking off and landing, with no practical training in air combat. Angola During Angola's long-running civil war, MiG-21s of the Cuban Air Force were frequently deployed to attack ground targets manned by rebel forces or engage South African Air Force Mirage F1s conducting cross-border strikes. 
Most MiG-21 losses over Angola were attributed to accurate ground fire, such as an example downed by National Union for the Total Independence of Angola (UNITA) insurgents near Luena with an American FIM-92 Stinger. Despite extensive losses to man-portable air-defense systems, MiG-21s were instrumental during the Battle of Cuito Cuanavale; Cuban pilots became accustomed to flying up to three sorties a day. Both the MiG-21MF and the MiG-21bis were deployed almost exclusively in the fighter/bomber role. As interceptors, they were somewhat unsuccessful due to their inability to detect low-flying South African aircraft. On 6 November 1981, a Mirage F1CZ achieved South Africa's first confirmed air-to-air kill since the Korean War when it destroyed Cuban Lieutenant Danacio Valdez's MiG-21MF with 30mm cannon fire. On 5 October 1982, Mirages escorting an English Electric Canberra on routine reconnaissance over Cahama were engaged by at least two MiG-21bis. A South African radar operator picked up the attacking MiGs and was able to alert the Mirage pilots in advance, instructing them to change course immediately. As they jettisoned their auxiliary tanks, however, they were pinpointed by the Cubans, who opened pursuit. In a vicious dogfight, SAAF Major John Rankin closed range and maneuvered into the MiGs' rear cones. From there, one of his two R.550 Magic missiles impacted directly behind the lead MiG and forced it down. The second aircraft, piloted by Lieutenant Raciel Marrero Rodriguez, could not detect the Mirage's proximity until it had entered his turn radius and was perforated by Rankin's autocannon. This damaged MiG-21 landed safely at Lubango. Contacts between MiG-21s and SAAF Mirage F1s or Mirage IIIs became increasingly common throughout the 1980s. Between 1984 and 1988, thirteen MiG-21s were lost over Angola. On 9 August 1984, a particularly catastrophic accident occurred when the 9th Fighter Training Squadrons and the 12th Fighter Squadrons of the Cuban Air Force attempted to carry out an exercise in poor weather. A single MiG-21bis and three MiG-23s were lost. On 14 December 1988, an Angolan Air Force MiG-21bis, serial number C340, strayed off course and being low on fuel executed an emergency landing on an open field in South West Africa, modern-day Namibia, where it was seized by local authorities. Since Angola did not request its return after the South African Border War, the MiG was restored by Atlas Aviation and until September 2017 it was displayed at Swartkops Air Force Base, Pretoria. The jet was returned to Angola, flying in an Angolan Il-76 cargo plane, as a sign of goodwill on 15 September 2017. Democratic Republic of the Congo The MiG-21MFs of the 25th Fighter Aviation Regiment of the National Air Force of Angola flew ground sorties during the Second Congo War, sometimes being piloted by mercenaries. Some six MiG-21s were imported into the country during the First Congo War for the Congo Air Force, but do not appear to have seen operational service. (Cooper and Weinert, "African MiGs: Volume 1: Angola to Ivory Coast"). Yugoslavia Yugoslavia purchased its first batch of MiG-21s in 1962 from the Soviet Union. From 1962 to the early 1980s, Yugoslavia purchased 261 MiG-21s, of ten different variants. There were 41 MiG-21f-13, 36 MiG-21PfM, 25 MiG-21M, 6 MiG-21MF, 46 MiG-21bis, 45 MiG-21bisK, 12 MiG-21R, 18 MiG-21U, 25 MiG-21UM, and 7 MiG-21US. 
Yugoslav Air Force units that operated MiG-21s were the 204th Fighter-Aviation Regiment at Batajnica Air Base (126th, 127th and 128th fighter-aviation squadrons), the 117th Fighter-Aviation Regiment at Željava Air Base (124th and 125th fighter-aviation squadrons and the 352nd reconnaissance squadron), the 83rd Fighter-Aviation Regiment at Slatina Air Base (123rd and 130th fighter-aviation squadrons), the 185th Fighter-Bomber-Aviation Regiment (129th fighter-aviation squadron) at Pula, and the 129th training center at Batajnica Air Base. During the early stages of the 1990s Yugoslav Wars, the Yugoslav military used MiG-21s in a ground-attack role, while Croatian and Slovene forces did not yet have air forces at that point in the conflict. Aircraft from air bases in Slovenia, Croatia, and Bosnia and Herzegovina were relocated to air bases in Serbia. Detailed records show at least seven MiG-21s were shot down by anti-aircraft defenses in Croatia and Bosnia. A MiG-21 piloted by a Serbian Yugoslav Air Force pilot shot down an EC helicopter in 1992. Croatia acquired three MiG-21s in 1992 through defections by Croatian pilots serving with the JNA, two of which were lost in subsequent actions – one to Serbian air defenses, the other in a friendly-fire accident. In 1993, Croatia purchased about 40 MiG-21s in violation of an arms embargo, but only about 20 of these entered service, while the rest were used for spare parts. Croatia used them alongside the sole remaining defector aircraft for ground-attack missions in operations Flash (during which one was lost) and Storm. The only air-to-air action for Croatian MiGs was an attempt by two of them to intercept Soko J-22 Oraos of Republika Srpska's air force on a ground-attack mission on 7 August 1995. After some maneuvering, both sides disengaged without firing. The remaining Yugoslav MiG-21s were flown to Serbia by 1992 and continued their service in the newly created Federal Republic of Yugoslavia. During the 1999 NATO bombing of Yugoslavia, three MiG-21s were destroyed on the ground. The type continued to serve with the Serbian Air Force until 25 September 2020, when the country's last active MiG-21 crashed in the village of Brasina, near Mali Zvornik, killing both pilots. The aircraft, a MiG-21UM that left the assembly line in December 1986, was the last MiG-21 ever produced in the Soviet Union. Romania In 1962, the Romanian Air Force (RoAF) received its first 12 MiG-21F-13s, followed by another 12 of the same variant in 1963. Deliveries continued over the following years with other variants: 38 aircraft of the MiG-21RFM (PF) variant in 1965, 7 MiG-21U-400/600 in 1965–1968, 56 MiG-21RFMM (PFM) in 1966–1968, 12 MiG-21R in 1968–1972, 68 MiG-21M plus 11 MiG-21US in 1969–1970, 74 MiG-21MF/MF-75 in 1972–1975, and 27 MiG-21UM in 1972–1980 plus another 5 of the same variant in 1990, for a total of 322 aircraft. From 1993, Russia no longer offered the RoAF spare parts for its MiG-23s and MiG-29s. In this context, Romania chose to modernize its MiG-21s with Elbit Systems, these fighters being easier to maintain. In 1995–2002, a total of 111 MiG-21s were modernized: 71 M and MF/MF-75 airframes were upgraded to the LanceR A standard (for ground attack), 14 UM airframes to the LanceR B standard (trainer), and another 26 MF/MF-75 airframes to the LanceR C standard (air superiority). Today, only 36 LanceRs remain operational with the RoAF. The LanceR can use both Western and Eastern armament, such as the R-60M, R-73, Magic 2, or Python III missiles.
The MiG-21s are to be retired in 2024, once another two F-16 squadrons become ready following the purchase of 32 more F-16s from Norway. The first F-16 squadron was completed in 2021 with the arrival of the last F-16 purchased from Portugal. Despite being one of the newest MiG-21 fleets in service, the Romanian MiG-21 LanceR fleet was grounded due to difficulties maintaining the aircraft, and since 1996 it has had an accident rate of over 30 per 100,000 flight hours. Serviceability rates below 50% are not uncommon. The Romanian Air Force has suffered numerous incidents in recent years with its MiG-21s. On 12 June 2017, a MiG-21 crashed in Constanța County, with the pilot, Adrian Stancu, managing to escape in time. On 7 July 2018, Florin Rotaru died while piloting a MiG-21 that suffered technical difficulties during an airshow in Borcea attended by some 3,000 spectators, choosing to steer the stricken aircraft away from the crowd rather than eject in time. On 20 April 2021, during a training flight, a MiG-21 crashed in an uninhabited zone in Mureș County. The pilot, Andrei Criste, managed to eject safely and survived the crash. On 2 March 2022, a MiG-21 LanceR crashed in adverse weather conditions near the village of Gura Dobrogei, Cogealac commune. On 15 April 2022, the RoAF suspended all MiG-21 LanceR flights due to the high rate of accidents, and announced that it planned to speed up the acquisition of the ex-Norwegian F-16s. On 23 May, the LanceRs resumed flights for a period of one year, until 15 May 2023. On 15 May 2023, retirement ceremonies were held for the aircraft at the 71st Air Base and at the 86th Air Base. From there, the MiG-21s took off for their final destination, the 95th Air Base. Bulgaria The Bulgarian Air Force received a total of 224 MiG-21 aircraft. From September 1963 the 19th Fighter Regiment of the Air Force received 12 MiG-21F-13s. Later, some of these aircraft were converted for reconnaissance as MiG-21F-13Rs, which were transferred to the 26th Reconnaissance Regiment in 1988. In January 1965 the 18th Fighter Regiment received a squadron of 12 MiG-21PFs, some of which were also converted and used as reconnaissance aircraft (designated MiG-21PFR). The 26th Regiment's reconnaissance aircraft from this squadron were removed from service in 1991. The 15th Fighter Regiment received another 12 MiG-21PF fighters in 1965 and in 1977–1978 operated another 36 refurbished aircraft. This unit received two more aircraft in 1984 and operated them until 1992. For reconnaissance, a regiment received 26 specialized reconnaissance MiG-21Rs in 1962, and in 1969–1970 the 19th Fighter Aviation Regiment received 15 MiG-21M aircraft, which later operated with the 21st Fighter Aviation Regiment and were removed from active service in 1990. An additional 12 MiG-21MF fighters were received in 1974–1975, with a reconnaissance version, the MiG-21MFR, provided to the 26th Reconnaissance Regiment and used until 2000, when it was removed from active service. From 1983 to 1990, the Bulgarian Air Force received 72 MiG-21bis. Of these, 30 (six new, the rest renovated) were fitted with ACS and provided to the 19th Fighter Regiment; the rest were equipped with the "Lazur" system. This batch was taken out of service in 2000. Besides fighters, the Air Force received 39 trainers: one MiG-21U in 1966, five MiG-21US in 1969–1970, 27 new MiG-21UM during 1974–1980, and another six refurbished ex-Soviet examples in 1990. In 1982, three MiG-21UM trainers were sold to Cambodia, followed by another 10 examples in 1994.
MiG-21UMs were also sold to India. Other training aircraft were removed from active service in 2000. A total of 38 aircraft were lost in the period 1963–2000. The last flight of a Bulgarian Air Force MiG-21 took off from Graf Ignatievo Air Base on 31 December 2015. On 18 December 2015, there was an official ceremony for the retirement of the type from active duty. Known MiG-21 aces Several pilots have attained ace status (five or more aerial victories/kills) while flying the MiG-21. Nguyễn Văn Cốc of the VPAF, who scored nine kills in MiG-21s, is regarded as the most successful. Twelve other VPAF pilots were credited with five or more aerial victories while flying the MiG-21: Phạm Thanh Ngân, Nguyễn Hồng Nhị and Mai Văn Cường (eight kills each); Đặng Ngọc Ngự (seven kills); Vũ Ngọc Đỉnh, Nguyễn Ngọc Độ, Nguyễn Nhật Chiêu, Lê Thanh Đạo, Nguyễn Đăng Kỉnh, Nguyễn Đức Soát, and Nguyễn Tiến Sâm (six kills each); and Nguyễn Văn Nghĩa (five kills). Additionally, three Syrian pilots are known to have attained ace status while flying the MiG-21: M. Mansour recorded five solo kills (with one additional probable), B. Hamshu scored five solo kills, and A. el-Gar tallied four solo and one shared kill, all during the 1973–1974 engagements against Israel. Due to the incomplete nature of available records, several pilots have unconfirmed aerial victories (probable kills) which, if confirmed, would give them ace status: S. A. Razak of the Iraqi Air Force, with four known kills scored during the Iran–Iraq War (until 1991 sometimes referred to as the Persian Gulf War), and A. Wafai of the Egyptian Air Force, with four known kills against Israel. Variants Operators Current operators This list does not include operators of Chinese copies or licence-manufactured versions, known as the Chengdu J-7/F-7. National Air Force of Angola – 23 in service as of 2023. Azerbaijani Air Forces – 5 in service as of 2023. Cuban Revolutionary Air and Air Defense Force – 11 aircraft in service as of 2023. Guinea Air Force – 3 in service as of 2023. Indian Air Force – 36 in service as of November 2023. The MiG-21FL version was withdrawn in December 2013, and the MiG-21PF (MiG-21FL or Type 77) in January 2014. The upgraded MiG-21bis has already been retired by the IAF. All remaining variants will be withdrawn by 2025. Libyan Air Force – 12 aircraft in service as of 2023. Malian Air Force – 9 aircraft in service as of December 2023. Mozambique Air Force – 8 aircraft, comprising 6 MiG-21bis and 2 MiG-21UM trainers, in service as of 2023. Korean People's Air Force – 26 in service as of 2023. Sudanese Air Force – 4 in service as of 2023. Syrian Arab Air Force – 50 in service as of 2024. Yemeni Air Force – 19 in service as of 2023. Former operators Royal Afghan Air Force – 50 MiG-21F-13s and MiG-21Us. Afghan Air Force – 46 MiG-21MFs and MiG-21UMs, and 40 MiG-21bis; several were shot down or destroyed on the ground during the Second Afghan Civil War. Taliban – about 20 MiG-21s in 2000, used in the ground-attack role. National Islamic Movement of Afghanistan – about 30 MiG-21s in 2000. Algerian Air Force. Bangladesh Air Force – 10 MiG-21MFs and 2 MiG-21UBs were gifted to Bangladesh in the 1970s by the Soviet Union; the type was the BAF's first supersonic fighter. They were operated from 1973 and all were retired in the 2000s, replaced by F-7s.
Belarusian Air Force Bulgarian Air Force Burkina Faso Air Force Royal Cambodian Air Force Congolese Air Force – 14, in storage. Croatian Air Force – in November 2024, the last remaining MiG-21s (four fighters and two trainers) were retired. Czechoslovak People's Air Force – passed on to the Czech Republic and Slovakia. Czech Air Force Air Forces of the National People's Army – passed on to Germany after reunification. Egyptian Air Force Eritrean Air Force Ethiopian Air Force Finnish Air Force Guinea-Bissau Air Force Luftwaffe Georgian Air Force Hungarian Air Force Indonesian Air Force Iranian Air Force – purchased 12 ex-East German MiG-21PFMs plus four MiG-21Us for training purposes. However, only two MiG-21Us were delivered, the others being embargoed after German reunification; Iran currently has 17 Chengdu J-7s for training purposes. Iraqi Air Force – operated during Saddam Hussein's era. Israeli Air Force – acquired as part of Operation Diamond; currently in the Israeli Air Force Museum. Military of ISIL – captured 19 (1 operational). Originally three were in operational condition; the Syrian Air Force claimed to have shot down two of them. Other airframes are in various states of disrepair and some of them were being overhauled at the time of their capture. Kyrgyzstan Air and Air Defence Force Lao People's Liberation Army Air Force Malagasy Air Force Mongolian Air Force Namibian Air Force Nigerian Air Force People's Liberation Army Air Force – replaced by the Chengdu J-7, a license-built version of the MiG-21. In addition to MiG-21F-13s supplied by the Soviet Union, China also traded J-7 export variants for a small number of MiG-21MFs, then developed the J-7C/D variants based on the MiG-21MF; the deal was between China and a certain Middle Eastern country. In May 2013, an official publication from Chengdu Aircraft Corporation reported that J-7 production had ceased after decades of manufacturing variations of this Chinese-made MiG-21. Polish Air Force Polish Naval Aviation Romanian Air Force – officially retired on 15 May 2023. Russian Air Force Serbian Air Force and Air Defence – retired from service in May 2021, replaced by the MiG-29SM+. Air Force of Serbia and Montenegro – passed on to Serbia. Slovak Air Force – in 1993, with the dissolution of Czechoslovakia, the Slovak Air Force obtained 13 MiG-21MAs, 36 MiG-21MFs, eight MiG-21Rs, two MiG-21USs and 11 MiG-21UMs. They were withdrawn in 2003. Some were put on display and placed in museums across the country; others were scrapped. Somali Air Force. Soviet Union – passed to successor states after the dissolution of the Soviet Union: Soviet Air Force, Soviet Air Defence Force, Soviet Naval Aviation. Tanzanian Air Force Turkmen Air Force Ugandan Air Force. United States Air Force – retired after evaluation flights under "Have Doughnut" and aggressor squadron duty. Ukrainian Air Force Vietnam People's Air Force – retired from service in November 2015 and put in temporary storage while the Air Force searches for a replacement, possibly the Sukhoi Su-35 or even the American F-16. Yugoslav Air Force – passed on to Serbia and Montenegro. Zaire – four were sold to the Zairean government by Yugoslavia but never flew. Zambian Air Force Civilian operators According to the United States Federal Aviation Administration (FAA), there were 44 privately owned MiG-21s in the U.S. in 2012. By 2013, Draken International had acquired 30 MiG-21bis/UM aircraft, mostly from Poland. In 2017, it operated 30 MiGs.
Use as suborbital space launch platform In 2012, Premier Space Systems in Hillsboro, Oregon, US, conducted flight tests for NanoLaunch, a project to launch suborbital sounding rockets from MiG-21s flying over the Pacific Ocean. The company was dissolved in 2018. Specifications (MiG-21bis)
Detonator
A detonator is a device used to make an explosive or explosive device explode. Detonators come in a variety of types, depending on how they are initiated (chemically, mechanically, or electrically) and on the details of their inner working, which often involves several stages. Types of detonators include non-electric and electric. Non-electric detonators are typically stab or pyrotechnic, while electric detonators are typically "hot wire" (low voltage), exploding bridgewire (high voltage) or exploding foil (very high voltage). The original electric detonators, invented in 1875 independently by Julius Smith and Perry Gardiner, used mercury fulminate as the primary explosive. Around the turn of the century, performance of the Smith-Gardiner blasting cap was enhanced by the addition of 10–20% potassium chlorate. This compound was later superseded by others, such as lead azide, lead styphnate, sometimes with added aluminium, or materials such as DDNP (diazodinitrophenol), partly to reduce the amount of lead emitted into the atmosphere by mining and quarrying operations. A small amount of TNT or tetryl is also often used in military detonators, and PETN in commercial detonators. History The first blasting cap or detonator was demonstrated in 1745, when British physician and apothecary William Watson showed that the electric spark of a friction machine could ignite black powder, by igniting a flammable substance mixed in with the black powder. In 1750, Benjamin Franklin in Philadelphia made a commercial blasting cap consisting of a paper tube full of black powder, with wires leading in from both sides and wadding sealing up the ends. The two wires came close but did not touch, so a large electric spark discharge between the two wires would fire the cap. In 1832, a hot wire detonator was produced by American chemist Robert Hare, although attempts along similar lines had earlier been made by the Italians Volta and Cavallo. Hare constructed his blasting cap by passing a multistrand wire through a charge of gunpowder inside a tin tube; he had cut all but one fine strand of the multistrand wire so that the fine strand would serve as the hot bridgewire. When a strong current from a large battery (which he called a "deflagrator" or "calorimotor") was passed through the fine strand, it became incandescent and ignited the charge of gunpowder. In 1863, Alfred Nobel realized that although nitroglycerin could not be detonated by a fuse, it could be detonated by the explosion of a small charge of gunpowder, which in turn was ignited by a fuse. Within a year, he was adding mercury fulminate to the gunpowder charges of his detonators, and by 1867 he was using small copper capsules of mercury fulminate, triggered by a fuse, to detonate nitroglycerin. In 1868, Henry Julius Smith of Boston introduced a cap that combined a spark gap ignitor and mercury fulminate, the first electric cap able to detonate dynamite. In 1875 Smith, and then in 1887 Perry G. Gardner of North Adams, Massachusetts, developed electric detonators that combined a hot wire detonator with mercury fulminate explosive. These were the first generally modern type blasting caps. Modern caps use different explosives and separate primary and secondary explosive charges, but are generally very similar to the Gardner and Smith caps. Smith also invented the first satisfactory portable power supply for igniting blasting caps: a high-voltage magneto that was driven by a rack and pinion, which in turn was driven by a T-handle that was pushed downwards.
Electric match caps were developed in the early 1900s in Germany, and spread to the US in the 1950s when ICI International purchased Atlas Powder Co. These match caps have become the predominant world standard cap type. Purpose The need for detonators such as blasting caps came from the development of safer secondary and tertiary explosives. Secondary and tertiary explosives are typically initiated by an explosive train starting with the detonator. For safety, detonators and the main explosive device are typically joined only just before use. Design A detonator is usually a multi-stage device with three parts: in the first stage, the initiation means (fire, electricity, etc.) provides enough energy (as heat or mechanical shock) to activate an easy-to-ignite primary explosive; this in turn detonates a small amount of a more powerful secondary explosive in direct contact with the primary, called the "base" or "output" explosive, which carries the detonation through the casing of the detonator to the main explosive device and activates it. Explosives commonly used as primaries in detonators include lead azide, lead styphnate, tetryl, and DDNP. Early blasting caps also used silver fulminate, but it has been replaced with cheaper and safer primary explosives. Silver azide is still used occasionally, but very rarely due to its high price. It is possible to construct a Non-Primary Explosive Detonator (NPED), in which the primary explosive is replaced by a flammable but non-explosive mixture that propagates a shock wave along a tube into the secondary explosive. NPEDs are harder to trigger accidentally by shock and can avoid the use of lead. As the secondary "base" or "output" explosive, TNT or tetryl is typically found in military detonators and PETN in commercial detonators. While detonators make explosive handling safer, they are themselves hazardous to handle: despite their small size, they contain enough explosive to injure people, and untrained personnel might not recognize them as explosives or might wrongly deem them harmless because of their appearance and handle them without the required care. Types Ordinary detonators usually take the form of ignition-based explosives. While they are mainly used in commercial operations, ordinary detonators are still used in military operations. This form of detonator is most commonly initiated using a safety fuse and is used in non-time-critical detonations, e.g. conventional munitions disposal. Well-known primary explosives used in such detonators are lead azide [Pb(N3)2], silver azide [AgN3] and mercury fulminate [Hg(ONC)2]. There are three categories of electrical detonators: instantaneous electrical detonators (IED), short-period delay detonators (SPD) and long-period delay detonators (LPD). SPD delays are measured in milliseconds and LPD delays in seconds. In situations where nanosecond accuracy is required, specifically in the implosion charges of nuclear weapons, exploding-bridgewire detonators are employed. The initial shock wave is created by vaporizing a length of thin wire with an electric discharge. A newer development is the slapper detonator, which uses thin plates accelerated by an electrically exploded wire or foil to deliver the initial shock. It is in use in some modern weapons systems. A variant of this concept is used in mining operations, where the foil is exploded by a laser pulse delivered to the foil by optical fiber.
A non-electric detonator is a shock tube detonator designed to initiate explosions, generally for the purpose of demolition of buildings and for use in the blasting of rock in mines and quarries. Instead of electric wires, a hollow plastic tube delivers the firing impulse to the detonator, making it immune to most of the hazards associated with stray electric current. It consists of a small-diameter, three-layer plastic tube coated on the innermost wall with a reactive explosive compound which, when ignited, propagates a low-energy signal, similar to a dust explosion. The reaction travels at approximately 6,500 ft/s (2,000 m/s) along the length of the tubing with minimal disturbance outside of the tube. Non-electric detonators were invented by the Swedish company Nitro Nobel in the 1960s and 1970s, and launched on the demolitions market in 1973. In civil mining, electronic detonators offer better delay precision. Electronic detonators are designed to provide the precise control necessary to produce accurate and consistent blasting results in a variety of blasting applications in the mining, quarrying, and construction industries. Electronic detonators may be programmed in millisecond or sub-millisecond increments using a dedicated programming device. Wireless electronic detonators are beginning to be available in the civil mining market. Encrypted radio signals are used to communicate the blast signal to each detonator at the correct time. While currently expensive, wireless detonators can enable new mining techniques, as multiple blasts can be loaded at once and fired in sequence without putting humans in harm's way. A number 8 test blasting cap is one containing 2 grams of a mixture of 80 percent mercury fulminate and 20 percent potassium chlorate, or a blasting cap of equivalent strength. An equivalent-strength cap comprises 0.40–0.45 grams of PETN base charge pressed in an aluminum shell, with bottom thickness not to exceed 0.03 of an inch, to a specific gravity of not less than 1.4 g/cc, and primed with standard weights of primer depending on the manufacturer. Blasting caps The oldest and simplest type of cap, fuse caps are a metal cylinder closed at one end. From the open end inwards, there is first an empty space into which a pyrotechnic fuse is inserted and crimped, then a pyrotechnic ignition mix, a primary explosive, and then the main detonating explosive charge. The primary hazard of pyrotechnic blasting caps is that, for proper usage, the fuse must be inserted and then crimped into place by crushing the base of the cap around the fuse. If the tool used to crimp the cap is applied too close to the explosives, the primary explosive compound can detonate during crimping. A common hazardous practice is crimping caps with one's teeth; an accidental detonation can cause serious injury to the mouth. Fuse-type blasting caps are still in active use today. They are the safest type to use around certain types of electromagnetic interference, and they have a built-in time delay as the fuse burns down. Solid pack electric blasting caps use a thin bridgewire in direct contact (hence "solid pack") with a primary explosive, which is heated by electric current and causes the detonation of the primary explosive. That primary explosive then detonates a larger charge of secondary explosive. Some solid pack caps incorporate a small pyrotechnic delay element, up to a few hundred milliseconds, before the cap fires.
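The timing figures above lend themselves to a simple back-of-the-envelope check. The following minimal Python sketch estimates the transit delay contributed by a given length of shock tube, using the roughly 2,000 m/s propagation speed quoted above; the tube lengths in the example are arbitrary illustrative values, not data from any real blast design.

SHOCK_TUBE_SPEED_M_PER_S = 2000.0  # approximate signal propagation speed quoted above

def transit_delay_ms(tube_length_m):
    # Time (in milliseconds) for the firing impulse to traverse the tube.
    return tube_length_m / SHOCK_TUBE_SPEED_M_PER_S * 1000.0

for length in (5.0, 25.0, 100.0):  # metres of tube; hypothetical example values
    print(f"{length:6.1f} m of shock tube -> ~{transit_delay_ms(length):5.1f} ms transit delay")

At this speed even a long run of tube adds only tens of milliseconds, which suggests why deliberate firing delays are normally provided by delay elements or electronic timing rather than by tube length alone.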
Match-type blasting caps use an electric match (an insulating sheet with electrodes on both sides and a thin bridgewire soldered across the sides, all dipped in ignition and output mixes) to initiate the primary explosive, rather than direct contact between the bridgewire and the primary explosive. The match can be manufactured separately from the rest of the cap and only assembled at the end of the process. Match-type caps are now the most common type found worldwide. The exploding-bridgewire detonator was invented in the 1940s as part of the Manhattan Project to develop nuclear weapons. The design goal was to produce a detonator which functioned very rapidly and predictably. Both match- and solid pack-type electric caps take a few milliseconds to fire, as the bridgewire heats up and heats the explosive to the point of detonation. Exploding bridgewire or EBW detonators use a higher-voltage electric charge and a very thin bridgewire, 0.04 inch long and 0.0016 inch in diameter (1 mm long, 0.04 mm diameter). Instead of heating up the explosive, the EBW bridgewire is heated so quickly by the high firing current that the wire actually vaporizes and explodes due to electric resistance heating. That electrically driven explosion causes the low-density initiating explosive (usually PETN) to detonate, which in turn detonates a higher-density secondary explosive (typically RDX or HMX) in many EBW designs. In addition to firing very quickly when properly initiated, EBW detonators are much less vulnerable than blasting caps to stray static electricity and other electric currents. Enough current will melt the bridgewire, but it cannot detonate the initiator explosive without the full high-voltage, high-current charge passing through the bridgewire. EBW detonators are used in many civilian applications where radio signals, static electricity, or other electrical hazards might cause accidents with conventional electric detonators. Exploding foil initiators (EFI), also known as slapper detonators, are an improvement on EBW detonators. Slappers, instead of directly using the exploding foil to detonate the initiator explosive, use the electrical vaporization of the foil to drive a small circle of insulating material, such as PET film or Kapton, down a circular hole in an additional disc of insulating material. At the far end of that hole is a pellet of high-density secondary explosive. Slapper detonators omit the low-density initiating explosive used in EBW designs, and they require much greater energy density than EBW detonators to function, making them inherently safer. Laser initiation of explosives, propellants or pyrotechnics has been attempted in three different ways: (1) direct interaction with the HE, or Direct Optical Initiation (DOI); (2) rapid heating of a thin film in contact with a HE; and (3) ablating a thin metal foil to produce a high-velocity flyer plate that impacts the HE (laser flyer).
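Because the bridgewire dimensions are given above, its cold resistance can be estimated with the familiar R = ρL/A relation. The sketch below is purely illustrative: the assumption that the wire is gold, and the room-temperature resistivity value used, are not taken from this article.

import math

rho = 2.44e-8        # ohm*m; assumed resistivity of gold at about 20 C (illustrative assumption)
length = 1.0e-3      # m; 1 mm bridgewire length, from the text
diameter = 0.04e-3   # m; 0.04 mm bridgewire diameter, from the text

area = math.pi * (diameter / 2.0) ** 2   # cross-sectional area
resistance = rho * length / area         # R = rho * L / A
print(f"Estimated cold bridgewire resistance: {resistance * 1000:.1f} milliohms")
# Roughly 20 milliohms: so low that only a very large, very fast current pulse
# can deposit enough energy to vaporize the wire rather than merely warm it.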
Tannin
Tannins (or tannoids) are a class of astringent, polyphenolic biomolecules that bind to and precipitate proteins and various other organic compounds, including amino acids and alkaloids. The term tannin is widely applied to any large polyphenolic compound containing sufficient hydroxyls and other suitable groups (such as carboxyls) to form strong complexes with various macromolecules. The term tannin (from scientific French tannin, from French tan "crushed oak bark", tanner "to tan", cognate with English tanning, Medieval Latin tannare, from Proto-Celtic *tannos "oak") refers to the abundance of these compounds in oak bark, which was used in tanning animal hides into leather. The tannin compounds are widely distributed in many species of plants, where they play a role in protection from predation (acting as pesticides) and might help in regulating plant growth. The astringency from the tannins is what causes the dry and puckery feeling in the mouth following the consumption of unripened fruit, red wine or tea. Likewise, the destruction or modification of tannins with time plays an important role when determining harvesting times. Tannins have molecular weights ranging from 500 to over 3,000 (gallic acid esters) and up to 20,000 daltons (proanthocyanidins). Structure and classes of tannins There are three major classes of tannins, distinguished by the base unit, or monomer, from which they are built. Particularly in the flavone-derived tannins, the base unit must be (additionally) heavily hydroxylated and polymerized in order to give the high-molecular-weight polyphenol motif that characterizes tannins. Typically, tannin molecules require at least 12 hydroxyl groups and at least five phenyl groups to function as protein binders. Oligostilbenoids (oligo- or polystilbenes) are oligomeric forms of stilbenoids and constitute a minor class of tannins. Pseudo-tannins Pseudo-tannins are low-molecular-weight compounds associated with other compounds. They do not change color during the goldbeater's skin test, unlike hydrolysable and condensed tannins, and cannot be used as tanning compounds. History Ellagic acid, gallic acid, and pyrogallic acid were first discovered by chemist Henri Braconnot in 1831. Julius Löwe was the first person to synthesize ellagic acid, by heating gallic acid with arsenic acid or silver oxide. Maximilian Nierenstein studied natural phenols and tannins found in different plant species. Working with Arthur George Perkin, he prepared ellagic acid from algarobilla and certain other fruits in 1905. He suggested its formation from galloyl-glycine by Penicillium in 1915. Tannase is an enzyme that Nierenstein used to produce m-digallic acid from gallotannins. He proved the presence of catechin in cocoa beans in 1931. He showed in 1945 that luteic acid, a molecule present in myrobalanitannin, a tannin found in the fruit of Terminalia chebula, is an intermediary compound in the synthesis of ellagic acid. At that time, molecular formulas were determined through combustion analysis. The discovery in 1943 by Martin and Synge of paper chromatography provided for the first time the means of surveying the phenolic constituents of plants and for their separation and identification. There was an explosion of activity in this field after 1945, including prominent work by Edgar Charles Bate-Smith and Tony Swain at Cambridge University.
In 1966, Edwin Haslam proposed a first comprehensive definition of plant polyphenols based on the earlier proposals of Bate-Smith, Swain and Theodore White, which includes specific structural characteristics common to all phenolics having a tanning property. It is referred to as the White–Bate-Smith–Swain–Haslam (WBSSH) definition. Occurrence Tannins are distributed in species throughout the plant kingdom. They are commonly found in both gymnosperms and angiosperms. Mole (1993) studied the distribution of tannin in 180 families of dicotyledons and 44 families of monocotyledons (Cronquist classification). Most dicotyledon families contain tannin-free species (tested by their ability to precipitate proteins). The best-known families in which all species tested contain tannin are the Aceraceae, Actinidiaceae, Anacardiaceae, Bixaceae, Burseraceae, Combretaceae, Dipterocarpaceae, Ericaceae, Grossulariaceae and Myricaceae among the dicots, and the Najadaceae and Typhaceae among the monocots. In the oak family, Fagaceae, 73% of the species tested contain tannin; among the acacias, Mimosaceae, only 39% of the species tested contain tannin; among the Solanaceae the rate drops to 6%, and to 4% for the Asteraceae. Some families, like the Boraginaceae, Cucurbitaceae and Papaveraceae, contain no tannin-rich species. The most abundant polyphenols are the condensed tannins, found in virtually all families of plants and comprising up to 50% of the dry weight of leaves. Cellular localization In all vascular plants studied, tannins are manufactured by a chloroplast-derived organelle, the tannosome. Tannins are mainly located physically in the vacuoles or surface wax of plants. These storage sites keep tannins active against plant predators but also keep some tannins from affecting plant metabolism while the plant tissue is alive. Tannins are classified as ergastic substances, i.e., non-protoplasmic materials found in cells. Tannins, by definition, precipitate proteins; they must therefore be stored in organelles able to withstand the protein-precipitation process. Idioblasts are isolated plant cells which differ from neighboring tissues and contain non-living substances. They have various functions, such as storage of reserves, excretory materials, pigments and minerals; they may contain oil, latex, gum, resin or pigments, and they can also contain tannins. In Japanese persimmon (Diospyros kaki) fruits, tannin is accumulated in the vacuoles of tannin cells, which are idioblasts of parenchyma cells in the flesh. Presence in soils The convergent evolution of tannin-rich plant communities has occurred on nutrient-poor acidic soils throughout the world. Tannins were once believed to function as anti-herbivore defenses, but more and more ecologists now recognize them as important controllers of decomposition and nitrogen cycling processes. As concern grows about global warming, there is great interest in better understanding the role of polyphenols as regulators of carbon cycling, in particular in northern boreal forests. Leaf litter and other decaying parts of kauri (Agathis australis), a tree species found in New Zealand, decompose much more slowly than those of most other species. Besides its acidity, the plant also bears substances such as waxes and phenols, most notably tannins, that are harmful to microorganisms. Presence in water and wood The leaching of highly water-soluble tannins from decaying vegetation and leaves along a stream may produce what is known as a blackwater river.
Water flowing out of bogs has a characteristic brown color from dissolved peat tannins. The presence of tannins (or humic acid) in well water can make it smell bad or taste bitter, but this does not make it unsafe to drink. Tannins leaching from an unprepared driftwood decoration in an aquarium can lower the pH and color the water a tea-like tinge. A way to avoid this is to boil the wood in water several times, discarding the water each time. Using peat as an aquarium substrate can have the same effect. Many hours of boiling the driftwood may need to be followed by many weeks or months of constant soaking and many water changes before the water will stay clear. Raising the water's pH level, e.g. by adding baking soda, will accelerate the process of leaching. Tannins in water can lead to feather staining on wild and domestic waterfowl which frequent the water; mute swans, which are typically white in colour, can often be observed with reddish-brown staining as a result of coming into contact with dissolved tannins, though dissolved iron compounds also play a role. Softwoods, while in general much lower in tannins than hardwoods, are usually not recommended for use in an aquarium, so using a hardwood with a very light color, indicating a low tannin content, can be an easy way to avoid tannins. Tannic acid is brown in color, so in general white woods have a low tannin content. Woods with a lot of yellow, red, or brown coloration (like cedar, redwood, red oak, etc.) tend to contain a lot of tannin. Extraction There is no single protocol for extracting tannins from all plant material; the procedures used for tannins are widely variable. It may be that acetone in the extraction solvent increases the total yield by inhibiting interactions between tannins and proteins during extraction, or even by breaking hydrogen bonds between tannin–protein complexes. Tests for tannins There are three groups of methods for the analysis of tannins: precipitation of proteins or alkaloids, reaction with phenolic rings, and depolymerization. Alkaloid precipitation Alkaloids such as caffeine, cinchonine, quinine or strychnine precipitate polyphenols and tannins. This property can be used in a quantitation method. Goldbeater's skin test When goldbeater's skin or ox skin is dipped in HCl, rinsed in water, soaked in the tannin solution for 5 minutes, washed in water, and then treated with 1% FeSO4 solution, it gives a blue-black color if tannin is present. Ferric chloride test The following describes the use of ferric chloride (FeCl3) tests for phenolics in general: powdered leaves of the test plant (1.0 g) are weighed into a beaker and 10 ml of distilled water are added. The mixture is boiled for five minutes. Two drops of 5% FeCl3 are then added. Production of a greenish precipitate is an indication of the presence of tannins. Alternatively, a portion of the water extract is diluted with distilled water in a ratio of 1:4 and a few drops of 10% ferric chloride solution are added. A blue or green color indicates the presence of tannins (Evans, 1989). Other methods The hide-powder method is used in tannin analysis for leather tannins and the Stiasny method for wood adhesives. Statistical analysis reveals that there is no significant relationship between the results from the hide-powder and the Stiasny methods. Hide-powder method 400 mg of sample tannins are dissolved in 100 ml of distilled water.
3 g of slightly chromated hide-powder, previously dried in vacuum for 24 h over CaCl2, are added and the mixture is stirred for 1 h at ambient temperature. The suspension is filtered without vacuum through a sintered glass filter. The weight gain of the hide-powder, expressed as a percentage of the weight of the starting material, is equated to the percentage of tannin in the sample. Stiasny's method 100 mg of sample tannins are dissolved in 10 ml distilled water. 1 ml of 10 M HCl and 2 ml of 37% formaldehyde are added and the mixture is heated under reflux for 30 min. The reaction mixture is filtered while hot through a sintered glass filter. The precipitate is washed with hot water (5 × 10 ml) and dried over CaCl2. The yield of tannin is expressed as a percentage of the weight of the starting material. Reaction with phenolic rings The bark tannins of Commiphora angolensis have been revealed by the usual color and precipitation reactions and by quantitative determination by the methods of Löwenthal-Procter and of Deijs (formalin-hydrochloric acid method). Colorimetric methods also exist, such as the Neubauer-Löwenthal method, which uses potassium permanganate as an oxidizing agent and indigo sulfate as an indicator, originally proposed by Löwenthal in 1877. The difficulty is that establishing a titer for tannin is not always convenient, since it is extremely difficult to obtain pure tannin. Neubauer proposed to remove this difficulty by establishing the titer not with regard to the tannin but with regard to crystallised oxalic acid, whereby he found that 83 g of oxalic acid correspond to 41.20 g of tannin. Löwenthal's method has been criticized; for instance, the amount of indigo used is not sufficient to noticeably retard the oxidation of the non-tannin substances. The results obtained by this method are therefore only comparative. A modified method proposed in 1903 for the quantification of tannins in wine, Feldmann's method, makes use of calcium hypochlorite instead of potassium permanganate, together with indigo sulfate. Food items with tannins Pomegranates Accessory fruits Strawberries contain both hydrolyzable and condensed tannins. Berries Most berries, such as cranberries and blueberries, contain both hydrolyzable and condensed tannins. Nuts Nuts vary in the amount of tannins they contain. Some species of oak acorns contain large amounts. For example, acorns of Quercus robur and Quercus petraea in Poland were found to contain 2.4–5.2% and 2.6–4.8% tannins as a proportion of dry matter, but the tannins can be removed by leaching in water so that the acorns become edible. Other nuts – such as hazelnuts, walnuts, pecans, and almonds – contain lower amounts. Tannin concentration in the crude extract of these nuts did not directly translate to the same relationships for the condensed fraction. Herbs and spices Cloves, tarragon, cumin, thyme, vanilla, and cinnamon all contain tannins. Legumes Most legumes contain tannins. Red-colored beans contain the most tannins, and white-colored beans have the least. Peanuts without shells have a very low tannin content. Chickpeas (garbanzo beans) have a smaller amount of tannins. Chocolate Chocolate liquor contains about 6% tannins. Drinks with tannins Principal human dietary sources of tannins are tea and coffee. Most wines aged in charred oak barrels possess tannins absorbed from the wood. Soils high in clay also contribute to tannins in wine grapes. This concentration gives wine its signature astringency.
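The arithmetic behind these gravimetric and titration assays is simple proportionality, sketched below in Python. The sample masses follow the procedures described above (400 mg for the hide-powder method, 100 mg for Stiasny's method), as does the Neubauer equivalence of 83 g oxalic acid to 41.20 g tannin; the measured weight gain and precipitate mass are invented example numbers, not experimental data.

def hide_powder_tannin_pct(sample_mg, hide_powder_gain_mg):
    # Weight gained by the chromated hide powder, as a percentage of the starting sample.
    return 100.0 * hide_powder_gain_mg / sample_mg

def stiasny_tannin_pct(sample_mg, dried_precipitate_mg):
    # Mass of the formaldehyde/HCl precipitate, as a percentage of the starting sample.
    return 100.0 * dried_precipitate_mg / sample_mg

def tannin_equivalent_g(oxalic_acid_g):
    # Neubauer's equivalence for the Loewenthal titration: 83 g oxalic acid ~ 41.20 g tannin.
    return oxalic_acid_g * 41.20 / 83.0

print(hide_powder_tannin_pct(400.0, 148.0))  # 37.0 (% tannin, hypothetical weight gain)
print(stiasny_tannin_pct(100.0, 42.0))       # 42.0 (% tannin, hypothetical precipitate)
print(tannin_equivalent_g(1.0))              # ~0.496 g tannin per gram of oxalic acid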
Coffee pulp has been found to contain low to trace amounts of tannins. Fruit juices Although citrus fruits do not contain tannins, orange-colored juices often contain tannins from food colouring. Apple, grape and berry juices all contain high amounts of tannins. Sometimes tannins are even added to juices and ciders to create a more astringent taste. Beer In addition to the alpha acids extracted from hops to provide bitterness in beer, condensed tannins are also present. These originate both from the malt and from the hops. Trained brewmasters, particularly those in Germany, consider the presence of tannins to be a flaw. However, in some styles the presence of this astringency is acceptable or even desired, as, for example, in a Flanders red ale. In lager-type beers, the tannins can form a precipitate with specific haze-forming proteins in the beer, resulting in turbidity at low temperature. This chill haze can be prevented by removing part of the tannins or part of the haze-forming proteins. Tannins are removed using PVPP, and haze-forming proteins by using silica or tannic acid. Properties for animal nutrition Tannins have traditionally been considered antinutritional, though their effects depend upon their chemical structure and dosage. Many studies suggest that chestnut tannins have positive effects on silage quality in round bale silages, in particular by reducing non-protein nitrogen (NPN) at the lowest wilting level. Improved fermentability of soya meal nitrogen in the rumen may also occur. Condensed tannins inhibit herbivore digestion by binding to consumed plant proteins and making them more difficult for animals to digest, and by interfering with protein absorption and digestive enzymes (for more on that topic, see plant defense against herbivory). Histatins, another type of salivary protein, also precipitate tannins from solution, thus preventing alimentary adsorption. Legume fodders containing condensed tannins are a possible option for integrated sustainable control of gastrointestinal nematodes in ruminants, which may help address the worldwide development of resistance to synthetic anthelmintics. Other sources of condensed tannins include nuts, temperate and tropical barks, carob, coffee and cocoa. Tannin uses and market Tannins have been used since antiquity in the processes of tanning hides for leather, and in helping preserve iron artifacts (as with Japanese iron teapots). Industrial tannin production began at the beginning of the 19th century with the industrial revolution, to produce tanning material to meet the growing demand for leather. Before that time, tanning processes used plant material and were long (up to six months). There was a collapse in the vegetable tannin market in the 1950s–1960s due to the appearance of synthetic tannins, which had been invented in response to a scarcity of vegetable tannins during World War II. At that time, many small tannin industry sites closed. Vegetable tannins are estimated to be used in 10–20% of global leather production. The cost of the final product depends on the method used to extract the tannins, in particular on the use of solvents, alkali and other chemicals (for instance glycerin). For large quantities, the most cost-effective method is hot water extraction. Tannic acid is used worldwide as a clarifying agent in alcoholic drinks and as an aroma ingredient in both alcoholic and soft drinks or juices. Tannins from different botanical origins also find extensive uses in the wine industry. Uses Tannins are an important ingredient in the process of tanning leather.
Tanbark from oak, mimosa, chestnut and quebracho trees has traditionally been the primary source of tannery tannin, though inorganic tanning agents are also in use today and account for 90% of the world's leather production. Tannins produce different colors with ferric chloride (blue, blue-black, or green to greenish-black) according to the type of tannin. Iron gall ink is produced by treating a solution of tannins with iron(II) sulfate. Tannins can also be used as a mordant, and are especially useful in the natural dyeing of cellulose fibers such as cotton. The type of tannin used may or may not have an impact on the final color of the fiber. Tannin is a component in a type of industrial particleboard adhesive developed jointly by the Tanzania Industrial Research and Development Organization and Forintek Labs Canada. Pinus radiata tannins have been investigated for the production of wood adhesives. Condensed tannins, e.g., quebracho tannin, and hydrolyzable tannins, e.g., chestnut tannin, appear to be able to substitute for a high proportion of the synthetic phenol in phenol-formaldehyde resins for wood particleboard. Tannins can be used for the production of anti-corrosive primers for treating rusted steel surfaces prior to painting, converting rust to iron tannate and consolidating and sealing the surface. The use of resins made of tannins has been investigated for removing mercury and methylmercury from solution. Immobilized tannins have been tested for recovering uranium from seawater.
Trilobite
Trilobites (meaning "three-lobed entities") are extinct marine arthropods that form the class Trilobita. Trilobites form one of the earliest known groups of arthropods. The first appearance of trilobites in the fossil record defines the base of the Atdabanian stage of the Early Cambrian period, and they flourished throughout the lower Paleozoic before slipping into a long decline, when, during the Devonian, all trilobite orders except the Proetida died out. The last trilobites disappeared in the mass extinction at the end of the Permian about 251.9 million years ago. Trilobites were among the most successful of all early animals, existing in oceans for almost 270 million years, with over 22,000 species having been described. By the time trilobites first appeared in the fossil record, they were already highly diversified and geographically dispersed. Because trilobites had wide diversity and an easily fossilized mineralised exoskeleton, they left an extensive fossil record. The study of their fossils has facilitated important contributions to biostratigraphy, paleontology, evolutionary biology, and plate tectonics. Trilobites are placed within the clade Artiopoda, which includes many organisms that are morphologically similar to trilobites but are largely unmineralised. The relationship of Artiopoda to other arthropods is uncertain. Trilobites evolved into many ecological niches; some moved over the seabed as predators, scavengers, or filter feeders, and some swam, feeding on plankton. Some even crawled onto land. Most lifestyles expected of modern marine arthropods are seen in trilobites, with the possible exception of parasitism (where scientific debate continues). Some trilobites (particularly the family Olenidae) are even thought to have evolved a symbiotic relationship with sulfur-eating bacteria from which they derived food. The largest trilobites were more than long and may have weighed as much as . Evolution Trilobite relatives Trilobites belong to the Artiopoda, a group of extinct arthropods morphologically similar to trilobites, though only the trilobites had mineralised exoskeletons. Thus, other artiopodans are typically only found in exceptionally preserved deposits, mostly from the Cambrian period. The exact relationships of artiopodans to other arthropods are uncertain. They have been considered closely related to chelicerates (which include horseshoe crabs and arachnids) as part of a clade called Arachnomorpha, while others consider them to be more closely related to Mandibulata (which contains insects, crustaceans and myriapods) as part of a clade called Antennulata. Fossil record of early trilobites The earliest trilobites known from the fossil record are redlichiids and ptychopariid bigotinids dated to around 520 million years ago. Contenders for the earliest trilobites include Profallotaspis jakutensis (Siberia), Fritzaspis spp. (western USA), Hupetina antiqua (Morocco) and Serrania gordaensis (Spain). Trilobites appeared at roughly the same time in Laurentia, Siberia and West Gondwana. All Olenellina lack facial sutures (see below), and this is thought to represent the original state. The earliest sutured trilobite found so far (Lemdadella) occurs at almost the same time as the earliest Olenellina, suggesting that the trilobites' origin lies before the start of the Atdabanian, but without leaving fossils. Other groups show secondarily lost facial sutures, such as all Agnostina and some Phacopina.
Another common feature of the Olenellina also suggests that this suborder is the ancestral trilobite stock: early protaspid stages have not been found, supposedly because these were not calcified, and this too is thought to represent the original state. Earlier trilobites may yet be found and could shed more light on their origins. Three specimens of a trilobite from Morocco, Megistaspis hammondi, dated at 478 million years old, contain fossilized soft parts. In 2024, researchers discovered soft tissues and other structures, including the labrum, in well-preserved trilobite specimens from Cambrian Stage 4 of Morocco, providing new anatomical information regarding the external and internal morphology of trilobites; such extraordinary preservation is probably due to the animals' rapid death and burial by an underwater pyroclastic flow. Divergence and extinction Trilobites saw great diversification over time. For such a long-lasting group of animals, it is no surprise that trilobite evolutionary history is marked by a number of extinction events in which some groups perished and surviving groups diversified to fill ecological niches with comparable or unique adaptations. Generally, trilobites maintained high diversity levels throughout the Cambrian and Ordovician periods before entering a drawn-out decline in the Devonian, culminating in the final extinction of the last few survivors at the end of the Permian period. Evolutionary trends Principal evolutionary trends from primitive morphologies, such as that exemplified by Eoredlichia, include the origin of new types of eyes, improvement of enrollment and articulation mechanisms, increased size of the pygidium (micropygy to isopygy), and development of extreme spinosity in certain groups. Changes also included narrowing of the thorax and increasing or decreasing numbers of thoracic segments. Specific changes to the cephalon are also noted: variable glabella size and shape, position of eyes and facial sutures, and hypostome specialization. Several morphologies appeared independently within different major taxa (e.g. eye reduction or miniaturization). Effacement, the loss of surface detail in the cephalon, pygidium, or the thoracic furrows, is also a common evolutionary trend. Notable examples of this were the orders Agnostida and Asaphida, and the suborder Illaenina of the Corynexochida. Effacement is believed to be an indication of either a burrowing lifestyle or a pelagic one. Effacement poses a problem for taxonomists, since the loss of details (particularly of the glabella) can make the determination of phylogenetic relationships difficult. Cambrian Although it has historically been suggested that trilobites originated during the Precambrian, this is no longer supported, and it is thought that trilobites originated shortly before they appeared in the fossil record. Very shortly after trilobite fossils appeared in the lower Cambrian, they rapidly diversified into the major orders that typified the Cambrian: Redlichiida, Ptychopariida, Agnostida, and Corynexochida. The first major crisis in the trilobite fossil record occurred in the Middle Cambrian; surviving orders developed isopygous or macropygous bodies and thicker cuticles, allowing better defense against predators (see Thorax below). The end-Cambrian mass extinction event marked a major change in trilobite fauna; almost all Redlichiida (including the Olenelloidea) and most Late Cambrian stocks became extinct.
A continuing decrease in Laurentian continental shelf area is recorded at the same time as the extinctions, suggesting major environmental upheaval. Notable trilobite genera appearing in the Cambrian include Abadiella (Lower Cambrian), Buenellus (Lower Cambrian), Judomia (Lower Cambrian), Olenellus (Lower Cambrian), Ellipsocephalus (Middle Cambrian), Elrathia (Middle Cambrian), Paradoxides (Middle Cambrian), Peronopsis (Middle Cambrian), Xiuqiella (Middle Cambrian), Yiliangella (Middle Cambrian), Yiliangellina (Middle Cambrian), and Olenus (Late Cambrian). Ordovician The Early Ordovician is marked by vigorous radiations of articulate brachiopods, bryozoans, bivalves, echinoderms, and graptolites, with many groups appearing in the fossil record for the first time. Although intra-species trilobite diversity seems to have peaked during the Cambrian, trilobites were still active participants in the Ordovician radiation event, with a new fauna taking over from the old Cambrian one. Phacopida and Trinucleioidea are characteristic forms, highly differentiated and diverse, most with uncertain ancestors. The Phacopida and other "new" clades almost certainly had Cambrian forebears, but the fact that they have avoided detection is a strong indication that novel morphologies were developing very rapidly. Changes within the trilobite fauna during the Ordovician foreshadowed the mass extinction at the end of the Ordovician, allowing many families to continue into the Silurian with little disturbance. Ordovician trilobites were successful at exploiting new environments, notably reefs. The Ordovician mass extinction did not leave the trilobites unscathed; some distinctive and previously successful forms such as the Telephinidae and Agnostida became extinct. The Ordovician marks the last great diversification period amongst the trilobites: very few entirely new patterns of organisation arose post-Ordovician. Later evolution in trilobites was largely a matter of variations upon the Ordovician themes. By the Ordovician mass extinction, vigorous trilobite radiation had stopped and a gradual decline lay ahead. Some of the trilobite genera appearing in the Ordovician include Cyclopyge (Early to Late Ordovician), Selenopeltis (Early to Late Ordovician), Parabolina (Early Ordovician), Cheirurus (Middle Ordovician), Eodalmanitina (Middle Ordovician), Trinucleus (Middle Ordovician), and Triarthrus (Late Ordovician). Silurian and Devonian Most Early Silurian families constitute a subgroup of the Late Ordovician fauna. Few, if any, of the dominant Early Ordovician fauna survived to the end of the Ordovician, yet 74% of the dominant Late Ordovician trilobite fauna survived the Ordovician. Late Ordovician survivors account for all post-Ordovician trilobite groups except the Harpetida. Silurian and Devonian trilobite assemblages are superficially similar to Ordovician assemblages, dominated by Lichida and Phacopida (including the well-known Calymenina). A number of characteristic forms do not extend far into the Devonian, and almost all the remainder were wiped out by a series of dramatic Middle and Late Devonian extinctions. Three orders and all but five families were exterminated by the combination of sea level changes and a break in the redox equilibrium (a meteorite impact has also been suggested as a cause). Only a single order, the Proetida, survived into the Carboniferous.
Genera of trilobites during the Silurian and Devonian periods include:

Dalmanites (Early to Late Silurian)
Calymene (Silurian)
Encrinurus (Silurian)
Exallaspis (Middle to Late Silurian)
Paralejurus (Early Devonian)
Lioharpes (Early-Middle Devonian)
Phacops (Middle to Late Devonian)

Carboniferous and Permian

The Proetida survived for millions of years, continued through the Carboniferous period and lasted until the end of the Permian (when the vast majority of species on Earth were wiped out). It is unknown why the order Proetida alone survived the Devonian. The Proetida maintained relatively diverse faunas in both deep- and shallow-water shelf environments throughout the Carboniferous. For many millions of years the Proetida existed untroubled in their ecological niche. An analogy would be today's crinoids, which mostly exist as deep-water species; in the Paleozoic era, vast 'forests' of crinoids lived in shallow near-shore environments. Some of the genera of trilobites during the Carboniferous and Permian periods include:

Archegonus (Early to Middle Carboniferous)
Hesslerides (Middle Carboniferous)
Endops (Middle Permian)
Triproetus (Late Carboniferous to Early Permian)
Ditomopyge (Late Carboniferous to Late Permian)
Pseudophillipsia (Late Carboniferous to Late Permian)

Final extinction

Exactly why the trilobites became extinct is not clear; with repeated extinction events (often followed by apparent recovery) throughout the trilobite fossil record, a combination of causes is likely. After the extinction event at the end of the Devonian period, what trilobite diversity remained was bottlenecked into the order Proetida. Decreasing diversity of genera limited to shallow-water shelf habitats, coupled with a drastic lowering of sea level (regression), meant that the final decline of trilobites happened shortly before the end-Permian mass extinction event. With so many marine species involved in the Permian extinction, the end of nearly 300 million successful years for the trilobites would not have been unexpected at the time.

Fossil distribution

Trilobites appear to have been primarily marine organisms, since the fossilized remains of trilobites are always found in rocks containing fossils of other salt-water animals such as brachiopods, crinoids, and corals. Some trackways suggest trilobites made at least temporary excursions onto land. Within the marine paleoenvironment, trilobites were found in a broad range from extremely shallow water to very deep water. Trilobites, like brachiopods, crinoids, and corals, are found on all modern continents, and occupied every ancient ocean from which Paleozoic fossils have been collected. The remnants of trilobites can range from the preserved body to pieces of the exoskeleton, which they shed in the process known as ecdysis. In addition, the tracks left behind by trilobites living on the sea floor are often preserved as trace fossils. There are three main forms of trace fossils associated with trilobites: Rusophycus, Cruziana and Diplichnites—such trace fossils represent the preserved life activity of trilobites active upon the sea floor. Rusophycus, the resting trace, comprises trilobite excavations involving little or no forward movement; ethological interpretations suggest resting, protection and hunting. Cruziana, the feeding trace, consists of furrows through the sediment, which are believed to represent the movement of trilobites while deposit feeding. Many of the Diplichnites fossils are believed to be traces made by trilobites walking on the sediment surface.
Care must be taken, as similar trace fossils are recorded in freshwater and post-Paleozoic deposits and represent non-trilobite origins. Trilobite fossils are found worldwide, with thousands of known species. Because they appeared quickly in geological time, and moulted like other arthropods, trilobites serve as excellent index fossils, enabling geologists to date the age of the rocks in which they are found. They were among the first fossils to attract widespread attention, and new species are being discovered every year. In the United States, the best open-to-the-public collection of trilobites is located in Hamburg, New York. The shale quarry, informally known as Penn Dixie, stopped mining in the 1960s. Large numbers of trilobites were discovered there in the 1970s by Dan Cooper. A well-known rock collector, Cooper sparked scientific and public interest in the location. The fossils are dated to the Givetian (387.2–382.7 million years ago), when the Western New York region lay 30 degrees south of the equator and was completely covered in water. The site was purchased from Vincent C. Bonerb by the Town of Hamburg with the cooperation of the Hamburg Natural History Society to protect the land from development. In 1994, when the society received 501(c)(3) status, the quarry became Penn Dixie Fossil Park & Nature Reserve and was opened for visitation and collection of trilobite samples. The two most commonly found samples are Eldredgeops rana and Greenops. A famous location for trilobite fossils in the United Kingdom is Wren's Nest, Dudley, in the West Midlands, where Calymene blumenbachii is found in the Silurian Wenlock Group. This trilobite is featured on the town's coat of arms and was named the Dudley Bug or Dudley Locust by quarrymen who once worked the now abandoned limestone quarries. Llandrindod Wells, Powys, Wales, is another famous trilobite location. The well-known Elrathia kingi trilobite is found in abundance in the Cambrian Wheeler Shale of Utah. Spectacularly preserved trilobite fossils, often showing soft body parts (legs, gills, antennae, etc.), have been found in British Columbia, Canada (the Cambrian Burgess Shale and similar localities); New York, U.S.A. (Ordovician Walcott–Rust quarry, near Russia, and Beecher's Trilobite Bed, near Rome); China (Lower Cambrian Maotianshan Shales near Chengjiang); Germany (the Devonian Hunsrück Slates near Bundenbach) and, much more rarely, in trilobite-bearing strata in Utah (Wheeler Shale and other formations), Ontario, and Manuels River, Newfoundland and Labrador. Sites in Morocco also yield very well-preserved trilobites, many of them buried alive in mudslides and therefore exquisitely preserved. An industry has developed around their recovery, leading to controversies about restoration practices. The variety of eye forms, upper-body forms and fragile protuberances is best known from these specimens, preserved much as bodies were at Pompeii. The French palaeontologist Joachim Barrande (1799–1883) carried out his landmark study of trilobites in the Cambrian, Ordovician and Silurian of Bohemia, publishing the first volume of Système silurien du centre de la Bohême in 1852.

Importance

The study of Paleozoic trilobites in the Welsh-English borders by Niles Eldredge was fundamental in formulating and testing punctuated equilibrium as a mechanism of evolution.
Identification of the 'Atlantic' and 'Pacific' trilobite faunas in North America and Europe implied the closure of the Iapetus Ocean (producing the Iapetus suture), thus providing important supporting evidence for the theory of continental drift. Trilobites have been important in estimating the rate of speciation during the period known as the Cambrian explosion because they are the most diverse group of metazoans known from the fossil record of the early Cambrian. Trilobites are excellent stratigraphic markers of the Cambrian period: researchers who find trilobites with alimentary prosopon, and a micropygium, have found Early Cambrian strata. Most of the Cambrian stratigraphy is based on the use of trilobite marker fossils. Trilobites are the state fossils of Ohio (Isotelus), Wisconsin (Calymene celebra) and Pennsylvania (Phacops rana). Taxonomy The 10 most commonly recognized trilobite orders are Agnostida, Redlichiida, Corynexochida, Lichida, Odontopleurida, Phacopida, Proetida, Asaphida, Harpetida and Ptychopariida. In 2020, an 11th order, Trinucleida, was proposed to be elevated out of the asaphid superfamily Trinucleioidea. Sometimes the Nektaspida are considered trilobites, but these lack a calcified exoskeleton and eyes. Some scholars have proposed that the order Agnostida is polyphyletic, with the suborder Agnostina representing non-trilobite arthropods unrelated to the suborder Eodiscina. Under this hypothesis, Eodiscina would be elevated to a new order, Eodiscida. Over 20,000 species of trilobite have been described. Despite their rich fossil record with thousands of described genera found throughout the world, the taxonomy and phylogeny of trilobites have many uncertainties. Except possibly for the members of the orders Phacopida and Lichida (which first appear during the early Ordovician), nine of the eleven trilobite orders appear prior to the end of the Cambrian. Most scientists believe that order Redlichiida, more specifically its suborder Redlichiina, contains a common ancestor of all other orders, with the possible exception of the Agnostina. While many potential phylogenies are found in the literature, most have suborder Redlichiina giving rise to orders Corynexochida and Ptychopariida during the Lower Cambrian, and the Lichida descending from either the Redlichiida or Corynexochida in the Middle Cambrian. Order Ptychopariida is the most problematic order for trilobite classification. In the 1959 Treatise on Invertebrate Paleontology, what are now members of orders Ptychopariida, Asaphida, Proetida and Harpetida were grouped together as order Ptychopariida; subclass Librostoma was erected in 1990 to encompass all of these orders, based on their shared ancestral character of a natant (unattached) hypostome. The most recently recognized of the nine trilobite orders, Harpetida, was erected in 2002. The progenitor of order Phacopida is unclear. Morphology When trilobites are found, only the exoskeleton is preserved (often in an incomplete state) in all but a handful of locations. A few locations () preserve identifiable soft body parts (legs, gills, musculature & digestive tract) and enigmatic traces of other structures (e.g. fine details of eye structure) as well as the exoskeleton. Of the 20,000 known species only 38 have fossils with preserved appendages. Trilobites range in length from minute (less than ) to very large (over ), with an average size range of . Supposedly the smallest species is Acanthopleurella stipulae with a maximum of . 
The world's largest-known trilobite specimen, assigned to Isotelus rex is in length. It was found in 1998 by Canadian scientists in Ordovician rocks on the shores of Hudson Bay. However, a partial specimen of the Ordovician trilobite Hungioides bohemicus found in 2009 in Arouca, Portugal is estimated to have measured when complete in length. Only the upper (dorsal) part of their exoskeleton is mineralized, composed of calcite and calcium phosphate minerals in a lattice of chitin, and is curled round the lower edge to produce a small fringe called the "doublure". Their appendages and soft underbelly were non-mineralized. Three distinctive tagmata (sections) are present: cephalon (head); thorax (body) and pygidium (tail). Terminology As might be expected for a group of animals comprising genera, the morphology and description of trilobites can be complex. Despite morphological complexity and an unclear position within higher classifications, there are a number of characteristics which distinguish the trilobites from other arthropods: a generally sub-elliptical, dorsal, chitinous exoskeleton divided longitudinally into three distinct lobes (from which the group gets its name); having a distinct, relatively large head shield (cephalon) articulating axially with a thorax comprising articulated transverse segments, the hindmost of which are almost invariably fused to form a tail shield (pygidium). When describing differences between trilobite taxa, the presence, size, and shape of the cephalic features are often mentioned. During moulting, the exoskeleton generally splits between the head and thorax, which is why so many trilobite fossils are missing one or the other. In most groups facial sutures on the cephalon helped facilitate moulting. Similar to lobsters and crabs, trilobites would have physically "grown" between the moult stage and the hardening of the new exoskeleton. Cephalon A trilobite's cephalon, or head section, is highly variable with a lot of morphological complexity. The glabella forms a dome underneath which sat the "crop" or "stomach". Generally, the exoskeleton has few distinguishing ventral features, but the cephalon often preserves muscle attachment scars and occasionally the hypostome, a small rigid plate comparable to the ventral plate in other arthropods. A toothless mouth and stomach sat upon the hypostome with the mouth facing backward at the rear edge of the hypostome. Hypostome morphology is highly variable; sometimes supported by an un-mineralised membrane (natant), sometimes fused onto the anterior doublure with an outline very similar to the glabella above (conterminant) or fused to the anterior doublure with an outline significantly different from the glabella (impendent). Many variations in shape and placement of the hypostome have been described. The size of the glabella and the lateral fringe of the cephalon, together with hypostome variation, have been linked to different lifestyles, diets and specific ecological niches. The anterior and lateral fringe of the cephalon is greatly enlarged in the Harpetida, in other species a bulge in the pre-glabellar area is preserved that suggests a brood pouch. Highly complex compound eyes are another obvious feature of the cephalon. Facial sutures Facial or cephalic sutures are the natural fracture lines in the cephalon of trilobites. Their function was to assist the trilobite in shedding its old exoskeleton during ecdysis (or molting). 
All species assigned to the suborder Olenellina, that became extinct at the very end of the Early Cambrian (like Fallotaspis, Nevadia, Judomia, and Olenellus) lacked facial sutures. They are believed to have never developed facial sutures, having pre-dated their evolution. Because of this (along with other primitive characteristics), they are thought to be the earliest ancestors of later trilobites. Some other later trilobites also lost facial sutures secondarily. The type of sutures found in different species are used extensively in the taxonomy and phylogeny of trilobites. Dorsal sutures The dorsal surface of the trilobite cephalon (the frontmost tagma, or the 'head') can be divided into two regions—the cranidium and the librigena ("free cheeks"). The cranidium can be further divided into the glabella (the central lobe in the cephalon) and the fixigena ("fixed cheeks"). The facial sutures lie along the anterior edge, at the division between the cranidium and the librigena. Trilobite facial sutures on the dorsal side can be roughly divided into five main types according to where the sutures end relative to the genal angle (the edges where the side and rear margins of the cephalon converge). Absent – Facial sutures are lacking in the Olenellina. This is considered a primitive state, and is always combined with the presence of eyes. Proparian – The facial suture ends in front of the genal angle, along the lateral margin. Example genera showing this type of suture include Dalmanites of Phacopina (Phacopida) and Ekwipagetia of Eodiscina (Agnostida). Gonatoparian – The facial suture ends at the tip of the genal angle. Example genera showing this type of suture include Calymene and Trimerus of Calymenina (Phacopida). Opisthoparian – The facial suture ends at the posterior margin of the cephalon. Example genera showing this type of suture include Peltura of Olenina (Ptychopariida) and Bumastus of Illaenina (Corynexochida). This is the most common type of facial suture. Hypoparian or marginal – In some trilobites, dorsal sutures may be secondarily lost. Several exemplary time series of species show the "migration" of the dorsal suture until it coincides with the margins of the cephalon. As the visual surface of the eye is on the diminishing free cheek (or librigena), the number of lenses tends to go down, and eventually the eye disappears. The loss of dorsal sutures may arise from the proparian state, such as in some Eodiscina like Weymouthia, all Agnostina, and some Phacopina such as Ductina. The marginal sutures exhibited by the harpetids and trinucleioids are derived from opisthoparian sutures. On the other hand, blindness is not always accompanied by the loss of facial sutures. The primitive state of the dorsal sutures is proparian. Opisthoparian sutures have developed several times independently. There are no examples of proparian sutures developing in taxa with opisthoparian ancestry. Trilobites that exhibit opisthoparian sutures as adults commonly have proparian sutures as instars (known exceptions being Yunnanocephalus and Duyunaspis). Hypoparian sutures have also arisen independently in several groups of trilobites. The course of the facial sutures from the front of the visual surface varies at least as strongly as it does in the rear, but the lack of a clear reference point similar to the genal angle makes it difficult to categorize. 
One of the more pronounced states is that in which the fronts of the facial sutures do not cut the lateral or frontal border on their own, but coincide in front of the glabella and cut the frontal border at the midline. This is, inter alia, the case in the Asaphida. Even more pronounced is the situation in which the frontal branches of the facial sutures end in each other, resulting in yoked free cheeks. This is known in Triarthrus, and in the Phacopidae, but in that family the facial sutures are not functional, as can be concluded from the fact that free cheeks are not found separated from the cranidium. There are also two types of sutures in the dorsal surface connected to the compound eyes of trilobites. They are:

Ocular sutures – sutures surrounding the edges of the compound eye. Trilobites with these sutures lose the entire surface of the eyes when molting. They are common among Cambrian trilobites.
Palpebral sutures – sutures which form part of the dorsal facial suture running along the top edges of the compound eye.

Ventral sutures

Dorsal facial sutures continue downward to the ventral side of the cephalon, where they become the connective sutures that divide the doublure. The following are the types of ventral sutures.

Connective sutures – the sutures that continue from the facial sutures past the front margin of the cephalon.
Rostral suture – only present when the trilobite possesses a rostrum (or rostral plate). It connects the rostrum to the front part of the dorsal cranidium.
Hypostomal suture – separates the hypostome from the doublure when the hypostome is of the attached type. It is absent when the hypostome is free-floating (i.e. natant). It is also absent in some conterminant hypostomes where the hypostome is fused to the doublure.
Median suture – exhibited by asaphid trilobites; formed when, instead of becoming connective sutures, the two dorsal sutures converge at a point in front of the cephalon and then divide straight down the center of the doublure.

Rostrum

The rostrum (or the rostral plate) is a distinct part of the doublure located at the front of the cephalon. It is separated from the rest of the doublure by the rostral suture. During molting in trilobites like Paradoxides, the rostrum is used to anchor the front part of the trilobite as the cranidium separates from the librigena. The opening created by the arching of the body provides an exit for the molting trilobite. It is absent in some trilobites like Lachnostoma.

Hypostome

The hypostome is the hard mouthpart of the trilobite, found on the ventral side of the cephalon, typically below the glabella. The hypostome can be classified into three types based on whether it is permanently attached to the rostrum or not, and whether it is aligned with the anterior dorsal tip of the glabella.

Natant – hypostome not attached to doublure; aligned with front edge of glabella.
Conterminant – hypostome attached to rostral plate of doublure; aligned with front edge of glabella.
Impendent – hypostome attached to rostral plate but not aligned with glabella.

Thorax

The thorax is a series of articulated segments that lie between the cephalon and pygidium. The number of segments varies between 2 and 103, with most species in the 2 to 16 range. Each segment consists of the central axial ring and the outer pleurae, which protected the limbs and gills. The pleurae are sometimes abbreviated or extended to form long spines.
Apodemes are bulbous projections on the ventral surface of the exoskeleton to which most leg muscles attached, although some leg muscles attached directly to the exoskeleton. Determining a junction between thorax and pygidium can be difficult, and many segment counts suffer from this problem.

Volvation

Trilobite fossils are often found "enrolled" (curled up) like modern pill bugs for protection; evidence suggests enrollment ("volvation") helped protect against the inherent weakness of the arthropod cuticle that was exploited by anomalocarid predators. The earliest evidence of volvation is a little over 510 million years old and has been found in Olenellidae, but these forms did not have any of the interlocking mechanisms found in later trilobites. Some trilobites achieved a fully closed capsule (e.g. Phacops), while others with long pleural spines (e.g. Selenopeltis) left a gap at the sides, and those with a small pygidium (e.g. Paradoxides) left a gap between the cephalon and pygidium. In Phacops, the pleurae overlap a smooth bevel (facet), allowing a close seal with the doublure. The doublure carries a Panderian notch or protuberance on each segment to prevent over-rotation and achieve a good seal. Even in an agnostid, with only 2 articulating thoracic segments, the process of enrollment required a complex musculature to contract the exoskeleton and return to the flat condition.

Pygidium

The pygidium is formed from a number of segments and the telson fused together. Segments in the pygidium are similar to the thoracic segments (bearing biramous limbs) but are not articulated. Trilobites can be described based on the pygidium being micropygous (pygidium smaller than cephalon), subisopygous (pygidium subequal to cephalon), isopygous (pygidium equal in size to cephalon), or macropygous (pygidium larger than cephalon).

Prosopon (surface sculpture)

Trilobite exoskeletons show a variety of small-scale structures collectively called prosopon. Prosopon does not refer to large-scale extensions of the cuticle (e.g. hollow pleural spines) but to finer-scale features, such as ribbing, domes, pustules, pitting, ridging and perforations. The exact purpose of the prosopon is not resolved, but suggestions include structural strengthening, sensory pits or hairs, preventing predator attacks and maintaining aeration while enrolled. In one example, alimentary ridge networks (easily visible in Cambrian trilobites) might have been either digestive or respiratory tubes in the cephalon and other regions.

Spines

Some trilobites, such as those of the order Lichida, evolved elaborate spiny forms from the Ordovician until the end of the Devonian period. Examples of these specimens have been found in the Hamar Laghdad Formation of Alnif in Morocco. There is a serious counterfeiting and fakery problem with much of the Moroccan material that is offered commercially. Spectacular spined trilobites have also been found in western Russia; Oklahoma, USA; and Ontario, Canada. Some trilobites had horns on their heads similar to several modern beetles. Based on the size, location, and shape of the horns, it has been suggested that these horns may have been used in combat for mates. Horns were widespread in the family Raphiophoridae (Asaphida). Another function of these spines was protection from predators. When enrolled, trilobite spines offered additional protection. This conclusion is likely to be applicable to other trilobites as well, such as the phacopid trilobite genus Walliserops, which developed spectacular tridents.
Soft body parts

Only 21 or so species are described from which soft body parts are preserved, so some features (e.g. the posterior antenniform cerci preserved only in Olenoides serratus) remain difficult to assess in the wider picture.

Appendages

Trilobites had a single pair of preoral antennae and otherwise undifferentiated biramous limbs (two, three or four cephalic pairs, followed by one pair per thoracic segment and some pygidium pairs). Each endopodite (walking leg) had six or seven segments, homologous to other early arthropods. Endopodites were attached to the coxa, which also bore a feather-like exopodite, or gill branch, which was used for respiration and, in some species, swimming. A 2021 study found that the upper limb branch of trilobites is a "well-developed gill" that oxygenates the hemolymph, comparable to the book gill of the modern horseshoe crab Limulus. In Olenoides, the partially articulated junction with the body is distinct from the exopods of Chelicerata or Crustacea. The inside of the coxa (or gnathobase) carries spines, probably to process prey items. The last exopodite segment usually had claws or spines. Many examples of hairs on the legs suggest adaptations for feeding (as for the gnathobases) or sensory organs to help with walking.

Digestive tract and diet

The toothless mouth of trilobites was situated on the rear edge of the hypostome (facing backward), in front of the legs attached to the cephalon. The mouth was linked by a small esophagus to the stomach, which lay forward of the mouth, below the glabella. The "intestine" led backward from there to the pygidium. The "feeding limbs" attached to the cephalon are thought to have fed food into the mouth, possibly "slicing" the food on the hypostome and/or gnathobases first. Recent propagation phase-contrast synchrotron microtomography (PPC-SRμCT), a 3D imaging technique that reveals tissues relevant to an organism's function, applied to a specimen of Bohemolichas incola showed large concentrations of indigestible fragments of Conchoprimitia osekensis, a small-shelled species now extinct, in the specimen's digestive tract. The fragments are indicative of durophagous predation (shell crushing). As the shells found were not taxonomically selective, but rather reflected physical properties such as shell strength and size, B. incola appears to have been an opportunistic feeder, with habits similar to those of scavengers. The shell remains also bear on another aspect of digestion in B. incola: the indigestible shells appear to have been stripped enzymatically of what little nutrition they offered, leaving only fragments behind. Such remnants support the idea that early trilobites may have had glands that secreted enzymes to aid the digestive process.

Internal organs

While there is direct and implied evidence for the presence and location of the mouth, stomach and digestive tract (see above), the presence of a heart, brain and liver is only implied (although "present" in many reconstructions), with little direct geological evidence.

Musculature

Although rarely preserved, long lateral muscles extended from the cephalon to midway down the pygidium, attaching to the axial rings and allowing enrollment, while separate muscles on the legs tucked them out of the way.

Sensory organs

Many trilobites had complex eyes; they also had a pair of antennae. Some trilobites were blind, probably living too deep in the sea for light to reach them. As such, they became secondarily blind in this branch of trilobite evolution.
Other trilobites (e.g., Phacops rana and Erbenochile erbeni) had large eyes suited to well-lit, predator-filled waters.

Antennae

The pair of antennae suspected in most trilobites (and preserved in a few examples) were highly flexible, allowing them to be retracted when the trilobite was enrolled. One species (Olenoides serratus) preserves antenna-like cerci, which project from the rear of the trilobite.

Eyes

Even the earliest trilobites had complex, compound eyes with lenses made of calcite (a characteristic of all trilobite eyes), confirming that the eyes of arthropods, and probably of other animals, could have developed before the Cambrian. Improving eyesight of both predator and prey in marine environments has been suggested as one of the evolutionary pressures furthering an apparent rapid development of new life forms during what is known as the Cambrian explosion. Trilobite eyes were typically compound, with each lens being an elongated prism. The number of lenses in such an eye varied: some trilobites had only one, while some had thousands of lenses in a single eye. In compound eyes, the lenses were typically arranged hexagonally. The fossil record of trilobite eyes is complete enough that their evolution can be studied through time, which compensates to some extent for the lack of preservation of soft internal parts. Lenses of trilobites' eyes were made of calcite (calcium carbonate, CaCO3). Pure forms of calcite are transparent, and some trilobites used crystallographically oriented, clear calcite crystals to form each lens of each eye. Rigid calcite lenses would have been unable to accommodate to a change of focus in the way the soft lens of a human eye can; in some trilobites, the calcite formed an internal doublet structure, giving superb depth of field and minimal spherical aberration, according to optical principles discovered by French scientist René Descartes and Dutch physicist Christiaan Huygens in the 17th century. A living species with similar lenses is the brittle star Ophiocoma wendtii. In other trilobites, in which a Huygens interface is apparently missing, a gradient-index lens is invoked, with the refractive index of the lens changing toward the center. Sublensar sensory structures have been found in the eyes of some phacopid trilobites. The structures consist of what appear to be several sensory cells surrounding a rhabdomeric structure, closely resembling the sublensar structures found in the apposition eyes of many modern arthropods, especially Limulus, a genus of horseshoe crabs. Holochroal eyes had a great number (sometimes over 15,000) of small (30–100 μm, rarely larger) lenses. Lenses were hexagonally close packed, touching each other, with a single corneal membrane covering all lenses. Each lens was in direct contact with adjacent lenses. Holochroal eyes are the ancestral eye of trilobites, and are by far the most common, found in all orders except the Agnostida, and through the entirety of the trilobites' existence. Little is known of the early history of holochroal eyes; Lower and Middle Cambrian trilobites rarely preserve the visual surface. The spatial resolving power of compound eyes (such as holochroal eyes) is dependent on light intensity, circular motion, receptor density, registered light angle, and the extent to which the signals of individual rhabdoms are neurally combined. This implies that lenses need to be larger under low-light conditions (such as for Pricyclopyge, when comparing it to Carolinites), and for fast-moving predators and prey.
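The trade-off can be sketched with the order-of-magnitude relations commonly used for compound-eye optics; this is an illustrative calculation rather than a result from the trilobite studies discussed here, and the light wavelength is an assumed value:

\[
\Delta\rho_{\min} \approx \frac{\lambda}{D}, \qquad \text{photon catch} \propto D^{2},
\]

where \(D\) is the facet (lens) diameter, \(\lambda\) the wavelength of light, and \(\Delta\rho_{\min}\) the smallest resolvable angle set by diffraction. Taking \(\lambda \approx 0.5\) μm and the 30–100 μm lens diameters quoted above for holochroal eyes gives diffraction limits of roughly 1° down to 0.3°, while the largest facets collect on the order of ten times more light than the smallest, consistent with larger lenses being favoured under low light and for fast-moving predators and prey.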
As the circular velocity caused by the forward speed of an animal itself is much higher for the ommatidia directed perpendicular to the movement, fast-moving trilobites (such as Carolinites) have eyes flattened from the side and more curved where ommatidia are directed to the front or back. Thus eye morphology can be used to make assumptions about the ecosystem of trilobites. Schizochroal eyes typically had fewer (around 700), larger lenses than holochroal eyes and are found only in Phacopina. Each lens had a cornea, and adjacent lenses were separated by thick interlensar cuticle, known as sclera. Schizochroal eyes appear quite suddenly in the early Ordovician, and were presumably derived from a holochroal ancestor. Field of view (all-around vision), eye placement and the coincidental development of more efficient enrollment mechanisms point to the eye as a more defensive "early warning" system than as a direct aid in the hunt for food. Modern eyes functionally equivalent to the schizochroal eye were long thought not to exist, but have been found in the modern insect species Xenos peckii. Abathochroal eyes are found only in Cambrian Eodiscina, and have around 70 small separate lenses with individual corneas. The sclera was separate from the cornea, and was not as thick as the sclera in schizochroal eyes. Although well-preserved examples are sparse in the early fossil record, abathochroal eyes have been recorded in the lower Cambrian, making them among the oldest known. Environmental conditions seem to have resulted in the later loss of visual organs in many Eodiscina. Secondary blindness is not uncommon, particularly in long-lived groups such as the Agnostida and Trinucleioidea. In Proetida and Phacopina from western Europe, and particularly Tropidocoryphinae from France (where there is good stratigraphic control), there are well-studied trends showing progressive eye reduction between closely related species that eventually leads to blindness. Several other structures on trilobites have been explained as photo-receptors. Of particular interest are "maculae", the small areas of thinned cuticle on the underside of the hypostome. In some trilobites, maculae are suggested to function as simple "ventral eyes" that could have detected night and day or allowed a trilobite to navigate while swimming (or turned) upside down.

Sensory pits

There are several types of prosopon that have been suggested as sensory apparatus collecting chemical or vibrational signals. The connection between the large pitted fringes on the cephalon of Harpetida and Trinucleioidea and their correspondingly small or absent eyes raises the interesting possibility of the fringe acting as a "compound ear".

Development

Trilobites grew through successive moult stages called instars, in which existing segments increased in size and new trunk segments appeared at a sub-terminal generative zone during the anamorphic phase of development. This was followed by the epimorphic developmental phase, in which the animal continued to grow and moult, but no new trunk segments were expressed in the exoskeleton. The combination of anamorphic and epimorphic growth constitutes the hemianamorphic developmental mode that is common among many living arthropods.
Trilobite development was unusual in the way in which articulations developed between segments, and changes in the development of articulation gave rise to the conventionally recognized developmental phases of the trilobite life cycle (divided into three stages), which are not readily comparable with those of other arthropods. Actual growth and change in the external form of the trilobite would have occurred when the trilobite was soft-shelled, following moulting and before the next exoskeleton hardened. Trilobite larvae are known from the Cambrian to the Carboniferous and from all sub-orders. As instars from closely related taxa are more similar than instars from distantly related taxa, trilobite larvae provide morphological information important in evaluating high-level phylogenetic relationships among trilobites. Despite the absence of supporting fossil evidence, their similarity to living arthropods has led to the belief that trilobites reproduced sexually and produced eggs. Some species may have kept eggs or larvae in a brood pouch forward of the glabella, particularly when the ecological niche was challenging to larvae. Size and morphology of the first calcified stage are highly variable between (but not within) trilobite taxa, suggesting some trilobites passed through more growth within the egg than others. Early developmental stages prior to calcification of the exoskeleton are a possibility (suggested for fallotaspids), but so is calcification and hatching coinciding. The earliest post-embryonic trilobite growth stages known with certainty are the "protaspid" stages (anamorphic phase). Starting with an indistinguishable proto-cephalon and proto-pygidium (anaprotaspid), a number of changes occur, ending with a transverse furrow separating the proto-cephalon and proto-pygidium (metaprotaspid), which can continue to add segments. Segments are added at the posterior part of the pygidium, but all segments remain fused together. The "meraspid" stages (anamorphic phase) are marked by the appearance of an articulation between the head and the fused trunk. Prior to the onset of the first meraspid stage the animal had a two-part structure—the head and the plate of fused trunk segments, the pygidium. During the meraspid stages, new segments appeared near the rear of the pygidium, while additional articulations developed at the front of the pygidium, releasing freely articulating segments into the thorax. Segments are generally added one per moult (although two per moult and one every alternate moult are also recorded), with the number of stages equal to the number of thoracic segments. A substantial amount of growth, from less than 25% up to 30%–40%, probably took place in the meraspid stages. The "holaspid" stages (epimorphic phase) commence when a stable, mature number of segments has been released into the thorax. Moulting continued during the holaspid stages, with no changes in thoracic segment number. Some trilobites are suggested to have continued moulting and growing throughout the life of the individual, albeit at a slower rate on reaching maturity. Some trilobites showed a marked transition in morphology at one particular instar, which has been called "trilobite metamorphosis". Radical change in morphology is linked to the loss or gain of distinctive features that mark a change in mode of life.
A change in lifestyle during development has significance in terms of evolutionary pressure, as the trilobite could pass through several ecological niches on the way to adulthood, and such changes would strongly affect the survivorship and dispersal of trilobite taxa. It is worth noting that trilobites with all protaspid stages solely planktonic and later meraspid stages benthic (e.g. asaphids) failed to last through the Ordovician extinctions, while trilobites that were planktonic for only the first protaspid stage before metamorphosing into benthic forms survived (e.g. lichids, phacopids). A pelagic larval lifestyle proved ill-adapted to the rapid onset of global climatic cooling and the loss of tropical shelf habitats during the Ordovician. There is no evidence that trilobites reabsorbed their exoskeletons during moulting. Some authors have argued that the failure of trilobites to reabsorb their mineralised exoskeletons when they moulted was a functional disadvantage compared to modern arthropods, which generally do reabsorb their cuticles, as it took substantially longer to reconstruct their exoskeletons, making them more vulnerable to predators.

History of usage and research

Rev. Edward Lhwyd published in 1698, in The Philosophical Transactions of the Royal Society, the oldest scientific journal in the English language, part of his letter "Concerning Several Regularly Figured Stones Lately Found by Him", which was accompanied by a page of etchings of fossils. One of his etchings depicted a trilobite he found near Llandeilo, probably on the grounds of Lord Dynefor's castle, which he described as "the skeleton of some flat Fish". The discovery of Calymene blumenbachii (the Dudley locust) in 1749 by Charles Lyttleton could be identified as the beginning of trilobite research. Lyttleton submitted a letter to the Royal Society of London in 1750 concerning a "petrified insect" he found in the "limestone pits at Dudley". In 1754, Manuel Mendez da Costa proclaimed that the Dudley locust was not an insect, but instead belonged to "the crustaceous tribe of animals". He proposed to call the Dudley specimens Pediculus marinus major trilobos (large trilobed marine louse), a name which lasted well into the 19th century. German naturalist Johann Walch, who executed the first inclusive study of this group, proposed the use of the name "trilobite". He considered it appropriate to derive the name from the unique three-lobed character of the central axis and a pleural zone to each side. Written descriptions of trilobites date possibly from the third century BC and definitely from the fourth century AD. The Spanish geologists Eladio Liñán and Rodolfo Gozalo argue that some of the fossils described in Greek and Latin lapidaries as scorpion stone, beetle stone, and ant stone refer to trilobite fossils. Less ambiguous references to trilobite fossils can be found in Chinese sources. Fossils from the Kushan formation of northeastern China were prized as inkstones and decorative pieces. In the New World, American fossil hunters found plentiful deposits of Elrathia kingi in western Utah in the 1860s. Until the early 1900s, the Ute Native Americans of Utah wore these trilobites, which they called pachavee (little water bug), as amulets. A hole was bored in the head and the fossil was worn on a string. According to the Ute themselves, trilobite necklaces protect against bullets and diseases such as diphtheria. In 1931, Frank Beckwith uncovered evidence of the Ute use of trilobites.
Travelling through the badlands, he photographed two petroglyphs that most likely represent trilobites. On the same trip he examined a burial, of unknown age, with a drilled trilobite fossil lying in the chest cavity of the interred. Since then, trilobite amulets have been found all over the Great Basin, as well as in British Columbia and Australia. In the 1880s, archaeologists discovered in the Grotte du Trilobite (Caves of Arcy-sur-Cure, Yonne, France) a much-handled trilobite fossil that had been drilled as if to be worn as a pendant. The occupation stratum in which the trilobite was found has been dated as 15,000 years old. Because the pendant was handled so much, the species of trilobite cannot be determined. This type of trilobite is not found around Yonne, so it may have been highly prized and traded from elsewhere.
Biology and health sciences
Arthropoda, others
null
62158
https://en.wikipedia.org/wiki/Convolvulaceae
Convolvulaceae
Convolvulaceae (), commonly called the bindweeds or morning glories, is a family of about 60 genera and more than 1,650 species. These species are primarily herbaceous vines, but also include trees, shrubs and herbs. The tubers of several species are edible, the best known of which is the sweet potato. Description Convolvulaceae can be recognized by their funnel-shaped, radially symmetrical corolla; the floral formula for the family has five sepals, five fused petals, five epipetalous stamens (stamens fused to the petals), and a two-part syncarpous and superior gynoecium. The stems of these plants are usually winding, hence their Latin name (from convolvere, "to wind"). The leaves are simple and alternate, without stipules. In parasitic Cuscuta (dodder) they are reduced to scales. The fruit can be a capsule, berry, or nut, all containing only two seeds per one locule (one ovule/ovary). The leaves and starchy, tuberous roots of some species are used as foodstuffs (e.g. sweet potato and water spinach), and the seeds are exploited for their medicinal value as purgatives. Some species contain ergoline alkaloids that are likely responsible for the use of these species as ingredients in psychedelic drugs (e.g. ololiuhqui). The presence of ergolines in some species of this family is due to infection by fungi related to the ergot fungi of the genus Claviceps. A recent study of Convolvulaceae species, Ipomoea asarifolia, and its associated fungi showed the presence of a fungus, identified by DNA sequencing of 18s and ITS ribosomal DNA and phylogenetic analysis to be closely related to fungi in the family Clavicipitaceae, was always associated with the presence of ergoline alkaloids in the plant. The identified fungus appears to be a seed-transmitted, obligate biotroph growing epiphytically on its host. This finding strongly suggests the unique presence of ergoline alkaloids in some species of the family Convolvulaceae is due to symbiosis with clavicipitaceous fungi. Moreover, another group of compounds, loline alkaloids, commonly produced by some members of the clavicipitaceous fungi (genus Neotyphodium), has been identified in a convolvulaceous species, but the origin of the loline alkaloids in this species is unknown. Members of the family are well known as food plants (e.g. sweet potatoes and water spinach), as showy garden plants (e.g. morning glory) and as troublesome weeds (e.g. bindweed (mainly Convolvulus and Calystegia) and dodder), while Humbertia madagascariensis is a medium-sized tree and Ipomoea carnea is an erect shrub. Some parasitic members of this family are also used medicinally. Genera Tribe Aniseieae Aniseia Choisy Odonellia K.R.Robertson Tetralocularia O'Donell Tribe Cardiochlamyeae Cardiochlamys Oliv. Cordisepalum Verdc. Dinetus Buch.-Ham. ex Sweet Duperreya Gaudich. Poranopsis Roberty Tridynamia Gagnep. Tribe Convolvuleae Calystegia R.Br. – Bindweed, morning glory Convolvulus L. – bindweed, morning glory Jacquemontia Choisy Polymeria R.Br. Tribe Cresseae Bonamia Thouars Cladostigma Radlk. Cressa L. Evolvulus L. Hildebrandtia Vatke Seddera Hochst. Stylisma Raf. Wilsonia R. Br. Tribe Cuscuteae Cuscuta L. – dodder Tribe Dichondreae Dichondra J.R.Forst. & G.Forst. Falkia Thunb. Nephrophyllum A.Rich. Petrogenia I.M.Johnst. Tribe Erycibeae Erycibe Roxb. Tribe Humbertieae Humbertia Tribe Ipomoeeae Argyreia Lour. – Hawaiian baby woodrose Astripomoea A.Meeuse Blinkworthia Choisy Ipomoea L. 
– morning glory, sweet potato Lepistemon Blume Lepistemonopsis Dammer Paralepistemon Lejoly & Lisowski Rivea Choisy Stictocardia Hallier f. Tribe Maripeae Dicranostyles Benth. Itzaea Standl. & Steyerm. Lysiostyles Benth. Maripa Aubl. Tribe Poraneae Calycobolus Willd. ex Schult. Dipteropeltis Hallier f. Metaporana N.E.Br. Neuropeltis Wall. Neuropeltopsis Ooststr. Porana Burm.f. Rapona Baill. Incertae sedis Camonea Raf. Daustinia Buril & Simões Decalobanthus Ooststr. Distimake Raf. Hewittia Wight & Arn. Hyalocystis Hallier f. Merremia Dennst. ex Endl. – Hawaiian woodrose Operculina Silva Manso Remirema Kerr Xenostegia D.F.Austin & Staples
Biology and health sciences
Solanales
Plants
62166
https://en.wikipedia.org/wiki/Rotifer
Rotifer
The rotifers (from Latin 'wheel' and 'bearing'), sometimes called wheel animals or wheel animalcules, make up a phylum (Rotifera) of microscopic and near-microscopic pseudocoelomate animals. They were first described by Rev. John Harris in 1696, and other forms were described by Antonie van Leeuwenhoek in 1703. Most rotifers are around long (although their size can range from to over ), and are common in freshwater environments throughout the world, with a few saltwater species. Some rotifers are free-swimming and truly planktonic, others move by inchworming along a substrate, and some are sessile, living inside tubes or gelatinous holdfasts that are attached to a substrate. About 25 species are colonial (e.g., Sinantherina semibullata), either sessile or planktonic. Rotifers are an important part of the freshwater zooplankton, being a major food source, with many species also contributing to the decomposition of soil organic matter. Most species of the rotifers are cosmopolitan, but there are also some endemic species, such as Cephalodella vittata, endemic to Lake Baikal. Recent barcoding evidence, however, suggests that some 'cosmopolitan' species, such as Brachionus plicatilis, B. calyciflorus, Lecane bulla, among others, are actually species complexes. In some recent treatments, rotifers are placed with acanthocephalans in a larger clade called Syndermata. In June 2021, biologists reported the revival of bdelloid rotifers after being frozen for 24,000 years in the Siberian permafrost. Purported fossils of rotifers have been reported from Devonian and Permian fossil beds.

Taxonomy and naming

John Harris first described the rotifers (in particular a bdelloid rotifer) in 1696 as "an animal like a large maggot which could contract itself into a spherical figure and then stretch itself out again; the end of its tail appeared with a forceps like that of an earwig". In 1702, Antonie van Leeuwenhoek gave a detailed description of Rotifer vulgaris and subsequently described Melicerta ringens and other species. He was also the first to publish observations of the revivification of certain species after drying. Other forms were described by other observers, but it was not until the publication of Christian Gottfried Ehrenberg's work in 1838 that the rotifers were recognized as being multicellular animals. About 2,200 species of rotifers have been described. Their taxonomy is currently in a state of flux. One treatment places them in the phylum Rotifera, with three classes: Seisonidea, Bdelloidea and Monogononta. The largest group is the Monogononta, with about 1,500 species, followed by the Bdelloidea, with about 350 species. There are only two known genera with three species of Seisonidea. The Acanthocephala, previously considered to be a separate phylum, have been demonstrated to be modified rotifers. The exact relationship to other members of the phylum has not yet been resolved. One possibility is that the Acanthocephala are closer to the Bdelloidea and Monogononta than to the Seisonidea; the corresponding names and relationships have been presented in cladogram form. The Rotifera, strictly speaking, are confined to the Bdelloidea and the Monogononta. Rotifera, Acanthocephala and Seisonida make up a clade called Syndermata.

Etymology

The word rotifer is derived from a Neo-Latin word meaning 'wheel-bearer', due to the corona around the mouth, which in concerted sequential motion resembles a wheel (although the organ does not actually rotate).
Anatomy

Rotifers have bilateral symmetry and a variety of different shapes. The body of a rotifer is divided into a head, trunk, and foot, and is typically somewhat cylindrical. There is a well-developed cuticle, which may be thick and rigid, giving the animal a box-like shape, or flexible, giving the animal a worm-like shape; such rotifers are respectively called loricate and illoricate. Rigid cuticles are often composed of multiple plates, and may bear spines, ridges, or other ornamentation. Their cuticle is nonchitinous and is formed from sclerotized proteins. The two most distinctive features of rotifers (in females of all species) are the presence of a corona on the head, a structure that is ciliated in all genera except Cupelopagis, and the presence of a mastax. In the more primitive species, the corona forms a simple ring of cilia around the mouth from which an additional band of cilia stretches over the back of the head. In the great majority of rotifers, however, this has evolved into a more complex structure. Modifications to the basic plan of the corona include alteration of the cilia into bristles or large tufts, and either expansion or loss of the ciliated band around the head. In genera such as Collotheca, the corona is modified to form a funnel surrounding the mouth. In many species, such as those in the genus Testudinella, the cilia around the mouth have disappeared, leaving just two small circular bands on the head. In the bdelloids, this plan is further modified, with the upper band splitting into two rotating wheels, raised up on a pedestal projecting from the upper surface of the head. The trunk forms the major part of the body, and encloses most of the internal organs. The foot projects from the rear of the trunk, and is usually much narrower, giving the appearance of a tail. The cuticle over the foot often forms rings, making it appear segmented, although the internal structure is uniform. Many rotifers can retract the foot partially or wholly into the trunk. The foot ends in from one to four toes, which, in sessile and crawling species, contain adhesive glands to attach the animal to the substratum. In many free-swimming species, the foot as a whole is reduced in size, and may even be absent.

Nervous system

Rotifers have a small cerebral ganglion, effectively a brain, located just above the mastax, from which a number of nerves extend throughout the body. The number of nerves varies among species, although the nervous system usually has a simple layout. The nervous system comprises about 25% of the roughly 1,000 cells in a rotifer. Rotifers typically possess one or two pairs of short antennae and up to five eyes. The eyes are simple in structure, sometimes with just a single photoreceptor cell. In addition, the bristles of the corona are sensitive to touch, and there is also a pair of tiny sensory pits lined by cilia in the head region.

Retrocerebral organ

Despite over 100 years of research, rotifer anatomy still has many poorly understood components. One of the more mysterious organs in rotifers is the "retrocerebral organ" (RCO), which remains enigmatic in its morphology, function, development, and evolution. Lying close to the brain, this organ usually consists of one or more glands and a sac or reservoir. The sac drains into a duct before opening through pores on the uppermost part of the head. Current data show a wide diversity in structure and potential function. In some species it is reduced or may even be completely absent.
Benthic species have larger RCO's than planktonic species. Despite this diversity, positional correspondence of RCOs strongly suggests homology. A 2023 study using transmission electron microscopy and confocal laser scanning microscopy has illuminated the fine structure of this organ further. The study, the first of its kind, investigated the RCO in one species, Trichocerca similis. It was determined to be a syncytial organ, composed of a posterior glandular region, an expansive reservoir, and an anterior duct. The glandular portion has an active cytoplasm with paired nuclei, abundant rough ER, ribosomes, Golgi, and mitochondria. Secretion granules accumulate at the anterior end of the gland where they undergo homotypic fusion to create larger granules with numerous "mesh-like" contents. These contents gradually fuse into tubular secretions that accumulate in the reservoir, awaiting secretion. Cross-striated longitudinal muscles form a partial sleeve around the reservoir and may function to squeeze the secretions through the gland's duct that often penetrates through the cerebral ganglion. Retrocerebral organ secretions Much like the organ itself, the precise function and biochemical makeup of the secretions is still unknown. The small size of rotifers and small volume of the secretions makes isolation immensely difficult. The secretions have some similarities to the hydrogel secretions that form gelatinous housings in some rotifer species. Ultrastructure analysis of T. similis secretions showed them to be a series of tube-like secretions with a highly filamentous framework. This is highly suggestive of a glycosaminoglycan structure- proteins with negatively charged polysaccharide chains forming proteoglycan molecules. These molecules are standard in vertebrate and invertebrate gelatins such as mucus. Despite recent advancements in understanding RCO organ and secretion ultrastructure, the exact function of the organ is still ultimately unclear. The leading hypotheses are that the RCO secretes a mucus-like substance that aids in benthic locomotion, adhesion, and/or reproduction (i.e., attachment of eggs to a substrate), although more research is needed to explore function and evaluate the homology between species. Digestive system The coronal cilia create a current that sweeps food into the mouth. The mouth opens into a characteristic chewing pharynx (called the mastax), sometimes via a ciliated tube, and sometimes directly. The pharynx has a powerful muscular wall and contains tiny, calcified, jaw-like structures called trophi, which are the only fossilizable parts of a rotifer. The shape of the trophi varies between different species, depending partly on the nature of their diet. In suspension feeders, the trophi are covered in grinding ridges, while in more actively carnivorous species, they may be shaped like forceps to help bite into prey. In some ectoparasitic rotifers, the mastax is adapted to grip onto the host, although, in others, the foot performs this function instead. Behind the mastax lies an oesophagus, which opens into a stomach where most of the digestion and absorption occurs. The stomach opens into a short intestine that terminates in a cloaca on the posterior dorsal surface of the animal. Up to seven salivary glands are present in some species, emptying to the mouth in front of the oesophagus, while the stomach is associated with two gastric glands that produce digestive enzymes. A pair of protonephridia open into a bladder that drains into the cloaca. 
These organs expel water from the body, helping to maintain osmotic balance. Biology The coronal cilia pull the animal, when unattached, through the water. Like many other microscopic animals, adult rotifers frequently exhibit eutely—they have a fixed number of cells within a species, usually on the order of 1,000. Bdelloid rotifer genomes contain two or more divergent copies of each gene, suggesting a long-term asexual evolutionary history. For example, four copies of hsp82 are found. Each is different and found on a different chromosome, excluding the possibility of homozygous sexual reproduction. Feeding Rotifers eat particulate organic detritus, dead bacteria, algae, and protozoans. They eat particles up to 10 micrometres in size. Like crustaceans, rotifers contribute to nutrient recycling. For this reason, they are used in fish tanks to help clean the water and prevent clouds of waste matter. Rotifers affect the species composition of algae in ecosystems through their grazing choices. Rotifers may compete with cladocerans and copepods for planktonic food sources. Reproduction and life cycle Rotifers are dioecious and reproduce sexually or parthenogenetically. They are sexually dimorphic, with the females always being larger than the males. In some species, this is relatively mild, but in others the female may be up to ten times the size of the male. In parthenogenetic species, males may be present only at certain times of the year, or absent altogether. The female reproductive system consists of one or two ovaries, each with a vitellarium gland that supplies the eggs with yolk. Together, each ovary and vitellarium form a single syncytial structure in the anterior part of the animal, opening through an oviduct into the cloaca. Males do not usually have a functional digestive system, and are therefore short-lived, often being sexually fertile at birth. They have a single testicle and sperm duct, associated with a pair of glandular structures referred to as prostates (unrelated to the vertebrate prostate). The sperm duct opens into a gonopore at the posterior end of the animal, which is usually modified to form a penis. The gonopore is homologous to the cloaca of females, but in most species has no connection to the vestigial digestive system, which lacks an anus. In the genus Asplanchna, the females also lack an anus, but have kept the cloacal opening for excretion and the release of eggs. The phylum Rotifera comprises three classes that reproduce by three different mechanisms: Seisonidea only reproduce sexually; Bdelloidea reproduce exclusively by asexual parthenogenesis; Monogononta reproduce by alternating these two mechanisms ("cyclical parthenogenesis" or "heterogony"). Parthenogenesis (amictic phase) dominates the monogonont life cycle, promoting fast population growth and colonization. In this phase males are absent and amictic females produce diploid eggs by mitosis, which develop parthenogenetically into females that are clones of their mothers. Some amictic females can generate mictic females that will produce haploid eggs by meiosis. Mixis (meiosis) is induced by different types of stimulus depending on the species. Haploid eggs develop into haploid dwarf males if they are not fertilized and into diploid "resting eggs" (or "diapausing eggs") if they are fertilized by males. Fertilization is internal. The male either inserts his penis into the female's cloaca or uses it to penetrate her skin, injecting the sperm into the body cavity. 
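The cyclical parthenogenesis described above amounts to a small branching scheme, and a schematic sketch may make the ploidy bookkeeping easier to follow. The Python fragment below is only an illustrative simplification of the cycle as summarized in this article; the function names and returned dictionaries are invented for this sketch and do not come from any rotifer-specific software.

```python
# Simplified schematic of monogonont cyclical parthenogenesis as described
# in the text above. Purely illustrative; names are hypothetical.

def amictic_reproduction():
    # Amictic females produce diploid eggs by mitosis; offspring are
    # clonal females genetically identical to the mother.
    return {"ploidy": 2, "develops_into": "female clone (amictic or mictic)"}

def mictic_reproduction(fertilized: bool):
    # Mictic females produce haploid eggs by meiosis.
    egg = {"ploidy": 1}
    if fertilized:
        # Fertilization restores diploidy; the egg becomes a thick-shelled
        # resting (diapausing) egg that later hatches into an amictic female.
        egg.update(ploidy=2, develops_into="resting egg -> amictic female")
    else:
        # Unfertilized haploid eggs develop into dwarf males.
        egg["develops_into"] = "haploid dwarf male"
    return egg

if __name__ == "__main__":
    print(amictic_reproduction())
    print(mictic_reproduction(fertilized=False))
    print(mictic_reproduction(fertilized=True))
```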
The egg secretes a shell, and is attached either to the substratum, nearby plants, or the female's own body. A few species, such as members of the Rotaria, are ovoviviparous, retaining the eggs inside their body until they hatch. Most species hatch as miniature versions of the adult. Sessile species, however, are born as free-swimming larvae, which closely resemble the adults of related free-swimming species. Females grow rapidly, reaching their adult size within a few days, while males typically do not grow in size at all. The life span of monogonont females varies from two days to about three weeks. Loss of sexual reproduction system 'Ancient asexuals': Bdelloid rotifers are assumed to have reproduced without sex for many millions of years. Males are absent within the species, and females reproduce only by parthenogenesis. However, a new study provided evidence for interindividual genetic exchange and recombination in Adineta vaga, a species previously thought to be anciently asexual. Recent transitions: Loss of sexual reproduction can be inherited in a simple Mendelian fashion in the monogonont rotifer Brachionus calyciflorus: This species can normally switch between sexual and asexual reproduction (cyclical parthenogenesis), but occasionally gives rise to purely asexual lineages (obligate parthenogens). These lineages are unable to reproduce sexually due to being homozygous for a recessive allele. Resting eggs Resting eggs enclose an embryo encysted in a three-layered shell that protects it from external stressors. They are able to remain dormant for several decades and can resist adverse periods (e.g., pond desiccation or presence of antagonists). When favourable conditions return and after an obligatory period of diapause which varies among species, resting eggs hatch releasing diploid amictic females that enter into the asexual phase of the life cycle. Anhydrobiosis Bdelloid rotifer females cannot produce resting eggs, but many can survive prolonged periods of adverse conditions after desiccation. This facility is termed anhydrobiosis, and organisms with these capabilities are termed anhydrobionts. Under drought conditions, bdelloid rotifers contract into an inert form and lose almost all body water; when rehydrated they resume activity within a few hours. Bdelloids can survive the dry state for long periods, with the longest well-documented dormancy being nine years. Rotifers can also undergo other forms of cryptobiosis, notably cryobiosis which results from decreased temperatures. In 2021, researchers collected samples from remote Arctic locations containing rotifers which when thawed revealed living specimens around 24,000 years old. While in other anhydrobionts, such as the brine shrimp, this desiccation tolerance is thought to be linked to the production of trehalose, a non-reducing disaccharide (sugar), bdelloids apparently cannot synthesise trehalose. In bdelloids, a major cause of the resistance to desiccation, as well as resistance to ionizing radiation, is a highly efficient mechanism for repairing the DNA double-strand breaks induced by these agents. This repair mechanism likely involves mitotic recombination between homologous DNA regions. Predators Rotifers fall prey to many animals, such as copepods, fish (e.g. herring, salmon), bryozoa, comb jellies, jellyfish, starfish, and tardigrades. Genome size The genome size of a bdelloid rotifer, Adineta vaga, was reported to be around 244 Mb. The genomes of Monogononts seem to be significantly smaller than those of Bdelloids. 
In Monogononta the nuclear DNA content (2C) in eight different species of four different genera ranged almost fourfold, from 0.12 to 0.46 pg. Haploid "1C" genome sizes in Brachionus species range at least from 0.056 to 0.416 pg.
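Comparing the picogram values above with the roughly 244 Mb figure quoted for Adineta vaga requires a unit conversion. The sketch below assumes the commonly used approximation of roughly 978 Mbp of double-stranded DNA per picogram; the function name is invented for illustration.

```python
# Rough pg -> Mbp conversion for the genome sizes quoted above,
# assuming ~978 Mbp per picogram of double-stranded DNA (approximate).

MBP_PER_PG = 978  # approximate conversion constant

def pg_to_mbp(picograms: float) -> float:
    """Convert a DNA mass in picograms to an approximate length in Mbp."""
    return picograms * MBP_PER_PG

# Haploid (1C) Brachionus genome sizes quoted above: 0.056-0.416 pg
print(pg_to_mbp(0.056))  # ~55 Mbp
print(pg_to_mbp(0.416))  # ~407 Mbp
# For comparison, the Adineta vaga assembly cited above is ~244 Mb.
```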
Biology and health sciences
Platyzoa
Animals
62175
https://en.wikipedia.org/wiki/Crinoid
Crinoid
Crinoids are marine invertebrates that make up the class Crinoidea. Crinoids that remain attached to the sea floor by a stalk in their adult form are commonly called sea lilies, while the unstalked forms, called feather stars or comatulids, are members of the largest crinoid order, Comatulida. Crinoids are echinoderms in the phylum Echinodermata, which also includes the starfish, brittle stars, sea urchins and sea cucumbers. They live in both shallow water and in depths of over . Adult crinoids are characterised by having the mouth located on the upper surface. This is surrounded by feeding arms, and is linked to a U-shaped gut, with the anus being located on the oral disc near the mouth. Although the basic echinoderm pattern of fivefold symmetry can be recognised, in most crinoids the five arms are subdivided into ten or more. These have feathery pinnules and are spread wide to gather planktonic particles from the water. At some stage in their lives, most crinoids have a short stem used to attach themselves to the substrate, but many live attached only as juveniles and become free-swimming as adults. There are only about 700 living species of crinoid, but the class was much more abundant and diverse in the past. Some thick limestone beds dating to the mid-Paleozoic era to Jurassic period are almost entirely made up of disarticulated crinoid fragments. Etymology The name "Crinoidea" comes from the Ancient Greek word κρίνον (krínon), "a lily", with the suffix –oid meaning "like". Morphology The basic body form of a crinoid is a stem (not present in adult feather stars) and a crown consisting of a cup-like central body known as the theca, and a set of five rays or arms, usually branched and feathery. The mouth and anus are both located on the upper side of the theca, making the dorsal (upper) surface the oral surface, unlike in the other echinoderm groups such as the sea urchins, starfish and brittle stars where the mouth is on the underside. The numerous calcareous plates make up the bulk of the crinoid, with only a small percentage of soft tissue. These ossicles fossilise well and there are beds of limestone dating from the Lower Carboniferous around Clitheroe, England, formed almost exclusively from a diverse fauna of crinoid fossils. The stem of sea lilies is composed of a column of highly porous ossicles which are connected by ligamentary tissue. It attaches to the substrate with a flattened holdfast or with whorls of jointed, root-like structures known as cirri. Further cirri may occur higher up the stem. In crinoids that attach to hard surfaces, the cirri may be robust and curved, resembling birds' feet, but when crinoids live on soft sediment, the cirri may be slender and rod-like. Juvenile feather stars have a stem, but this is later lost, with many species retaining a few cirri at the base of the crown. The majority of living crinoids are free-swimming and have only a vestigial stalk. In those deep-sea species that still retain a stalk, it may reach up to in length (although usually much smaller), and fossil species are known with stems. The theca is pentamerous (has five-part symmetry) and is homologous with the body or disc of other echinoderms. The base of the theca is formed from a cup-shaped set of ossicles (bony plates), the calyx, while the upper surface is formed by the weakly-calcified tegmen, a membranous disc. The tegmen is divided into five "ambulacral areas", including a deep groove from which the tube feet project, and five "interambulacral areas" between them. 
The mouth is near the centre or on the margin of the tegmen, and ambulacral grooves lead from the base of the arms to the mouth. The anus is also located on the tegmen, often on a small elevated cone, in an interambulacral area. The theca is relatively small and contains the crinoid's digestive organs. The arms are supported by a series of articulating ossicles similar to those in the stalk. Primitively, crinoids had only five arms, but in most modern forms these are divided into two at ossicle II, giving ten arms in total. In most living species, especially the free-swimming feather stars, the arms branch several more times, producing up to two hundred branches in total. Being jointed, the arms can curl up. They are lined, on either side alternately, by smaller jointed appendages known as "pinnules" which give them their feather-like appearance. Both arms and pinnules have tube feet along the margins of the ambulacral grooves. The tube feet come in groups of three of different size; they have no suction pads and are used to hold and manipulate food particles. The grooves are equipped with cilia which facilitate feeding by moving the organic particles along the arm and into the mouth. Feeding Crinoids are passive suspension feeders, filtering plankton and small particles of detritus from the sea water flowing past them with their feather-like arms. The arms are raised to form a fan-shape which is held perpendicular to the current. Mobile crinoids move to perch on rocks, coral heads or other eminences to maximise their feeding opportunities. The food particles are caught by the primary (longest) tube feet, which are fully extended and held erect from the pinnules, forming a food-trapping mesh, while the secondary and tertiary tube feet are involved in manipulating anything encountered. The tube feet are covered with sticky mucus that traps any particles which come in contact. Once they have caught a particle of food, the tube feet flick it into the ambulacral groove, where the cilia propel the mucus and food particles towards the mouth. Lappets at the side of the groove help keep the mucus stream in place. The total length of the food-trapping surface may be very large; the 56 arms of a Japanese sea lily with arms, have a total length of including the pinnules. Generally speaking, crinoids living in environments with relatively little plankton have longer and more highly branched arms than those living in food-rich environments. The mouth descends into a short oesophagus. There is no true stomach, so the oesophagus connects directly to the intestine, which runs in a single loop right around the inside of the calyx. The intestine often includes numerous diverticulae, some of which may be long or branched. The end of the intestine opens into a short muscular rectum. This ascends towards the anus, which projects from a small conical protuberance at the edge of the tegmen. Faecal matter is formed into large, mucous-cemented pellets which fall onto the tegmen and thence the substrate. Predation Specimens of the sea urchin Calocidaris micans found in the vicinity of the crinoid Endoxocrinus parrae, have been shown to contain large quantities of stem portions in their guts. These consist of articulated ossicles with soft tissue, whereas the local sediment contained only disarticulated ossicles without soft tissue. This makes it highly likely that these sea urchins are predators of the crinoids, and that the crinoids flee, offering part of their stem in the process. 
Various crinoid fossils hint at possible prehistoric predators. Coprolites of both fish and cephalopods have been found containing ossicles of various crinoids, such as the pelagic crinoid Saccocoma, from the Jurassic Lagerstätte Solnhofen, while damaged crinoid stems with bite marks matching the toothplates of coccosteid placoderms have been found in Late Devonian Poland. The calyxes of several Devonian to Carboniferous-aged crinoids have the shells of a snail, Platyceras, intimately associated with them. Some have the snail situated over the anus, suggesting that Platyceras was a coprophagous commensal, while others have the animal directly situated over a borehole, suggesting a more pernicious relationship. Water vascular system Like other echinoderms, crinoids possess a water vascular system that maintains hydraulic pressure in the tube feet. This is not connected to external sea water via a madreporite, as in other echinoderms, but only connected through a large number of pores to the coelom (body cavity). The main fluid reservoir is the muscular-walled ring canal, which is connected to the coelom by stone canals lined with calcareous material. The coelom is divided into a number of interconnecting spaces by mesenteries. It surrounds the viscera in the disc and has branches within the stalk and arms, with smaller branches extending into the pinnules. It is the contraction of the ring canal that extends the tube feet. Three narrow branches of the coelom enter each arm, two on the oral side and one aborally, and the pinnules. The action of cilia causes a slow flow of fluid (1 mm per second) in these canals, outward in the oral branches and inward in the aboral ones, and this is the main means of transport of nutrients and waste products. There is no heart or separate circulatory system, but at the base of the disc there is a large blood vessel known as the axial organ, containing some slender blind-ended tubes of unknown function, which extends into the stalk. These various fluid-filled spaces, in addition to transporting nutrients around the body, also function as both a respiratory and an excretory system. Oxygen is absorbed primarily through the tube feet, which are the most thin-walled parts of the body, with further gas exchange taking place over the large surface area of the arms. There are no specialised organs for excretion; waste is collected by phagocytic coelomocytes. Nervous system The crinoid nervous system is divided into three parts, with numerous connections between them. The oral or uppermost portion is the only one homologous with the nervous systems of other echinoderms. It consists of a central nerve ring surrounding the mouth, and radial nerves branching into the arms, and is sensory in function. Below this lies an intermediate nerve ring, giving off radial nerves supplying the arms and pinnules. These nerves are motor in nature, and control the musculature of the tube feet. The third portion of the nervous system lies aborally, and is responsible for the flexing and movement actions of the arms, pinnules and cirri. This is centred on a mass of neural tissue near the base of the calyx, and provides a single nerve to each arm and a number of nerves to the stalk. Reproduction and life cycle Crinoids are dioecious, with individuals being either male or female. In most species, the gonads are located in the pinnules but in a few, they are located in the arms. Not all the pinnules are reproductive, just those closest to the crown. 
The gametes are produced in genital canals enclosed in genital coeloms. The pinnules eventually rupture to release the sperm and eggs into the surrounding sea water. In certain genera, such as Antedon, the fertilised eggs are cemented to the arms with secretions from epidermal glands; in others, especially cold water species from Antarctica, the eggs are brooded in specialised sacs on the arms or pinnules. The fertilised eggs hatch to release free-swimming vitellaria larvae. The bilaterally symmetrical larva is barrel-shaped with rings of cilia running round the body, and a tuft of sensory hairs at the upper pole. While both feeding (planktotrophic) and non-feeding (lecithotrophic) larvae exist among the four other extant echinoderm classes, all present day crinoids appear to be descendants from a surviving clade that went through a bottleneck after the Permian extinction, at that time losing the feeding larval stage. The larva's free-swimming period lasts for only a few days before it settles on the bottom and attaches itself to the underlying surface using an adhesive gland on its underside. The larva then undergoes an extended period of metamorphoses into a stalked juvenile, becoming radially symmetric in the process. Even the free-swimming feather stars go through this stage, with the adult eventually breaking away from the stalk. Regeneration Crinoids are not capable of clonal reproduction as are some starfish and brittle stars, but are capable of regenerating lost body parts. Arms torn off by predators or damaged by adverse environmental conditions can regrow, and even the visceral mass can regenerate over the course of a few weeks. The stalk's uppermost segment and the basal plates have the capacity to regenerate the entire crown. Nutrients and other components from the stalk, especially the upper 5 cm, are used in crown regeneration. Crinoids have been able to regenerate parts since Paleozoic times. These regenerative abilities may be vital in surviving attacks by predatory fish. Locomotion Most modern crinoids, i.e., the feather stars, are free-moving and lack a stem as adults. Examples of fossil crinoids that have been interpreted as free-swimming include Marsupites, Saccocoma and Uintacrinus. In general, crinoids move to new locations by crawling, using the cirri as legs. Such a movement may be induced in relation to a change in current direction, the need to climb to an elevated perch to feed, or because of an agonistic behaviour by an encountered individual. Crinoids can also swim. They do this by co-ordinated, repeated sequential movements of the arms in three groups. At first the direction of travel is upwards but soon becomes horizontal, travelling at about per second with the oral surface in front. Swimming usually takes place as short bursts of activity lasting up to half a minute, and in the comatulid Florometra serratissima at least, only takes place after mechanical stimulation or as an escape response evoked by a predator. In 2005, a stalked crinoid was recorded pulling itself along the sea floor off the Grand Bahama Island. While it has been known that stalked crinoids could move, before this recording the fastest motion known for a stalked crinoid was per hour. The 2005 recording showed one of these moving across the seabed at the much faster rate of per second, or per hour. Evolution Origins If one ignores the enigmatic Echmatocrinus of the Burgess Shale, the earliest known unequivocal crinoid groups date back to the Ordovician, 480 million years ago. 
There are two competing hypotheses pertaining to the origin of the group: the traditional viewpoint holds that crinoids evolved from within the blastozoans (the eocrinoids and their derived descendants, the blastoids and the cystoids), whereas the most popular alternative suggests that the crinoids split early from among the edrioasteroids. The debate is difficult to settle, in part because all three candidate ancestors share many characteristics, including radial symmetry, calcareous plates, and stalked or direct attachment to the substrate. Diversity Echinoderms with mineralized skeletons entered the fossil record in the early Cambrian (540 mya), and during the next 100 million years, the crinoids and blastoids (also stalked filter-feeders) were dominant. At that time, the Echinodermata included twenty taxa of class rank, only five of which survived the mass extinction events that followed. The long and varied geological history of the crinoids demonstrates how well the echinoderms had adapted to filter-feeding. The crinoids underwent two periods of abrupt adaptive radiation, the first during the Ordovician (485 to 444 mya), and the other during the early Triassic (around 230 mya). This Triassic radiation resulted in forms possessing flexible arms becoming widespread; motility, predominantly a response to predation pressure, also became far more prevalent than sessility. This radiation occurred somewhat earlier than the Mesozoic marine revolution, possibly because it was mainly prompted by increases in benthic predation, specifically of echinoids. There then followed a selective mass extinction at the end of the Permian period, during which all blastoids and most crinoids became extinct. After the end-Permian extinction, crinoids never regained the morphological diversity and dominant position they enjoyed in the Paleozoic; they employed a different suite of ecological strategies open to them from those that had proven so successful in the Paleozoic. Fossils Some fossil crinoids, such as Pentacrinites, seem to have lived attached to floating driftwood and complete colonies are often found. Sometimes this driftwood would become waterlogged and sink to the bottom, taking the attached crinoids with it. The stem of Pentacrinites can be several metres long. Modern relatives of Pentacrinites live in gentle currents attached to rocks by the end of their stem. In 2012, three geologists reported they had isolated complex organic molecules from 340-million-year-old (Mississippian) fossils of multiple species of crinoids. Identified as "resembl[ing ...] aromatic or polyaromatic quinones", these are the oldest molecules to be definitively associated with particular individual fossils, as they are believed to have been sealed inside ossicle pores by precipitated calcite during the fossilization process. Crinoid fossils, and in particular disarticulated crinoid columnals, can be so abundant that they at times serve as the primary supporting clasts in sedimentary rocks. Rocks of this nature are called encrinites. Taxonomy Crinoidea has been accepted as a distinct clade of echinoderms since the definition of the group by Miller in 1821. It includes many extinct orders as well as four closely related living orders (Comatulida, Cyrtocrinida, Hyocrinida, and Isocrinida), which are part of the subgroup Articulata. Living articulates comprise around 540 species. 
Class Crinoidea
 †Protocrinoidea (incertae sedis)
 Subclass †Camerata
  Order †Diplobathrida
  Order †Monobathrida
 Subclass Pentacrinoidea
  Parvclass †Disparida
   Order †Eustenocrinida
   Order †Maennilicrinida
   Order †Tetragonocrinida
   Order †Calceocrinida
  Parvclass Cladida
   Superorder †Porocrinoidea
    Order †Hybocrinida
    Order †Porocrinida
   Superorder †Flexibilia
    Order †Sagenocrinida
    Order †Taxocrinida
   Magnorder Eucladida
    †Ampelocrinida (incertae sedis)
    Superorder †Cyathoformes
    Superorder Articulata
     Order †Encrinida
     Order †Holocrinida
     Order †Millericrinida
     Order †Roveacrinida
     Order †Uintacrinida
     Order Comatulida
     Order Cyrtocrinida
     Order Hyocrinida
     Order Isocrinida
Phylogeny The phylogeny, geologic history, and classification of the Crinoidea were discussed by Wright et al. (2017). These authors presented new phylogeny-based and rank-based classifications based on results of recent phylogenetic analyses. Their rank-based classification of crinoid higher taxa (down to Order), not fully resolved and with numerous groups incertae sedis (of uncertain placement), is illustrated in the cladogram. In culture Fossilised crinoid columnal segments extracted from limestone quarried on Lindisfarne, or found washed up along the foreshore, were threaded into necklaces or rosaries, and became known as St. Cuthbert's beads in the Middle Ages. Similarly, in the Midwestern United States, fossilized segments of the columns of crinoids are sometimes known as Indian beads. A species of crinoid, Eperisocrinus missouriensis, is the state fossil of Missouri. The aliens in the movie franchise Alien were inspired by crinoids. See also Echinobase, a database that contains information about various echinoderms, including a crinoid species.
Biology and health sciences
Echinoderms
Animals
62187
https://en.wikipedia.org/wiki/Bilateria
Bilateria
Bilateria () is a large clade or infrakingdom of animals called bilaterians (), characterised by bilateral symmetry (i.e. having a left and a right side that are mirror images of each other) during embryonic development. This means their body plans are laid around a longitudinal axis (rostral–caudal axis) with a front (or "head") and a rear (or "tail") end, as well as a left–right–symmetrical belly (ventral) and back (dorsal) surface. Nearly all bilaterians maintain a bilaterally symmetrical body as adults; the most notable exception is the echinoderms, which have pentaradial symmetry as adults, but are only bilaterally symmetrical as an embryo. Cephalization is a characteristic feature among most bilaterians, where the special sense organs and central nerve ganglia become concentrated at the front end. Bilaterians constitute one of the five main metazoan lineages, the other four being Porifera (sponges), Cnidaria (jellyfish, hydrozoans, sea anemones and corals), Ctenophora (comb jellies) and Placozoa (tiny blob-like animals). For the most part, bilateral embryos are triploblastic, having three germ layers: endoderm, mesoderm and ectoderm. Except for a few phyla (i.e. flatworms and gnathostomulids), bilaterians have complete digestive tracts with a separate mouth and anus. Some bilaterians (the acoelomates) lack body cavities, while others have a primary body cavity derived from the blastocoel, or a secondary cavity, the coelom. Body plan Animals with a bilaterally symmetric body plan that mainly move in one direction have a head end (anterior) and a tail (posterior) end as well as a back (dorsal) and a belly (ventral); therefore they also have a left side and a right side. Having a front end means that this part of the body encounters stimuli, such as food, favouring cephalisation, the development of a head with sense organs and a mouth. Most bilaterians (nephrozoans) have a gut that extends through the body from mouth to anus, and sometimes a wormlike body plan with a hydrostatic skeleton. Xenacoelomorphs, on the other hand, have a sac-like gut with a single opening. Many bilaterian phyla have primary larvae which swim with cilia and have an apical organ containing sensory cells. Evolution Inferred nature of the ancestor The hypothetical most recent common ancestor of all Bilateria is termed the 'urbilaterian'. The nature of this first bilaterian is a matter of debate. One side suggests that acoelomates gave rise to the other groups (planuloid–acoeloid hypothesis by Ludwig von Graff, Elie Metchnikoff, Libbie Hyman, or ). This means that the urbilaterian had a solid body, and all body cavities therefore arose secondarily in different groups. The other side posits that the urbilaterian had a coelom, meaning that the main acoelomate phyla (flatworms and gastrotrichs) have secondarily lost their body cavities. This is the Archicoelomata hypothesis first proposed by A. T. Masterman in 1899. Variations of the Archicoelomata hypothesis are the Gastraea by Ernst Haeckel in 1872 or Adam Sedgwick, and more recently the Bilaterogastrea by , and the Trochaea by Claus Nielsen. One proposal, by Johanna Taylor Cannon and colleagues, is that the original bilaterian was a bottom-dwelling worm with a single body opening, similar to Xenoturbella. An alternative proposal, by Jaume Baguñà and colleagues, is that it may have resembled the planula larvae of some cnidarians, which unlike the radially symmetric adults have some bilateral symmetry. However, Lewis I. 
Held presents evidence that it was segmented, as the mechanism for creating segments is shared between vertebrates (deuterostomes) and arthropods (protostomes). Bilaterians, presumably including the urbilaterian, share many more Hox genes controlling the development of their more complex bodies, including of their heads, than do the Cnidaria and the Acoelomorpha. Fossil record The first evidence of Bilateria in the fossil record comes from trace fossils in Ediacaran sediments, and the first bona fide bilaterian fossil is Kimberella, dating to . Earlier fossils are controversial; the fossil Vernanimalcula may be the earliest known bilaterian, but may also represent an infilled bubble. Fossil embryos are known from around the time of Vernanimalcula (), but none of these have bilaterian affinities. Burrows believed to have been created by bilaterian life forms have been found in the Tacuarí Formation of Uruguay, and were believed to be at least 585 million years old. However, more recent evidence shows these fossils are actually late Paleozoic instead of Ediacaran. Phylogeny Bilateria has traditionally been divided into two main lineages or superphyla. The deuterostomes traditionally include the echinoderms, hemichordates, chordates, and the extinct Vetulicolia. The protostomes include most of the rest, such as arthropods, annelids, molluscs, and flatworms. There are several differences, most notably in how the embryo develops. In particular, the first opening of the embryo becomes the mouth in protostomes, and the anus in deuterostomes. Many taxonomists now recognise at least two more superphyla among the protostomes, Ecdysozoa and Spiralia. The arrow worms (Chaetognatha) have proven difficult to classify; recent studies place them in the Gnathifera. The traditional division of Bilateria into Deuterostomia and Protostomia was challenged when new morphological and molecular evidence supported a sister relationship between the acoelomate taxa, Acoela and Nemertodermatida (together called Acoelomorpha), and the remaining bilaterians. The latter clade was called Nephrozoa by Jondelius et al. (2002) and Eubilateria by Baguña and Riutort (2004). The acoelomorph taxa had previously been considered flatworms with secondarily lost characteristics, but the new relationship suggested that the simple acoelomate worm form was the original bilaterian body plan and that the coelom, the digestive tract, excretory organs, and nerve cords developed in the Nephrozoa. Subsequently, the acoelomorphs were placed in phylum Xenacoelomorpha, together with the xenoturbellids, and the sister relationship between Xenacoelomorpha and Nephrozoa confirmed in phylogenomic analyses. A modern consensus phylogenetic tree for Bilateria, from a 2014 review by Casey Dunn and colleagues, is shown below. A different hypothesis is that Ambulacraria are sister to Xenacoelomorpha together forming Xenambulacraria. Xenambulacraria may be sister to Chordata or to Centroneuralia (corresponding to Nephrozoa without Ambulacraria, or, as shown here, to Chordata + Protostomia). The cladogram indicates approximately when some clades radiated into newer clades, in millions of years ago (Mya). A 2019 study by Hervé Philippe and colleagues presents the tree, cautioning that "the support values are very low, meaning there is no solid evidence to refute the traditional protostome and deuterostome dichotomy". Taxonomic history The Bilateria were named by the Austrian embryologist Berthold Hatschek in 1888. 
In his classification, the group included the Zygoneura, Ambulacraria, and Chordonii (the Chordata). In 1910, the Austrian zoologist Karl Grobben renamed the Zygoneura to Protostomia, and created the Deuterostomia to encompass the Ambulacraria and Chordonii.
Biology and health sciences
General classification
null
62198
https://en.wikipedia.org/wiki/Livermorium
Livermorium
Livermorium is a synthetic chemical element; it has symbol Lv and atomic number 116. It is an extremely radioactive element that has only been created in a laboratory setting and has not been observed in nature. The element is named after the Lawrence Livermore National Laboratory in the United States, which collaborated with the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, to discover livermorium during experiments conducted between 2000 and 2006. The name of the laboratory refers to the city of Livermore, California, where it is located, which in turn was named after the rancher and landowner Robert Livermore. The name was adopted by IUPAC on May 30, 2012. Six isotopes of livermorium are known, with mass numbers of 288–293 inclusive; the longest-lived among them is livermorium-293 with a half-life of about 80 milliseconds. A seventh possible isotope with mass number 294 has been reported but not yet confirmed. In the periodic table, it is a p-block transactinide element. It is a member of the 7th period and is placed in group 16 as the heaviest chalcogen, but it has not been confirmed to behave as the heavier homologue to the chalcogen polonium. Livermorium is calculated to have some similar properties to its lighter homologues (oxygen, sulfur, selenium, tellurium, and polonium), and be a post-transition metal, though it should also show several major differences from them. Introduction History Unsuccessful synthesis attempts The first search for element 116, using the reaction between 248Cm and 48Ca, was performed in 1977 by Ken Hulet and his team at the Lawrence Livermore National Laboratory (LLNL). They were unable to detect any atoms of livermorium. Yuri Oganessian and his team at the Flerov Laboratory of Nuclear Reactions (FLNR) in the Joint Institute for Nuclear Research (JINR) subsequently attempted the reaction in 1978 and met failure. In 1985, in a joint experiment between Berkeley and Peter Armbruster's team at GSI, the result was again negative, with a calculated cross section limit of 10–100 pb. Work on reactions with 48Ca, which had proved very useful in the synthesis of nobelium from the natPb+48Ca reaction, nevertheless continued at Dubna, with a superheavy element separator being developed in 1989, a search for target materials and starting of collaborations with LLNL being started in 1990, production of more intense 48Ca beams being started in 1996, and preparations for long-term experiments with 3 orders of magnitude higher sensitivity being performed in the early 1990s. This work led directly to the production of new isotopes of elements 112 to 118 in the reactions of 48Ca with actinide targets and the discovery of the 5 heaviest elements on the periodic table: flerovium, moscovium, livermorium, tennessine, and oganesson. In 1995, an international team led by Sigurd Hofmann at the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Germany attempted to synthesise element 116 in a radiative capture reaction (in which the compound nucleus de-excites through pure gamma emission without evaporating neutrons) between a lead-208 target and selenium-82 projectiles. No atoms of element 116 were identified. Unconfirmed discovery claims In late 1998, Polish physicist Robert Smolańczuk published calculations on the fusion of atomic nuclei towards the synthesis of superheavy atoms, including elements 118 and 116. His calculations suggested that it might be possible to make these two elements by fusing lead with krypton under carefully controlled conditions. 
In 1999, researchers at Lawrence Berkeley National Laboratory made use of these predictions and announced the discovery of elements 118 and 116, in a paper published in Physical Review Letters, and very soon after the results were reported in Science. The researchers reported that they had performed the reaction 208Pb + 86Kr → 293Og + n → 289Lv + α. The following year, they published a retraction after researchers at other laboratories were unable to duplicate the results and the Berkeley lab itself was unable to duplicate them either. In June 2002, the director of the lab announced that the original claim of the discovery of these two elements had been based on data fabricated by principal author Victor Ninov. The isotope 289Lv was finally discovered in 2024 at the JINR. Discovery Livermorium was first synthesized on July 19, 2000, when scientists at Dubna (JINR) bombarded a curium-248 target with accelerated calcium-48 ions. A single atom was detected, decaying by alpha emission with decay energy 10.54 MeV to an isotope of flerovium. The results were published in December 2000. The reaction was 248Cm + 48Ca → 296Lv* → 293Lv + 3n → 289Fl + α. The daughter flerovium isotope had properties matching those of a flerovium isotope first synthesized in June 1999, which was originally assigned to 288Fl, implying an assignment of the parent livermorium isotope to 292Lv. Later work in December 2002 indicated that the synthesized flerovium isotope was actually 289Fl, and hence the assignment of the synthesized livermorium atom was correspondingly altered to 293Lv. Road to confirmation Two further atoms were reported by the institute during their second experiment during April–May 2001. In the same experiment they also detected a decay chain which corresponded to the first observed decay of flerovium in December 1998, which had been assigned to 289Fl. No flerovium isotope with the same properties as the one found in December 1998 has ever been observed again, even in repeats of the same reaction. Later it was found that 289Fl has different decay properties and that the first observed flerovium atom may have been its nuclear isomer 289mFl. The observation of 289mFl in this series of experiments may indicate the formation of a parent isomer of livermorium, namely 293mLv, or a rare and previously unobserved decay branch of the already-discovered state 293Lv to 289mFl. Neither possibility is certain, and research is required to positively assign this activity. Another possibility suggested is the assignment of the original December 1998 atom to 290Fl, as the low beam energy used in that original experiment makes the 2n channel plausible; its parent could then conceivably be 294Lv, but this assignment would still need confirmation in the 248Cm(48Ca,2n)294Lv reaction. The team repeated the experiment in April–May 2005 and detected 8 atoms of livermorium. The measured decay data confirmed the assignment of the first-discovered isotope as 293Lv. In this run, the team also observed the isotope 292Lv for the first time. In further experiments from 2004 to 2006, the team replaced the curium-248 target with the lighter curium isotope curium-245. Here evidence was found for the two isotopes 290Lv and 291Lv. In May 2009, the IUPAC/IUPAP Joint Working Party reported on the discovery of copernicium and acknowledged the discovery of the isotope 283Cn. This implied the de facto discovery of the isotope 291Lv, from the acknowledgment of the data relating to its granddaughter 283Cn, although the livermorium data was not absolutely critical for the demonstration of copernicium's discovery. 
Also in 2009, confirmation from Berkeley and the Gesellschaft für Schwerionenforschung (GSI) in Germany came for the flerovium isotopes 286 to 289, immediate daughters of the four known livermorium isotopes. In 2011, IUPAC evaluated the Dubna team experiments of 2000–2006. Whereas they found the earliest data (not involving 291Lv and 283Cn) inconclusive, the results of 2004–2006 were accepted as identification of livermorium, and the element was officially recognized as having been discovered. The synthesis of livermorium has been separately confirmed at the GSI (2012) and RIKEN (2014 and 2016). In the 2012 GSI experiment, one chain tentatively assigned to 293Lv was shown to be inconsistent with previous data; it is believed that this chain may instead originate from an isomeric state, 293mLv. In the 2016 RIKEN experiment, one atom that may be assigned to 294Lv was seemingly detected, alpha decaying to 290Fl and 286Cn, which underwent spontaneous fission; however, the first alpha from the livermorium nuclide produced was missed, and the assignment to 294Lv is still uncertain though plausible. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, livermorium is sometimes called eka-polonium. In 1979 IUPAC recommended that the placeholder systematic element name ununhexium (Uuh) be used until the discovery of the element was confirmed and a name was decided. Although widely used in the chemical community on all levels, from chemistry classrooms to advanced textbooks, the recommendations were mostly ignored among scientists in the field, who called it "element 116", with the symbol of E116, (116), or even simply 116. According to IUPAC recommendations, the discoverer or discoverers of a new element have the right to suggest a name. The discovery of livermorium was recognized by the Joint Working Party (JWP) of IUPAC on 1 June 2011, along with that of flerovium. According to the vice-director of JINR, the Dubna team originally wanted to name element 116 moscovium, after the Moscow Oblast in which Dubna is located, but it was later decided to use this name for element 115 instead. The name livermorium and the symbol Lv were adopted on May 23, 2012. The name recognises the Lawrence Livermore National Laboratory, within the city of Livermore, California, US, which collaborated with JINR on the discovery. The city in turn is named after the American rancher Robert Livermore, a naturalized Mexican citizen of English birth. The naming ceremony for flerovium and livermorium was held in Moscow on October 24, 2012. Other routes of synthesis The synthesis of livermorium in fusion reactions using projectiles heavier than 48Ca has been explored in preparation for synthesis attempts of the yet-undiscovered element 120, as such reactions would necessarily utilize heavier projectiles. In 2023, the reaction between 238U and 54Cr was studied at the JINR's Superheavy Element Factory in Dubna; one atom of the new isotope 288Lv was reported, though more detailed analysis has not yet been published. Similarly, in 2024, a team at the Lawrence Berkeley National Laboratory reported the synthesis of two atoms of 290Lv in the reaction between 244Pu and 50Ti. This result was described as "truly groundbreaking" by RIKEN director Hiromitsu Haba, whose team plans to search for element 119. The team at JINR studied the reaction between 242Pu and 50Ti in 2024 as a follow-up to the 238U+54Cr, obtaining additional decay data for 288Lv and its decay products and discovering the new isotope 289Lv. 
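The isotopes produced in the reactions described above follow directly from proton and mass-number bookkeeping: the proton numbers of target and projectile must sum to 116, and the surplus mass number is carried off as evaporated neutrons. The short Python sketch below illustrates this arithmetic for three of the reactions mentioned; the function and dictionary are invented for illustration and are not part of any nuclear-data library.

```python
# Proton/neutron bookkeeping for the livermorium synthesis reactions above.
# Illustrative only; not a nuclear-physics tool.

Z = {"Ca": 20, "Ti": 22, "Cr": 24, "U": 92, "Pu": 94, "Cm": 96, "Lv": 116}

def evaporated_neutrons(target, a_target, projectile, a_proj, a_product):
    """Check that the product is livermorium and count evaporated neutrons."""
    assert Z[target] + Z[projectile] == Z["Lv"]  # protons must sum to 116
    return a_target + a_proj - a_product          # surplus mass number = neutrons emitted

print(evaporated_neutrons("Cm", 248, "Ca", 48, 293))  # 3 -> 248Cm(48Ca,3n)293Lv
print(evaporated_neutrons("U",  238, "Cr", 54, 288))  # 4 -> 238U(54Cr,4n)288Lv
print(evaporated_neutrons("Pu", 244, "Ti", 50, 290))  # 4 -> 244Pu(50Ti,4n)290Lv
```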
Predicted properties Other than nuclear properties, no properties of livermorium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Properties of livermorium remain unknown and only predictions are available. Nuclear stability and isotopes Livermorium is expected to be near an island of stability centered on copernicium (element 112) and flerovium (element 114). Due to the expected high fission barriers, any nucleus within this island of stability exclusively decays by alpha decay and perhaps some electron capture and beta decay. While the known isotopes of livermorium do not actually have enough neutrons to be on the island of stability, they can be seen to approach the island, as the heavier isotopes are generally the longer-lived ones. Superheavy elements are produced by nuclear fusion. These fusion reactions can be divided into "hot" and "cold" fusion, depending on the excitation energy of the compound nucleus produced. In hot fusion reactions, very light, high-energy projectiles are accelerated toward very heavy targets (actinides), giving rise to compound nuclei at high excitation energy (~40–50 MeV) that may either fission or evaporate several (3 to 5) neutrons. In cold fusion reactions (which use heavier projectiles, typically from the fourth period, and lighter targets, usually lead and bismuth), the produced fused nuclei have a relatively low excitation energy (~10–20 MeV), which decreases the probability that these products will undergo fission reactions. As the fused nuclei cool to the ground state, they require emission of only one or two neutrons. Hot fusion reactions tend to produce more neutron-rich products because the actinides have the highest neutron-to-proton ratios of any elements that can presently be made in macroscopic quantities. Important information could be gained regarding the properties of superheavy nuclei by the synthesis of more livermorium isotopes, specifically those with a few neutrons more or less than the known ones – 286Lv, 287Lv, 294Lv, and 295Lv. This is possible because there are many reasonably long-lived isotopes of curium that can be used to make a target. The light isotopes can be made by fusing curium-243 with calcium-48. They would undergo a chain of alpha decays, ending at transactinide isotopes that are too light to achieve by hot fusion and too heavy to be produced by cold fusion. The same neutron-deficient isotopes are also reachable in reactions with projectiles heavier than 48Ca, which will be necessary to reach elements beyond atomic number 118 (or possibly 119); this is how 288Lv and 289Lv were discovered. The synthesis of the heavy isotopes 294Lv and 295Lv could be accomplished by fusing the heavy curium isotope curium-250 with calcium-48. The cross section of this nuclear reaction would be about 1 picobarn, though it is not yet possible to produce 250Cm in the quantities needed for target manufacture. Alternatively, 294Lv could be produced via charged-particle evaporation in the 251Cf(48Ca,pn) reaction. After a few alpha decays, these livermorium isotopes would reach nuclides at the line of beta stability. Additionally, electron capture may also become an important decay mode in this region, allowing affected nuclei to reach the middle of the island. 
For example, it is predicted that 295Lv would alpha decay to 291Fl, which would undergo successive electron capture to 291Nh and then 291Cn, which is expected to be in the middle of the island of stability and have a half-life of about 1200 years, affording the most likely hope of reaching the middle of the island using current technology. A drawback is that the decay properties of superheavy nuclei this close to the line of beta stability are largely unexplored. Other possibilities to synthesize nuclei on the island of stability include quasifission (partial fusion followed by fission) of a massive nucleus. Such nuclei tend to fission, expelling doubly magic or nearly doubly magic fragments such as calcium-40, tin-132, lead-208, or bismuth-209. Recently it has been shown that the multi-nucleon transfer reactions in collisions of actinide nuclei (such as uranium and curium) might be used to synthesize the neutron-rich superheavy nuclei located at the island of stability, although formation of the lighter elements nobelium or seaborgium is more favored. One last possibility to synthesize isotopes near the island is to use controlled nuclear explosions to create a neutron flux high enough to bypass the gaps of instability at 258–260Fm and at mass number 275 (atomic numbers 104 to 108), mimicking the r-process in which the actinides were first produced in nature and the gap of instability around radon bypassed. Some such isotopes (especially 291Cn and 293Cn) may even have been synthesized in nature, but would have decayed away far too quickly (with half-lives of only thousands of years) and be produced in far too small quantities (about 10−12 the abundance of lead) to be detectable as primordial nuclides today outside cosmic rays. Physical and atomic In the periodic table, livermorium is a member of group 16, the chalcogens. It appears below oxygen, sulfur, selenium, tellurium, and polonium. Every previous chalcogen has six electrons in its valence shell, forming a valence electron configuration of ns2np4. In livermorium's case, the trend should be continued and the valence electron configuration is predicted to be 7s27p4; therefore, livermorium will have some similarities to its lighter congeners. Differences are likely to arise; a large contributing effect is the spin–orbit (SO) interaction—the mutual interaction between the electrons' motion and spin. It is especially strong for the superheavy elements, because their electrons move much faster than in lighter atoms, at velocities comparable to the speed of light. In relation to livermorium atoms, it lowers the 7s and the 7p electron energy levels (stabilizing the corresponding electrons), but two of the 7p electron energy levels are stabilized more than the other four. The stabilization of the 7s electrons is called the inert pair effect, and the effect "tearing" the 7p subshell into the more stabilized and the less stabilized parts is called subshell splitting. Computational chemists see the split as a change of the second (azimuthal) quantum number l from 1 to 1/2 and 3/2 for the more stabilized and less stabilized parts of the 7p subshell, respectively: the 7p1/2 subshell acts as a second inert pair, though not as inert as the 7s electrons, while the 7p3/2 subshell can easily participate in chemistry. For many theoretical purposes, the valence electron configuration may be represented to reflect the 7p subshell split as 7s2(7p1/2)2(7p3/2)2. 
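The effect of these near-light electron velocities can be illustrated with a rough hydrogenic estimate: for a one-electron ion of atomic number Z, the 1s electron moves at roughly v/c = Zα, giving a relativistic mass factor of 1/sqrt(1 − (Zα)²). The sketch below uses this crude estimate, which lands close to the mass-increase figures for hydrogen-like tellurium, polonium, and livermorium quoted in the next paragraph; the published values come from more detailed calculations, and the code is illustrative only.

```python
# Back-of-the-envelope estimate of relativistic mass increase for the 1s
# electron of a hydrogen-like ion: v/c ~ Z*alpha, gamma = 1/sqrt(1-(Z*alpha)^2).

from math import sqrt

ALPHA = 1 / 137.036  # fine-structure constant

def gamma_hydrogenic(z: int) -> float:
    v_over_c = z * ALPHA
    return 1 / sqrt(1 - v_over_c ** 2)

for name, z in [("Te", 52), ("Po", 84), ("Lv", 116)]:
    print(name, round(gamma_hydrogenic(z), 3))
# Te ~1.081, Po ~1.266, Lv ~1.878 (the text quotes 1.080, 1.26, and 1.86)
```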
Inert pair effects in livermorium should be even stronger than in polonium and hence the +2 oxidation state becomes more stable than the +4 state, which would be stabilized only by the most electronegative ligands; this is reflected in the expected ionization energies of livermorium, where there are large gaps between the second and third ionization energies (corresponding to the breaching of the unreactive 7p1/2 shell) and fourth and fifth ionization energies. Indeed, the 7s electrons are expected to be so inert that the +6 state will not be attainable. The melting and boiling points of livermorium are expected to continue the trends down the chalcogens; thus livermorium should melt at a higher temperature than polonium, but boil at a lower temperature. It should also be denser than polonium (α-Lv: 12.9 g/cm3; α-Po: 9.2 g/cm3); like polonium it should also form an α and a β allotrope. The electron of a hydrogen-like livermorium atom (oxidized so that it only has one electron, Lv115+) is expected to move so fast that it has a mass 1.86 times that of a stationary electron, due to relativistic effects. For comparison, the figures for hydrogen-like polonium and tellurium are expected to be 1.26 and 1.080 respectively. Chemical Livermorium is projected to be the fourth member of the 7p series of chemical elements and the heaviest member of group 16 in the periodic table, below polonium. While it is the least theoretically studied of the 7p elements, its chemistry is expected to be quite similar to that of polonium. The group oxidation state of +6 is known for all the chalcogens apart from oxygen which cannot expand its octet and is one of the strongest oxidizing agents among the chemical elements. Oxygen is thus limited to a maximum +2 state, exhibited in the fluoride OF2. The +4 state is known for sulfur, selenium, tellurium, and polonium, undergoing a shift in stability from reducing for sulfur(IV) and selenium(IV) through being the most stable state for tellurium(IV) to being oxidizing in polonium(IV). This suggests a decreasing stability for the higher oxidation states as the group is descended due to the increasing importance of relativistic effects, especially the inert pair effect. The most stable oxidation state of livermorium should thus be +2, with a rather unstable +4 state. The +2 state should be about as easy to form as it is for beryllium and magnesium, and the +4 state should only be achieved with strongly electronegative ligands, such as in livermorium(IV) fluoride (LvF4). The +6 state should not exist at all due to the very strong stabilization of the 7s electrons, making the valence core of livermorium only four electrons. The lighter chalcogens are also known to form a −2 state as oxide, sulfide, selenide, telluride, and polonide; due to the destabilization of livermorium's 7p3/2 subshell, the −2 state should be very unstable for livermorium, whose chemistry should be essentially purely cationic, though the larger subshell and spinor energy splittings of livermorium as compared to polonium should make Lv2− slightly less unstable than expected. Livermorium hydride (LvH2) would be the heaviest chalcogen hydride and the heaviest homolog of water (the lighter ones are H2S, H2Se, H2Te, and PoH2). 
Polane (polonium hydride) is a more covalent compound than most metal hydrides because polonium straddles the border between metal and metalloid and has some nonmetallic properties: it is intermediate between a hydrogen halide like hydrogen chloride (HCl) and a metal hydride like stannane (SnH4). Livermorane should continue this trend: it should be a hydride rather than a livermoride, but still a covalent molecular compound. Spin-orbit interactions are expected to make the Lv–H bond longer than expected from periodic trends alone, and make the H–Lv–H bond angle larger than expected: this is theorized to be because the unoccupied 8s orbitals are relatively low in energy and can hybridize with the valence 7p orbitals of livermorium. This phenomenon, dubbed "supervalent hybridization", has some analogues in non-relativistic regions in the periodic table; for example, molecular calcium difluoride has 4s and 3d involvement from the calcium atom. The heavier livermorium dihalides are predicted to be linear, but the lighter ones are predicted to be bent. Experimental chemistry Unambiguous determination of the chemical characteristics of livermorium has not yet been established. In 2011, experiments were conducted to create nihonium, flerovium, and moscovium isotopes in the reactions between calcium-48 projectiles and targets of americium-243 and plutonium-244. The targets included lead and bismuth impurities and hence some isotopes of bismuth and polonium were generated in nucleon transfer reactions. This, while an unforeseen complication, could give information that would help in the future chemical investigation of the heavier homologs of bismuth and polonium, which are respectively moscovium and livermorium. The produced nuclides bismuth-213 and polonium-212m were transported as the hydrides 213BiH3 and 212mPoH2 at 850 °C through a quartz wool filter unit held with tantalum, showing that these hydrides were surprisingly thermally stable, although their heavier congeners McH3 and LvH2 would be expected to be less thermally stable from simple extrapolation of periodic trends in the p-block. Further calculations on the stability and electronic structure of BiH3, McH3, PoH2, and LvH2 are needed before chemical investigations take place. Moscovium and livermorium are expected to be volatile enough as pure elements for them to be chemically investigated in the near future, a property livermorium would then share with its lighter congener polonium, though the short half-lives of all presently known livermorium isotopes means that the element is still inaccessible to experimental chemistry.
Physical sciences
Group 16
Chemistry
62200
https://en.wikipedia.org/wiki/Oganesson
Oganesson
Oganesson is a synthetic chemical element; it has symbol Og and atomic number 118. It was first synthesized in 2002 at the Joint Institute for Nuclear Research (JINR) in Dubna, near Moscow, Russia, by a joint team of Russian and American scientists. In December 2015, it was recognized as one of four new elements by the Joint Working Party of the international scientific bodies IUPAC and IUPAP. It was formally named on 28 November 2016. The name honors the nuclear physicist Yuri Oganessian, who played a leading role in the discovery of the heaviest elements in the periodic table. Oganesson has the highest atomic number and highest atomic mass of all known elements. On the periodic table of the elements it is a p-block element, a member of group 18 and the last member of period 7. Its only known isotope, oganesson-294, is highly radioactive, with a half-life of 0.7 ms, and only five atoms have been successfully produced. This has so far prevented any experimental studies of its chemistry. Because of relativistic effects, theoretical studies predict that it would be a solid at room temperature, and significantly reactive, unlike the other members of group 18 (the noble gases). Introduction History Early speculation The possibility of a seventh noble gas, after helium, neon, argon, krypton, xenon, and radon, was considered almost as soon as the noble gas group was discovered. Danish chemist Hans Peter Jørgen Julius Thomsen predicted in April 1895, the year after the discovery of argon, that there was a whole series of chemically inert gases similar to argon that would bridge the halogen and alkali metal groups: he expected that the seventh of this series would end a 32-element period which contained thorium and uranium and have an atomic weight of 292, close to the 294 now known for the first and only confirmed isotope of oganesson. Danish physicist Niels Bohr noted in 1922 that this seventh noble gas should have atomic number 118 and predicted its electronic structure as 2, 8, 18, 32, 32, 18, 8, matching modern predictions. Following this, German chemist Aristid von Grosse wrote an article in 1965 predicting the likely properties of element 118. It was 107 years from Thomsen's prediction before oganesson was successfully synthesized, although its chemical properties have not been investigated to determine if it behaves as the heavier congener of radon. In a 1975 article, American chemist Kenneth Pitzer suggested that element 118 should be a gas or volatile liquid due to relativistic effects. Unconfirmed discovery claims In late 1998, Polish physicist Robert Smolańczuk published calculations on the fusion of atomic nuclei towards the synthesis of superheavy atoms, including oganesson. His calculations suggested that it might be possible to make element 118 by fusing lead with krypton under carefully controlled conditions, and that the fusion probability (cross section) of that reaction would be close to the lead–chromium reaction that had produced element 106, seaborgium. This contradicted predictions that the cross sections for reactions with lead or bismuth targets would go down exponentially as the atomic number of the resulting elements increased. In 1999, researchers at Lawrence Berkeley National Laboratory made use of these predictions and announced the discovery of elements 118 and 116, in a paper published in Physical Review Letters, and very soon after the results were reported in Science. The researchers reported that they had performed the reaction 208Pb + 86Kr → 293Og + n.
In 2001, they published a retraction after researchers at other laboratories were unable to duplicate the results and the Berkeley lab could not duplicate them either. In June 2002, the director of the lab announced that the original claim of the discovery of these two elements had been based on data fabricated by principal author Victor Ninov. Newer experimental results and theoretical predictions have confirmed the exponential decrease in cross sections with lead and bismuth targets as the atomic number of the resulting nuclide increases. Discovery reports The first genuine decay of atoms of oganesson was observed in 2002 at the Joint Institute for Nuclear Research (JINR) in Dubna, Russia, by a joint team of Russian and American scientists. Headed by Yuri Oganessian, a Russian nuclear physicist of Armenian ethnicity, the team included American scientists from the Lawrence Livermore National Laboratory in California. The discovery was not announced immediately, because the decay energy of 294Og matched that of 212mPo, a common impurity produced in fusion reactions aimed at producing superheavy elements, and thus announcement was delayed until after a 2005 confirmatory experiment aimed at producing more oganesson atoms. The 2005 experiment used a different beam energy (251 MeV instead of 245 MeV) and target thickness (0.34 mg/cm2 instead of 0.23 mg/cm2). On 9 October 2006, the researchers announced that they had indirectly detected a total of three (possibly four) nuclei of oganesson-294 (one or two in 2002 and two more in 2005) produced via collisions of californium-249 atoms and calcium-48 ions: 249Cf + 48Ca → 294Og + 3n. In 2011, IUPAC evaluated the 2006 results of the Dubna–Livermore collaboration and concluded: "The three events reported for the Z = 118 isotope have very good internal redundancy but with no anchor to known nuclei do not satisfy the criteria for discovery". Because of the very small fusion reaction probability (the fusion cross section is extremely small), the experiment took four months and involved a beam dose of calcium ions that had to be shot at the californium target to produce the first recorded event believed to be the synthesis of oganesson. Nevertheless, researchers were highly confident that the results were not a false positive, since the chance that the detections were random events was estimated to be negligibly small. In the experiments, the alpha-decay of three atoms of oganesson was observed. A fourth decay by direct spontaneous fission was also proposed. A half-life of 0.89 ms was calculated: 294Og decays into 290Lv by alpha decay. Since there were only three nuclei, the half-life derived from observed lifetimes has a large uncertainty. The identification of the nuclei was verified by separately creating the putative daughter nucleus 290Lv directly by means of a bombardment of 245Cm with 48Ca ions, 245Cm + 48Ca → 290Lv + 3n, and checking that the decay matched the decay chain of the 294Og nuclei. The daughter nucleus 290Lv is very unstable, decaying with a lifetime of 14 milliseconds into 286Fl, which may experience either spontaneous fission or alpha decay into 282Cn, which will undergo spontaneous fission. Confirmation In December 2015, the Joint Working Party of international scientific bodies International Union of Pure and Applied Chemistry (IUPAC) and International Union of Pure and Applied Physics (IUPAP) recognized the element's discovery and assigned the priority of the discovery to the Dubna–Livermore collaboration.
This was on account of two 2009 and 2010 confirmations of the properties of the granddaughter of 294Og, 286Fl, at the Lawrence Berkeley National Laboratory, as well as the observation of another consistent decay chain of 294Og by the Dubna group in 2012. The goal of that experiment had been the synthesis of 294Ts via the reaction 249Bk(48Ca,3n), but the short half-life of 249Bk resulted in a significant quantity of the target having decayed to 249Cf, resulting in the synthesis of oganesson instead of tennessine. From 1 October 2015 to 6 April 2016, the Dubna team performed a similar experiment with 48Ca projectiles aimed at a mixed-isotope californium target containing 249Cf, 250Cf, and 251Cf, with the aim of producing the heavier oganesson isotopes 295Og and 296Og. Two beam energies at 252 MeV and 258 MeV were used. Only one atom was seen at the lower beam energy, whose decay chain fitted the previously known one of 294Og (terminating with spontaneous fission of 286Fl), and none were seen at the higher beam energy. The experiment was then halted, as the glue from the sector frames covered the target and blocked evaporation residues from escaping to the detectors. The production of 293Og and its daughter 289Lv, as well as the even heavier isotope 297Og, is also possible using this reaction. The isotopes 295Og and 296Og may also be produced in the fusion of 248Cm with 50Ti projectiles. A search beginning in summer 2016 at RIKEN for 295Og in the 3n channel of this reaction was unsuccessful, though the study is planned to resume; a detailed analysis and cross section limit were not provided. These heavier and likely more stable isotopes may be useful in probing the chemistry of oganesson. Naming Using Mendeleev's nomenclature for unnamed and undiscovered elements, oganesson is sometimes known as eka-radon (until the 1960s as eka-emanation, emanation being the old name for radon). In 1979, IUPAC assigned the systematic placeholder name ununoctium to the undiscovered element, with the corresponding symbol of Uuo, and recommended that it be used until after confirmed discovery of the element. Although the systematic name was widely used in the chemical community at all levels, from chemistry classrooms to advanced textbooks, the recommendation was mostly ignored among scientists in the field, who called it "element 118", with the symbol of E118, (118), or simply 118. Before the retraction in 2001, the researchers from Berkeley had intended to name the element ghiorsium (Gh), after Albert Ghiorso (a leading member of the research team). The Russian discoverers reported their synthesis in 2006. According to IUPAC recommendations, the discoverers of a new element have the right to suggest a name. In 2007, the head of the Russian institute stated the team were considering two names for the new element: flyorium, in honor of Georgy Flyorov, the founder of the research laboratory in Dubna; and moskovium, in recognition of the Moscow Oblast where Dubna is located. He also stated that although the element was discovered in collaboration with the Americans, who provided the californium target, the element should rightly be named in honor of Russia, since the Flyorov Laboratory of Nuclear Reactions at JINR was the only facility in the world which could achieve this result. These names were later suggested for element 114 (flerovium) and element 116 (moscovium).
Flerovium became the name of element 114; the final name proposed for element 116 was instead livermorium, with moscovium later being proposed and accepted for element 115 instead. Traditionally, the names of all noble gases end in "-on", with the exception of helium, which was not known to be a noble gas when discovered. The IUPAC guidelines valid at the moment of the discovery approval, however, required all new elements to be named with the ending "-ium", even if they turned out to be halogens (traditionally ending in "-ine") or noble gases (traditionally ending in "-on"). While the provisional name ununoctium followed this convention, a new IUPAC recommendation published in 2016 recommended using the "-on" ending for new group 18 elements, regardless of whether they turn out to have the chemical properties of a noble gas. The scientists involved in the discovery of element 118, as well as those of 117 and 115, held a conference call on 23 March 2016 to decide their names. Element 118 was the last to be decided upon; after Oganessian was asked to leave the call, the remaining scientists unanimously decided to have the element named "oganesson" after him. Oganessian was a pioneer in superheavy element research for sixty years, reaching back to the field's foundation: his team and his proposed techniques had led directly to the synthesis of elements 107 through 118. Mark Stoyer, a nuclear chemist at the LLNL, later recalled, "We had intended to propose that name from Livermore, and things kind of got proposed at the same time from multiple places. I don't know if we can claim that we actually proposed the name, but we had intended it." In internal discussions, IUPAC asked the JINR if they wanted the element to be spelled "oganeson" to match the Russian spelling more closely. Oganessian and the JINR refused this offer, citing the Soviet-era practice of transliterating names into the Latin alphabet under the rules of the French language ("Oganessian" is such a transliteration) and arguing that "oganesson" would be easier to link to the person. In June 2016, IUPAC announced that the discoverers planned to give the element the name oganesson (symbol: Og). The name became official on 28 November 2016. In 2017, Oganessian commented on the naming. The naming ceremony for moscovium, tennessine, and oganesson was held on 2 March 2017 at the Russian Academy of Sciences in Moscow. In a 2019 interview, Oganessian was asked what it was like to see his name in the periodic table next to Einstein, Mendeleev, the Curies, and Rutherford. Characteristics Other than nuclear properties, no properties of oganesson or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that it decays very quickly. Thus only predictions are available. Nuclear stability and isotopes The stability of nuclei quickly decreases with the increase in atomic number after curium, element 96, whose most stable isotope, 247Cm, has a half-life four orders of magnitude longer than that of any subsequent element. All nuclides with an atomic number above 101 undergo radioactive decay with half-lives shorter than 30 hours. No elements with atomic numbers above 82 (after lead) have stable isotopes. This is because of the ever-increasing Coulomb repulsion of protons, so that the strong nuclear force cannot hold the nucleus together against spontaneous fission for long.
Calculations suggest that in the absence of other stabilizing factors, elements with more than 104 protons should not exist. However, researchers in the 1960s suggested that the closed nuclear shells around 114 protons and 184 neutrons should counteract this instability, creating an island of stability in which nuclides could have half-lives reaching thousands or millions of years. While scientists have still not reached the island, the mere existence of the superheavy elements (including oganesson) confirms that this stabilizing effect is real, and in general the known superheavy nuclides become exponentially longer-lived as they approach the predicted location of the island. Oganesson is radioactive, decaying via alpha decay and spontaneous fission, with a half-life that appears to be less than a millisecond. Nonetheless, this is still longer than some predicted values. Calculations using a quantum-tunneling model predict the existence of several heavier isotopes of oganesson with alpha-decay half-lives close to 1 ms. Theoretical calculations done on the synthetic pathways for, and the half-life of, other isotopes have shown that some could be slightly more stable than the synthesized isotope 294Og, most likely 293Og, 295Og, 296Og, 297Og, 298Og, 300Og and 302Og (the last reaching the N = 184 shell closure). Of these, 297Og might provide the best chances for obtaining longer-lived nuclei, and thus might become the focus of future work with this element. Some isotopes with many more neutrons, such as some located around 313Og, could also provide longer-lived nuclei. The isotopes from 291Og to 295Og might be produced as daughters of element 120 isotopes that can be reached in the reactions 249–251Cf+50Ti, 245Cm+48Ca, and 248Cm+48Ca. In a quantum-tunneling model, the alpha-decay half-life of 294Og was predicted using the experimental Q-value published in 2004. Calculation with theoretical Q-values from the macroscopic-microscopic model of Muntian–Hofman–Patyk–Sobiczewski gives somewhat lower but comparable results. Calculated atomic and physical properties Oganesson is a member of group 18, the zero-valence elements. The members of this group are usually inert to most common chemical reactions (for example, combustion) because the outer valence shell is completely filled with eight electrons. This produces a stable, minimum energy configuration in which the outer electrons are tightly bound. It is thought that similarly, oganesson has a closed outer valence shell in which its valence electrons are arranged in a 7s27p6 configuration. Consequently, some expect oganesson to have similar physical and chemical properties to other members of its group, most closely resembling the noble gas above it in the periodic table, radon. Following the periodic trend, oganesson would be expected to be slightly more reactive than radon. However, theoretical calculations have shown that it could be significantly more reactive. In addition to being far more reactive than radon, oganesson may be even more reactive than the elements flerovium and copernicium, which are heavier homologs of the more chemically active elements lead and mercury, respectively. The reason for the possible enhancement of the chemical activity of oganesson relative to radon is an energetic destabilization and a radial expansion of the last occupied 7p-subshell.
More precisely, considerable spin–orbit interactions between the 7p electrons and the inert 7s electrons effectively lead to a second valence shell closing at flerovium, and a significant decrease in stabilization of the closed shell of oganesson. It has also been calculated that oganesson, unlike the other noble gases, binds an electron with release of energy, or in other words, it exhibits positive electron affinity, due to the relativistically stabilized 8s energy level and the destabilized 7p3/2 level, whereas copernicium and flerovium are predicted to have no electron affinity. Nevertheless, quantum electrodynamic corrections have been shown to be quite significant in reducing this affinity by decreasing the binding in the anion Og− by 9%, thus confirming the importance of these corrections in superheavy elements. 2022 calculations expect the electron affinity of oganesson to be 0.080(6) eV. Monte Carlo simulations of oganesson's molecular dynamics predict that, owing to relativistic effects, its melting and boiling points are considerably higher than they would be if these effects were ignored. Thus oganesson would probably be a solid rather than a gas under standard conditions, though still with a rather low melting point. Oganesson is expected to have an extremely large polarizability, almost double that of radon. Because of its tremendous polarizability, oganesson is expected to have an anomalously low first ionization energy of about 860 kJ/mol, similar to that of cadmium and less than those of iridium, platinum, and gold. This is significantly smaller than the values predicted for darmstadtium, roentgenium, and copernicium, although it is greater than that predicted for flerovium. Its second ionization energy should be around 1560 kJ/mol. Even the shell structure in the nucleus and electron cloud of oganesson is strongly impacted by relativistic effects: the valence and core electron subshells in oganesson are expected to be "smeared out" in a homogeneous Fermi gas of electrons, unlike those of the "less relativistic" radon and xenon (although there is some incipient delocalisation in radon), due to the very strong spin–orbit splitting of the 7p orbital in oganesson. A similar effect for nucleons, particularly neutrons, is incipient in the closed-neutron-shell nucleus 302Og and is strongly in force at the hypothetical superheavy closed-shell nucleus 472164, with 164 protons and 308 neutrons. Studies have also predicted that due to increasing electrostatic forces, oganesson may have a semibubble structure in proton density, having few protons at the center of its nucleus. Moreover, spin–orbit effects may cause bulk oganesson to be a semiconductor, with a small band gap predicted. All the lighter noble gases are insulators instead: for example, the band gap of bulk radon is expected to be considerably larger. Predicted compounds The only confirmed isotope of oganesson, 294Og, has much too short a half-life to be chemically investigated experimentally. Therefore, no compounds of oganesson have been synthesized yet. Nevertheless, calculations on theoretical compounds have been performed since 1964. It is expected that if the ionization energy of the element is high enough, it will be difficult to oxidize and therefore, the most common oxidation state would be 0 (as for the noble gases); nevertheless, this appears not to be the case.
Calculations on the diatomic molecule Og2 showed a bonding interaction roughly equivalent to that calculated for Hg2, and a dissociation energy of 6 kJ/mol, roughly 4 times that of Rn2. Most strikingly, it was calculated to have a bond length 0.16 Å shorter than in Rn2, which would be indicative of a significant bonding interaction. On the other hand, the compound OgH+ exhibits a dissociation energy (in other words, the proton affinity of oganesson) that is smaller than that of RnH+. The bonding between oganesson and hydrogen in OgH is predicted to be very weak and can be regarded as a pure van der Waals interaction rather than a true chemical bond. On the other hand, with highly electronegative elements, oganesson seems to form more stable compounds than, for example, copernicium or flerovium. The stable oxidation states +2 and +4 have been predicted to exist in the fluorides OgF2 and OgF4. The +6 state would be less stable due to the strong binding of the 7p1/2 subshell. This is a result of the same spin–orbit interactions that make oganesson unusually reactive. For example, it was shown that the reaction of oganesson with F2 to form the compound OgF2 would release an energy of 106 kcal/mol, of which about 46 kcal/mol come from these interactions. For comparison, the spin–orbit interaction for the similar molecule RnF2 is about 10 kcal/mol out of a formation energy of 49 kcal/mol. The same interaction stabilizes the tetrahedral Td configuration for OgF4, as distinct from the square planar D4h one of XeF4, which RnF4 is also expected to have; this is because OgF4 is expected to have two inert electron pairs (7s and 7p1/2). As such, OgF6 is expected to be unbound, continuing an expected trend in the destabilisation of the +6 oxidation state (RnF6 is likewise expected to be much less stable than XeF6). The Og–F bond will most probably be ionic rather than covalent, rendering the oganesson fluorides non-volatile. OgF2 is predicted to be partially ionic due to oganesson's high electropositivity. Oganesson is predicted to be sufficiently electropositive to form an Og–Cl bond with chlorine. A compound of oganesson and tennessine, OgTs4, has been predicted to be potentially stable chemically.
Physical sciences
Group 18
Chemistry
62247
https://en.wikipedia.org/wiki/Backus%E2%80%93Naur%20form
Backus–Naur form
In computer science, Backus–Naur form (BNF; also known as Backus normal form) is a notation used to describe the syntax of programming languages or other formal languages. It was developed by John Backus and Peter Naur. BNF can be described as a metasyntax notation for context-free grammars. Backus–Naur form is applied wherever exact descriptions of languages are needed, such as in official language specifications, in manuals, and in textbooks on programming language theory. BNF can be used to describe document formats, instruction sets, and communication protocols. Over time, many extensions and variants of the original Backus–Naur notation have been created; some are exactly defined, including extended Backus–Naur form (EBNF) and augmented Backus–Naur form (ABNF). Overview BNFs describe how to combine different symbols to produce a syntactically correct sequence. BNFs consist of three components: a set of non-terminal symbols, a set of terminal symbols, and rules for replacing non-terminal symbols with a sequence of symbols. These so-called "derivation rules" are written as <symbol> ::= __expression__ where: <symbol> is a nonterminal variable that is always enclosed between the pair <>. ::= means that the symbol on the left must be replaced with the expression on the right. __expression__ consists of one or more sequences of either terminal or nonterminal symbols where each sequence is separated by a vertical bar "|" indicating a choice, the whole being a possible substitution for the symbol on the left. All syntactically correct sequences must be generated in the following manner: Initialize the sequence so that it just contains one start symbol. Apply derivation rules to this start symbol and the ensuing sequences of symbols. Applying rules in this manner can produce longer and longer sequences, so many BNF definitions allow for a special "delete" symbol to be included in the specification. We can specify a rule that allows us to replace some symbols with this "delete" symbol, which is meant to indicate that we can remove the symbols from our sequence and still have a syntactically correct sequence. Example As an example, consider this possible BNF for a U.S. postal address: <postal-address> ::= <name-part> <street-address> <zip-part> <name-part> ::= <personal-part> <last-name> <opt-suffix-part> <EOL> | <personal-part> <name-part> <personal-part> ::= <first-name> | <initial> "." <street-address> ::= <house-num> <street-name> <opt-apt-num> <EOL> <zip-part> ::= <town-name> "," <state-code> <ZIP-code> <EOL> <opt-suffix-part> ::= "Sr." | "Jr." | <roman-numeral> | "" <opt-apt-num> ::= "Apt" <apt-num> | "" This translates into English as: A postal address consists of a name-part, followed by a street-address part, followed by a zip-code part. A name-part consists of either: a personal-part followed by a last name followed by an optional suffix (Jr., Sr., or dynastic number) and end-of-line, or a personal part followed by a name part (this rule illustrates the use of recursion in BNFs, covering the case of people who use multiple first and middle names and initials). A personal-part consists of either a first name or an initial followed by a dot. A street address consists of a house number, followed by a street name, followed by an optional apartment specifier, followed by an end-of-line. A zip-part consists of a town-name, followed by a comma, followed by a state code, followed by a ZIP-code followed by an end-of-line. An opt-suffix-part consists of a suffix, such as "Sr.", "Jr."
or a roman-numeral, or an empty string (i.e. nothing). An opt-apt-num consists of a prefix "Apt" followed by an apartment number, or an empty string (i.e. nothing). Note that many things (such as the format of a first-name, apartment number, ZIP-code, and Roman numeral) are left unspecified here. If necessary, they may be described using additional BNF rules. History The idea of describing the structure of language using rewriting rules can be traced back to at least the work of Pāṇini, an ancient Indian Sanskrit grammarian and a revered scholar in Hinduism who lived sometime between the 6th and 4th century BC. His notation to describe Sanskrit word structure is equivalent in power to that of Backus and has many similar properties. In Western society, grammar was long regarded as a subject for teaching, rather than scientific study; descriptions were informal and targeted at practical usage. In the first half of the 20th century, linguists such as Leonard Bloomfield and Zellig Harris started attempts to formalize the description of language, including phrase structure. Meanwhile, string rewriting rules as formal logical systems were introduced and studied by mathematicians such as Axel Thue (in 1914), Emil Post (1920s–40s) and Alan Turing (1936). Noam Chomsky, teaching linguistics to students of information theory at MIT, combined linguistics and mathematics by taking what is essentially Thue's formalism as the basis for the description of the syntax of natural language. He also introduced a clear distinction between generative rules (those of context-free grammars) and transformation rules (1956). John Backus, a programming language designer at IBM, proposed a metalanguage of "metalinguistic formulas" to describe the syntax of the new programming language IAL, known today as ALGOL 58 (1959). His notation was first used in the ALGOL 60 report. BNF is a notation for Chomsky's context-free grammars. Backus may have been familiar with Chomsky's work, but there are some doubts about this. As proposed by Backus, the formula defined "classes" whose names are enclosed in angle brackets. For example, <ab>. Each of these names denotes a class of basic symbols. Further development of ALGOL led to ALGOL 60. In the committee's 1963 report, Peter Naur called Backus's notation Backus normal form. Donald Knuth argued that BNF should rather be read as Backus–Naur form, as it is "not a normal form in the conventional sense", unlike, for instance, Chomsky normal form. The name Pāṇini Backus form was also once suggested in view of the fact that the expansion Backus normal form may not be accurate, and that Pāṇini had independently developed a similar notation earlier. BNF is described by Peter Naur in the ALGOL 60 report as a metalinguistic formula. Another example from the ALGOL 60 report illustrates a major difference between the BNF metalanguage and a Chomsky context-free grammar. Metalinguistic variables do not require a rule defining their formation. Their formation may simply be described in natural language within the <> brackets. The comment specification in section 2.3 of the ALGOL 60 report exemplifies how this works: For the purpose of including text among the symbols of a program the following "comment" conventions hold: Equivalence here means that any of the three structures shown in the left column of the report's table may be replaced, in any occurrence outside of strings, by the symbol shown in the same line in the right column without any effect on the action of the program.
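The derivation process described in the overview above can be carried out mechanically. The following Python sketch is a toy illustration only, using a heavily simplified grammar loosely modelled on the postal-address example rather than any real specification; it stores each rule as a list of alternatives and replaces nonterminals until only terminal strings remain:

  import random

  # Each nonterminal maps to a list of alternatives; each alternative is a
  # sequence of terminals (plain strings) and nonterminals (written in <>).
  GRAMMAR = {
      "<name-part>": [["<first-name>", " ", "<last-name>"]],
      "<first-name>": [["Alice"], ["Bob"]],
      "<last-name>": [["Smith"], ["Jones"]],
  }

  def derive(symbol):
      """Apply derivation rules until only terminal symbols remain."""
      if symbol not in GRAMMAR:  # terminal: nothing left to replace
          return symbol
      alternative = random.choice(GRAMMAR[symbol])
      return "".join(derive(part) for part in alternative)

  print(derive("<name-part>"))  # e.g. "Alice Jones", one syntactically correct sequence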
Naur changed two of Backus's symbols to commonly available characters. The ::= symbol was originally a :≡. The | symbol was originally the word "or" (with a bar over it). BNF is very similar to canonical-form Boolean algebra equations that are, and were at the time, used in logic-circuit design. Backus was a mathematician and the designer of the FORTRAN programming language. The study of Boolean algebra is commonly part of a mathematics curriculum. Neither Backus nor Naur described the names enclosed in < > as non-terminals. Chomsky's terminology was not originally used in describing BNF. Naur later described them as classes in ALGOL course materials. In the ALGOL 60 report they were called metalinguistic variables. Anything other than the metasymbols ::=, |, and class names enclosed in < > is a symbol of the language being defined. The metasymbol ::= is to be interpreted as "is defined as". The | is used to separate alternative definitions and is interpreted as "or". The metasymbols < > are delimiters enclosing a class name. BNF is described as a metalanguage for talking about ALGOL by Peter Naur and Saul Rosen. In 1947 Saul Rosen became involved in the activities of the fledgling Association for Computing Machinery, first on the languages committee that became the IAL group and eventually led to ALGOL. He was the first managing editor of the Communications of the ACM. BNF was first used as a metalanguage to talk about the ALGOL language in the ALGOL 60 report. That is how it is explained in ALGOL programming course material developed by Peter Naur in 1962. Early ALGOL manuals by IBM, Honeywell, Burroughs and Digital Equipment Corporation followed the ALGOL 60 report using it as a metalanguage. Saul Rosen in his book describes BNF as a metalanguage for talking about ALGOL. An example of its use as a metalanguage would be in defining an arithmetic expression: <expr> ::= <term> | <expr> <addop> <term>. The first symbol of an alternative may be the class being defined, the repetition, as explained by Naur, having the function of specifying that the alternative sequence can recursively begin with a previous alternative and can be repeated any number of times. For example, above <expr> is defined as a <term> followed by any number of <addop> <term>. In some later metalanguages, such as Schorre's META II, the BNF recursive repeat construct is replaced by a sequence operator and target language symbols defined using quoted strings. The < and > brackets were removed. Parentheses () for mathematical grouping were added. The <expr> rule would be written in META II using those conventions. These changes enabled META II and its derivative programming languages to define and extend their own metalanguage, at the cost of the ability to use a natural language description (a metalinguistic variable) as a language construct description. Many spin-off metalanguages were inspired by BNF. See META II, TREE-META, and Metacompiler. A BNF class describes a language construct formation, with formation defined as a pattern or the action of forming the pattern. The class name expr is described in a natural language as a <term> followed by a sequence <addop> <term>. A class is an abstraction; we can talk about it independent of its formation. We can talk about term, independent of its definition, as being added or subtracted in expr. We can talk about a term being a specific data type and how an expr is to be evaluated having specific combinations of data types, or even reordering an expression to group data types and evaluation results of mixed types.
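The left-recursive <expr> rule discussed above (a <term> followed by any number of <addop> <term>) maps directly onto a parser. The Python sketch below is only an illustration of that correspondence, not any historical tool: it assumes a pre-tokenized input list, reduces <term> to a bare number, and rewrites the left recursion as the repetition Naur describes:

  def parse_term(tokens):
      """<term> ::= <number>  (simplified: no * / or parentheses here)"""
      return int(tokens[0]), tokens[1:]

  def parse_expr(tokens):
      """<expr> ::= <term> | <expr> <addop> <term>
      Implemented iteratively: one <term>, then any number of <addop> <term>."""
      value, rest = parse_term(tokens)
      while rest and rest[0] in ("+", "-"):  # <addop>
          addop, rest = rest[0], rest[1:]
          rhs, rest = parse_term(rest)
          value = value + rhs if addop == "+" else value - rhs
      return value, rest

  result, leftover = parse_expr(["1", "+", "2", "-", "3"])
  print(result)  # 0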
The natural-language supplement provided specific details of the language class semantics to be used by a compiler implementation and a programmer writing an ALGOL program. Natural-language description further supplemented the syntax as well. The integer rule is a good example of natural language and metalanguage used together to describe syntax. There are no specifics on white space in the rule itself. As far as the rule states, we could have space between the digits. In the natural language we complement the BNF metalanguage by explaining that the digit sequence can have no white space between the digits. English is only one of the possible natural languages. Translations of the ALGOL reports were available in many natural languages. The origin of BNF is not as important as its impact on programming language development. During the period immediately following the publication of the ALGOL 60 report, BNF was the basis of many compiler-compiler systems. Some, like "A Syntax Directed Compiler for ALGOL 60" developed by Edgar T. Irons and "A Compiler Building System" developed by Brooker and Morris, directly used BNF. Others, like the Schorre Metacompilers, made it into a programming language with only a few changes. <class name> became symbol identifiers, dropping the enclosing <, > and using quoted strings for symbols of the target language. Arithmetic-like grouping provided a simplification that removed the need to use classes whose only purpose was grouping. The META II arithmetic expression rule shows grouping use. Output expressions placed in a META II rule are used to output code and labels in an assembly language. Rules in META II are equivalent to class definitions in BNF. The Unix utility yacc is based on BNF with code production similar to META II. yacc is most commonly used as a parser generator, and its roots are obviously BNF. BNF today is one of the oldest computer-related languages still in use. Further examples BNF's syntax itself may be represented with a BNF like the following: <syntax> ::= <rule> | <rule> <syntax> <rule> ::= <opt-whitespace> "<" <rule-name> ">" <opt-whitespace> "::=" <opt-whitespace> <expression> <line-end> <opt-whitespace> ::= " " <opt-whitespace> | "" <expression> ::= <list> | <list> <opt-whitespace> "|" <opt-whitespace> <expression> <line-end> ::= <opt-whitespace> <EOL> | <line-end> <line-end> <list> ::= <term> | <term> <opt-whitespace> <list> <term> ::= <literal> | "<" <rule-name> ">" <literal> ::= '"' <text1> '"' | "'" <text2> "'" <text1> ::= "" | <character1> <text1> <text2> ::= "" | <character2> <text2> <character> ::= <letter> | <digit> | <symbol> <letter> ::= "A" | "B" | "C" | "D" | "E" | "F" | "G" | "H" | "I" | "J" | "K" | "L" | "M" | "N" | "O" | "P" | "Q" | "R" | "S" | "T" | "U" | "V" | "W" | "X" | "Y" | "Z" | "a" | "b" | "c" | "d" | "e" | "f" | "g" | "h" | "i" | "j" | "k" | "l" | "m" | "n" | "o" | "p" | "q" | "r" | "s" | "t" | "u" | "v" | "w" | "x" | "y" | "z" <digit> ::= "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" <symbol> ::= "|" | " " | "!" | "#" | "$" | "%" | "&" | "(" | ")" | "*" | "+" | "," | "-" | "." | "/" | ":" | ";" | ">" | "=" | "<" | "?" | "@" | "[" | "\" | "]" | "^" | "_" | "`" | "{" | "}" | "~" <character1> ::= <character> | "'" <character2> ::= <character> | '"' <rule-name> ::= <letter> | <rule-name> <rule-char> <rule-char> ::= <letter> | <digit> | "-" Note that "" is the empty string. The original BNF did not use quotes as shown in the <literal> rule. This assumes that no whitespace is necessary for proper interpretation of the rule.
<EOL> represents the appropriate line-end specifier (in ASCII, carriage return, line feed, or both, depending on the operating system). <rule-name> and <text> are to be substituted with a declared rule's name/label or literal text, respectively. In the U.S. postal address example above, the entire block-quote is a <syntax>. Each line or unbroken grouping of lines is a rule; for example, one rule begins with <name-part> ::=. The other part of that rule (aside from a line-end) is an expression, which consists of two lists separated by a vertical bar |. These two lists consist of some terms (three terms and two terms, respectively). Each term in this particular rule is a rule-name. Variants EBNF There are many variants and extensions of BNF, generally either for the sake of simplicity and succinctness, or to adapt it to a specific application. One common feature of many variants is the use of regular expression repetition operators such as * and +. The extended Backus–Naur form (EBNF) is a common one. Another common extension is the use of square brackets around optional items. Although not present in the original ALGOL 60 report (instead introduced a few years later in IBM's PL/I definition), the notation is now universally recognised. ABNF Augmented Backus–Naur form (ABNF) and Routing Backus–Naur form (RBNF) are extensions commonly used to describe Internet Engineering Task Force (IETF) protocols. Parsing expression grammars build on the BNF and regular expression notations to form an alternative class of formal grammar, which is essentially analytic rather than generative in character. Others Many BNF specifications found online today are intended to be human-readable and are non-formal. These often include many of the following syntax rules and extensions: Optional items enclosed in square brackets: [<item-x>]. Items existing 0 or more times are enclosed in curly brackets or suffixed with an asterisk (*), such as <word> ::= <letter> {<letter>} or <word> ::= <letter> <letter>*, respectively. Items existing 1 or more times are suffixed with an addition (plus) symbol, +, such as <word> ::= <letter>+. Terminals may appear in bold rather than italics, and non-terminals in plain text rather than angle brackets. Where items are grouped, they are enclosed in simple parentheses. Software using BNF or variants Software that accepts BNF (or a superset) as input ANTLR, a parser generator written in Java Coco/R, compiler generator accepting an attributed grammar in EBNF DMS Software Reengineering Toolkit, program analysis and transformation system for arbitrary languages GOLD, a BNF parser generator RPA BNF parser.
Online (PHP) demo parsing: JavaScript, XML XACT X4MR System, a rule-based expert system for programming language translation XPL Analyzer, a tool which accepts simplified BNF for a language and produces a parser for that language in XPL; it may be integrated into the supplied SKELETON program, with which the language may be debugged (a SHARE contributed program, which was preceded by A Compiler Generator) bnfparser2, a universal syntax verification utility bnf2xml, marks up input with XML tags using advanced BNF matching JavaCC (Java Compiler Compiler), the Java parser generator Similar software GNU bison, GNU version of yacc Yacc, parser generator (most commonly used with the Lex preprocessor) Racket's parser tools, lex and yacc-style parsing (Beautiful Racket edition) Qlik Sense, a BI tool, uses a variant of BNF for scripting BNF Converter (BNFC), operating on a variant called "labeled Backus–Naur form" (LBNF). In this variant, each production for a given non-terminal is given a label, which can be used as a constructor of an algebraic data type representing that nonterminal. The converter is capable of producing types and parsers for abstract syntax in several languages, including Haskell and Java.
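Tools like the parser generators listed above all begin by reading BNF text itself. As a minimal illustration of that first step (a toy sketch only, not the input format of any particular tool; it ignores quoting rules, escaping, and multi-line rules), the following Python function splits a single BNF rule into its rule name and its alternatives:

  def parse_bnf_rule(line):
      """Split a rule such as '<digit> ::= "0" | "1"' into its name and alternatives."""
      name, expression = line.split("::=", 1)
      alternatives = [alternative.split() for alternative in expression.split("|")]
      return name.strip(), alternatives

  rule_name, alternatives = parse_bnf_rule('<digit> ::= "0" | "1"')
  print(rule_name)     # <digit>
  print(alternatives)  # [['"0"'], ['"1"']]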
Technology
Programming languages
null
62251
https://en.wikipedia.org/wiki/Ctenophora
Ctenophora
Ctenophora (singular: ctenophore) comprise a phylum of marine invertebrates, commonly known as comb jellies, that inhabit sea waters worldwide. They are notable for the groups of cilia they use for swimming (commonly referred to as "combs"), and they are the largest animals to swim with the help of cilia. Depending on the species, adult ctenophores range from a few millimeters to about 1.5 m in size. A total of 186 living species are recognised. Their bodies consist of a mass of jelly, with a layer two cells thick on the outside, and another lining the internal cavity. The phylum has a wide range of body forms, including the egg-shaped cydippids with a pair of retractable tentacles that capture prey, the flat, generally combless platyctenids, and the large-mouthed beroids, which prey on other ctenophores. Almost all ctenophores function as predators, taking prey ranging from microscopic larvae and rotifers to the adults of small crustaceans; the exceptions are juveniles of two species, which live as parasites on the salps on which adults of their species feed. Despite their soft, gelatinous bodies, fossils thought to represent ctenophores appear in Lagerstätten dating as far back as the early Cambrian, about 525 million years ago. The position of the ctenophores in the "tree of life" has long been debated in molecular phylogenetics studies. Biologists proposed that ctenophores constitute the second-earliest branching animal lineage, with sponges being the sister-group to all other multicellular animals (Porifera sister hypothesis). Other biologists contend that ctenophores emerged earlier than sponges (Ctenophora sister hypothesis), which themselves appeared before the split between cnidarians and bilaterians. Pisani et al. reanalyzed the data and suggested that the computer algorithms used for analysis were misled by the presence of specific ctenophore genes that were markedly different from those of other species. Follow-up analysis by Whelan et al. (2017) yielded further support for the 'Ctenophora sister' hypothesis; the issue remains a matter of taxonomic dispute. Schultz et al. (2023) found irreversible changes in synteny in the sister of the Ctenophora, the Myriazoa, consisting of the rest of the animals. Distinguishing features Among animal phyla, the ctenophores are more complex than sponges, about as complex as cnidarians (jellyfish, sea anemones, etc.), and less complex than bilaterians (which include almost all other animals). Unlike sponges, both ctenophores and cnidarians have: cells bound by inter-cell connections and carpet-like basement membranes; muscles; nervous systems; and sensory organs (in some, not all). Ctenophores are distinguished from all other animals by having colloblasts, which are sticky and adhere to prey, although a few ctenophore species lack them. Like cnidarians, ctenophores have two main layers of cells that sandwich a middle layer of jelly-like material, which is called the mesoglea in cnidarians and ctenophores; more complex animals have three main cell layers and no intermediate jelly-like layer. Hence ctenophores and cnidarians have traditionally been labelled diploblastic. Both ctenophores and cnidarians have a type of muscle that, in more complex animals, arises from the middle cell layer, and as a result some recent textbooks classify ctenophores as triploblastic, while others still regard them as diploblastic.
The comb jellies have more than 80 different cell types, exceeding the numbers from other groups like placozoans, sponges, cnidarians, and some deep-branching bilaterians. Ranging from about one millimeter to 1.5 m in size, ctenophores are the largest non-colonial animals that use cilia as their main method of locomotion. Most species have eight strips, called comb rows, that run the length of their bodies and bear comb-like bands of cilia, called "ctenes", stacked along the comb rows so that when the cilia beat, those of each comb touch the comb below. The name "ctenophora" means "comb-bearing", from the Greek κτείς (stem-form κτεν-) meaning "comb" and the Greek suffix -φορος meaning "carrying". Description For a phylum with relatively few species, ctenophores have a wide range of body plans. Coastal species need to be tough enough to withstand waves and swirling sediment particles, while some oceanic species are so fragile that it is very difficult to capture them intact for study. In addition, oceanic species do not preserve well, and are known mainly from photographs and from observers' notes. Hence most attention has until recently concentrated on three coastal genera – Pleurobrachia, Beroe and Mnemiopsis. At least two textbooks base their descriptions of ctenophores on the cydippid Pleurobrachia. Since the body of many species is almost radially symmetrical, the main axis is oral to aboral (from the mouth to the opposite end). However, since only two of the canals near the statocyst terminate in anal pores, ctenophores have no mirror-symmetry, although many have rotational symmetry. In other words, if the animal rotates in a half-circle it looks the same as when it started. Common features The ctenophore phylum has a wide range of body forms, including the flattened, deep-sea platyctenids, in which the adults of most species lack combs, and the coastal beroids, which lack tentacles and prey on other ctenophores by using huge mouths armed with groups of large, stiffened cilia that act as teeth. Body layers Like those of cnidarians (jellyfish, sea anemones, etc.), ctenophores' bodies consist of a relatively thick, jelly-like mesoglea sandwiched between two epithelia, layers of cells bound by inter-cell connections and by a fibrous basement membrane that they secrete. The epithelia of ctenophores have two layers of cells rather than one, and some of the cells in the upper layer have several cilia per cell. The outer layer of the epidermis (outer skin) consists of: sensory cells; cells that secrete mucus, which protects the body; and interstitial cells, which can transform into other types of cell. In specialized parts of the body, the outer layer also contains colloblasts, found along the surface of tentacles and used in capturing prey, or cells bearing multiple large cilia, for locomotion. The inner layer of the epidermis contains a nerve net, and myoepithelial cells that act as muscles. The internal cavity forms: a mouth that can usually be closed by muscles; a pharynx ("throat"); a wider area in the center that acts as a stomach; and a system of internal canals. These branch through the mesoglea to the most active parts of the animal: the mouth and pharynx; the roots of the tentacles, if present; all along the underside of each comb row; and four branches around the sensory complex at the far end from the mouth – two of these four branches terminate in anal pores. The inner surface of the cavity is lined with an epithelium, the gastrodermis. The mouth and pharynx have both cilia and well-developed muscles.
In other parts of the canal system, the gastrodermis is different on the sides nearest to and furthest from the organ that it supplies. The nearer side is composed of tall nutritive cells that store nutrients in vacuoles (internal compartments), germ cells that produce eggs or sperm, and photocytes that produce bioluminescence. The side furthest from the organ is covered with ciliated cells that circulate water through the canals, punctuated by ciliary rosettes, pores that are surrounded by double whorls of cilia and connect to the mesoglea. Feeding, excretion and respiration When prey is swallowed, it is liquefied in the pharynx by enzymes and by muscular contractions of the pharynx. The resulting slurry is wafted through the canal system by the beating of the cilia, and digested by the nutritive cells. The ciliary rosettes in the canals may help to transport nutrients to muscles in the mesoglea. The anal pores may eject unwanted small particles, but most unwanted matter is regurgitated via the mouth. Little is known about how ctenophores get rid of waste products produced by the cells. The ciliary rosettes in the gastrodermis may help to remove wastes from the mesoglea, and may also help to adjust the animal's buoyancy by pumping water into or out of the mesoglea. Locomotion The outer surface usually bears eight comb rows, called swimming-plates, which are used for swimming. The rows are oriented to run from near the mouth (the "oral pole") to the opposite end (the "aboral pole"), and are spaced more or less evenly around the body, although spacing patterns vary by species and in most species the comb rows extend only part of the distance from the aboral pole towards the mouth. The "combs" (also called "ctenes" or "comb plates") run across each row, and each consists of thousands of unusually long cilia, up to about 2 mm long. Unlike conventional cilia and flagella, which have a filament structure arranged in a 9 + 2 pattern, these cilia are arranged in a 9 + 3 pattern, where the extra compact filament is suspected to have a supporting function. These normally beat so that the propulsion stroke is away from the mouth, although they can also reverse direction. Hence ctenophores usually swim in the direction in which the mouth is pointing, unlike jellyfish. When trying to escape predators, one species can accelerate to six times its normal speed; some other species reverse direction as part of their escape behavior, by reversing the power stroke of the comb plate cilia. It is uncertain how ctenophores control their buoyancy, but experiments have shown that some species rely on osmotic pressure to adapt to water of different densities. Their body fluids are normally as concentrated as seawater. If they enter less dense brackish water, the ciliary rosettes in the body cavity may pump this into the mesoglea to increase its bulk and decrease its density, to avoid sinking. Conversely, if they move from brackish to full-strength seawater, the rosettes may pump water out of the mesoglea to reduce its volume and increase its density. Nervous system and senses Ctenophores have no brain or central nervous system, but instead have a subepidermal nerve net (rather like a cobweb) that forms a ring round the mouth and is densest near structures such as the comb rows, pharynx, tentacles (if present) and the sensory complex furthest from the mouth.
Communication between nerve cells makes use of two different methods; some of the neurons are found to have synaptic connections, but the neurons in the nerve net are highly distinctive by being fused into a syncytium, rather than being connected by synapses. Some animals outside ctenophores also have fused nerve cells, but never to such a degree that they form a whole nerve net. Fossils show that Cambrian species had a more complex nervous system, with long nerves which connected with a ring around the mouth. The only known ctenophore with long nerves today is Euplokamis in the order Cydippida. Their nerve cells arise from the same progenitor cells as the colloblasts. In addition, there is a less organized mesogleal nerve net consisting of single neurites. The largest single sensory feature is the aboral organ (at the opposite end from the mouth), which is underlain by its own nerve net. This organ's main component is a statocyst, a balance sensor consisting of a statolith, a tiny grain of calcium carbonate, supported on four bundles of cilia, called "balancers", that sense its orientation. The statocyst is protected by a transparent dome made of long, immobile cilia. A ctenophore does not automatically try to keep the statolith resting equally on all the balancers. Instead, its response is determined by the animal's "mood", in other words, the overall state of the nervous system. For example, if a ctenophore with trailing tentacles captures prey, it will often put some comb rows into reverse, spinning the mouth towards the prey. Research supports the hypothesis that the ciliated larvae in cnidarians and bilaterians share an ancient and common origin. The larvae's apical organ is involved in the formation of the nervous system. The aboral organ of comb jellies is not homologous with the apical organ in other animals, and the formation of their nervous system therefore has a different embryonic origin. Ctenophore nerve cells and nervous systems have a different biochemistry compared to other animals. For instance, they lack the genes and enzymes required to manufacture neurotransmitters like serotonin, dopamine, nitric oxide, octopamine, noradrenaline, and others, otherwise seen in all other animals with a nervous system, with the genes coding for the receptors for each of these neurotransmitters missing. Monofunctional catalase (CAT), one of the three major families of antioxidant enzymes that target hydrogen peroxide, an important signaling molecule for synaptic and neuronal activity, is also absent, most likely due to gene loss. They have been found to use L-glutamate as a neurotransmitter, and have an unusually high variety of ionotropic glutamate receptors and genes for glutamate synthesis and transport compared to other metazoans. The genomic content of the nervous system genes is the smallest known of any animal, and could represent the minimum genetic requirements for a functional nervous system. The fact that portions of the nervous system feature directly fused neurons, without synapses, suggests that ctenophores might form a sister group to other metazoans, having developed a nervous system independently. If ctenophores are the sister group to all other metazoans, nervous systems may have either been lost in sponges and placozoans, or arisen more than once among metazoans.
Cydippids Cydippid ctenophores have bodies that are more or less rounded, sometimes nearly spherical and other times more cylindrical or egg-shaped; the common coastal "sea gooseberry", Pleurobrachia, sometimes has an egg-shaped body with the mouth at the narrow end, although some individuals are more uniformly round. From opposite sides of the body extends a pair of long, slender tentacles, each housed in a sheath into which it can be withdrawn. Some species of cydippids have bodies that are flattened to various extents so that they are wider in the plane of the tentacles. The tentacles of cydippid ctenophores are typically fringed with tentilla ("little tentacles"), although a few genera have simple tentacles without these side branches. The tentacles and tentilla are densely covered with microscopic colloblasts that capture prey by sticking to it. Colloblasts are specialized mushroom-shaped cells in the outer layer of the epidermis, and have three main components: a domed head with vesicles (chambers) that contain adhesive; a stalk that anchors the cell in the lower layer of the epidermis or in the mesoglea; and a spiral thread that coils round the stalk and is attached to the head and to the root of the stalk. The function of the spiral thread is uncertain, but it may absorb stress when prey tries to escape, and thus prevent the colloblast from being torn apart. One species, Minictena luteola, which measures only 1.5 mm in diameter, has five different types of colloblast cells. In addition to colloblasts, members of the genus Haeckelia, which feed mainly on jellyfish, incorporate their victims' stinging nematocytes into their own tentacles – some cnidaria-eating nudibranchs similarly incorporate nematocytes into their bodies for defense. The tentilla of Euplokamis differ significantly from those of other cydippids: they contain striated muscle, a cell type otherwise unknown in the phylum Ctenophora; and they are coiled when relaxed, while the tentilla of all other known ctenophores elongate when relaxed. Euplokamis tentilla have three types of movement that are used in capturing prey: they may flick out very quickly (in 40 to 60 milliseconds); they can wriggle, which may lure prey by behaving like small planktonic worms; and they coil round prey. The unique flicking is an uncoiling movement powered by contraction of the striated muscle. The wriggling motion is produced by smooth muscles, but of a highly specialized type. Coiling around prey is accomplished largely by the return of the tentilla to their inactive state, but the coils may be tightened by smooth muscle. There are eight rows of combs that run from near the mouth to the opposite end, and are spaced evenly round the body. The "combs" beat in a metachronal rhythm rather like that of a Mexican wave. From each balancer in the statocyst a ciliary groove runs out under the dome and then splits to connect with two adjacent comb rows, and in some species runs along the comb rows. This forms a mechanical system for transmitting the beat rhythm from the combs to the balancers, via water disturbances created by the cilia. Lobates The Lobata has a pair of lobes, which are muscular, cuplike extensions of the body that project beyond the mouth. Their inconspicuous tentacles originate from the corners of the mouth, running in convoluted grooves and spreading out over the inner surface of the lobes (rather than trailing far behind, as in the Cydippida).
Between the lobes on either side of the mouth, many species of lobates have four auricles, gelatinous projections edged with cilia that produce water currents that help direct microscopic prey toward the mouth. This combination of structures enables lobates to feed continuously on suspended planktonic prey. Lobates have eight comb-rows, originating at the aboral pole and usually not extending beyond the body to the lobes; in species with (four) auricles, the cilia edging the auricles are extensions of cilia in four of the comb rows. Most lobates are quite passive when moving through the water, using the cilia on their comb rows for propulsion, although Leucothea has long and active auricles whose movements also contribute to propulsion. Members of the lobate genera Bathocyroe and Ocyropsis can escape from danger by clapping their lobes, so that the jet of expelled water drives them back very quickly. Unlike cydippids, the movements of lobates' combs are coordinated by nerves rather than by water disturbances created by the cilia, yet combs on the same row beat in the same Mexican wave style as the mechanically coordinated comb rows of cydippids and beroids. This may have enabled lobates to grow larger than cydippids and to have less egg-like shapes. An unusual species first described in 2000, Lobatolampea tetragona, has been classified as a lobate, although the lobes are "primitive" and the body is medusa-like when floating and disk-like when resting on the sea-bed. Beroids The Beroida, also known as Nuda, have no feeding appendages, but their large pharynx, just inside the large mouth and filling most of the saclike body, bears "macrocilia" at the oral end. These fused bundles of several thousand large cilia are able to "bite" off pieces of prey that are too large to swallow whole – almost always other ctenophores. In front of the field of macrocilia, on the mouth "lips" in some species of Beroe, is a pair of narrow strips of adhesive epithelial cells on the stomach wall that "zip" the mouth shut when the animal is not feeding, by forming intercellular connections with the opposite adhesive strip. This tight closure streamlines the front of the animal when it is pursuing prey. Other body forms The Ganeshida has a pair of small oral lobes and a pair of tentacles. The body is circular rather than oval in cross-section, and the pharynx extends over the inner surfaces of the lobes. The Thalassocalycida, only discovered in 1978 and known from only one species, are medusa-like, with bodies that are shortened in the oral-aboral direction, and short comb-rows on the surface furthest from the mouth, originating from near the aboral pole. They capture prey by movements of the bell and possibly by using two short tentacles. The Cestida ("belt animals") are ribbon-shaped planktonic animals, with the mouth and aboral organ aligned in the middle of opposite edges of the ribbon. There is a pair of comb-rows along each aboral edge, and tentilla emerging from a groove all along the oral edge, which stream back across most of the wing-like body surface. Cestids can swim by undulating their bodies as well as by the beating of their comb-rows. There are two known species, with worldwide distribution in warm, and warm-temperate waters: Cestum veneris ("Venus' girdle") is among the largest ctenophores – up to long, and can undulate slowly or quite rapidly. Velamen parallelum, which is typically less than long, can move much faster in what has been described as a "darting motion". 
Most Platyctenida have oval bodies that are flattened in the oral-aboral direction, with a pair of tentilla-bearing tentacles on the aboral surface. They cling to and creep on surfaces by everting the pharynx and using it as a muscular "foot". All but one of the known platyctenid species lack comb-rows. Platyctenids are usually cryptically colored, live on rocks, algae, or the body surfaces of other invertebrates, and are often revealed by their long tentacles with many side branches, seen streaming off the back of the ctenophore into the current. Reproduction and development Adults of most species can regenerate tissues that are damaged or removed, although only platyctenids reproduce by cloning, splitting off from the edges of their flat bodies fragments that develop into new individuals. Lab research on Mnemiopsis leidyi also shows that when two individuals have parts of their bodies removed, they are able to fuse together, including their nervous and digestive systems, even when the two individuals are genetically different, a phenomenon that has so far only been found in comb jellies. The last common ancestor (LCA) of the ctenophores was hermaphroditic. Some are simultaneous hermaphrodites, which can produce both eggs and sperm at the same time, while others are sequential hermaphrodites, in which the eggs and sperm mature at different times. There is no metamorphosis. At least three species are known to have evolved separate sexes (dioecy): Ocyropsis crystallina and Ocyropsis maculata in the genus Ocyropsis and Bathocyroe fosteri in the genus Bathocyroe. The gonads are located in the parts of the internal canal network under the comb rows, and eggs and sperm are released via pores in the epidermis. Fertilization is generally external, but platyctenids use internal fertilization and keep the eggs in brood chambers until they hatch. Self-fertilization has occasionally been seen in species of the genus Mnemiopsis, and it is thought that most of the hermaphroditic species are self-fertile. Development of the fertilized eggs is direct; there is no distinctive larval form. Juveniles of all groups are generally planktonic, and most species resemble miniature adult cydippids, gradually developing their adult body forms as they grow. In the genus Beroe, however, the juveniles have large mouths and, like the adults, lack both tentacles and tentacle sheaths. In some groups, such as the flat, bottom-dwelling platyctenids, the juveniles behave more like true larvae. They live among the plankton and thus occupy a different ecological niche from their parents, only attaining the adult form by a more radical ontogeny after dropping to the sea-floor. At least in some species, juvenile ctenophores appear capable of producing small quantities of eggs and sperm while they are well below adult size, and adults produce eggs and sperm for as long as they have sufficient food. If they run short of food, they first stop producing eggs and sperm, and then shrink in size. When the food supply improves, they grow back to normal size and then resume reproduction. These features make ctenophores capable of increasing their populations very quickly. Members of the Lobata and Cydippida also have a form of reproduction called dissogeny: two sexually mature stages, first as larvae and later as juveniles and adults. During their time as larvae they are capable of releasing gametes periodically. After their first reproductive period is over, they do not produce more gametes until later. 
A population of Mertensia ovum in the central Baltic Sea has become paedogenetic, and consists solely of sexually mature larvae less than 1.6 mm in size. In Mnemiopsis leidyi, nitric oxide (NO) signaling is present both in adult tissues and differentially expressed in later embryonic stages, suggesting the involvement of NO in developmental mechanisms. The mature form of the same species is also able to revert to the cydippid stage when triggered by environmental stressors. Colors and bioluminescence Most ctenophores that live near the surface are mostly colorless and almost transparent. However, some deeper-living species are strongly pigmented, for example the species known as "Tortugas red" (see illustration here), which has not yet been formally described. Platyctenids generally live attached to other sea-bottom organisms, and often have similar colors to these host organisms. The gut of the deep-sea genus Bathocyroe is red, which hides the bioluminescence of copepods it has swallowed. The comb rows of most planktonic ctenophores produce a rainbow effect, which is not caused by bioluminescence but by the scattering of light as the combs move. Most species are also bioluminescent, but the light is usually blue or green and can only be seen in darkness. However, some significant groups, including all known platyctenids and the cydippid genus Pleurobrachia, are incapable of bioluminescence. When some species, including Bathyctena chuni, Euplokamis stationis and Eurhamphaea vexilligera, are disturbed, they produce secretions (ink) that luminesce at much the same wavelengths as their bodies. Juveniles will luminesce more brightly in relation to their body size than adults, whose luminescence is diffused over their bodies. Detailed statistical investigation has not suggested the function of ctenophores' bioluminescence nor produced any correlation between its exact color and any aspect of the animals' environments, such as depth or whether they live in coastal or mid-ocean waters. In ctenophores, bioluminescence is caused by the activation of calcium-activated proteins named photoproteins in cells called photocytes, which are often confined to the meridional canals that underlie the eight comb rows. In the genome of Mnemiopsis leidyi ten genes encode photoproteins. These genes are co-expressed with opsin genes in the developing photocytes of Mnemiopsis leidyi, raising the possibility that light production and light detection may be working together in these animals. Ecology Distribution Ctenophores are found in most marine environments: from polar waters at −2 °C to the tropics at 30 °C; near coasts and in mid-ocean; from the surface waters to the ocean depths at more than 7000 meters. The best-understood are the genera Pleurobrachia, Beroe and Mnemiopsis, as these planktonic coastal forms are among the most likely to be collected near shore. No ctenophores have been found in fresh water. In 2013 Mnemiopsis was recorded in Lake Birket Qarun, and in 2014 in Lake El Rayan II, both near Faiyum in Egypt, where they were accidentally introduced by the transport of fish (mullet) fry. Though many species prefer brackish waters like estuaries and coastal lagoons in open connection with the sea, this was the first record from an inland environment. Both lakes are saline, with Birket Qarun being hypersaline, showing that some ctenophores can establish themselves in saline limnic environments without a connection to the ocean. In the long run, the populations are not expected to survive. 
The two limiting factors in saline lakes are availability of food and a varied diet, and high temperatures during hot summers. Because a parasitic isopod, Livoneca redmanii, was introduced at the same time, it is difficult to say how much of the ecological impact of invasive species is caused by the ctenophore alone. Ctenophores may be abundant during the summer months in some coastal locations, but in other places, they are uncommon and difficult to find. In bays where they occur in very high numbers, predation by ctenophores may control the populations of small zooplanktonic organisms such as copepods, which might otherwise wipe out the phytoplankton (planktonic plants), which are a vital part of marine food chains. Prey and predators Almost all ctenophores are predators – there are no vegetarians and only one genus that is partly parasitic. If food is plentiful, they can eat 10 times their own weight per day. While Beroe preys mainly on other ctenophores, other surface-water species prey on zooplankton (planktonic animals) ranging in size from the microscopic, including mollusc and fish larvae, to small adult crustaceans such as copepods, amphipods, and even krill. Members of the genus Haeckelia prey on jellyfish and incorporate their prey's nematocysts (stinging cells) into their own tentacles instead of colloblasts. Ctenophores have been compared to spiders in their wide range of techniques for capturing prey – some hang motionless in the water using their tentacles as "webs", some are ambush predators like Salticid jumping spiders, and some dangle a sticky droplet at the end of a fine thread, as bolas spiders do. This variety explains the wide range of body forms in a phylum with rather few species. The two-tentacled "cydippid" Lampea feeds exclusively on salps, close relatives of sea-squirts that form large chain-like floating colonies, and juveniles of Lampea attach themselves like parasites to salps that are too large for them to swallow. Members of the cydippid genus Pleurobrachia and the lobate Bolinopsis often reach high population densities at the same place and time because they specialize in different types of prey: Pleurobrachias long tentacles mainly capture relatively strong swimmers such as adult copepods, while Bolinopsis generally feeds on smaller, weaker swimmers such as rotifers and mollusc and crustacean larvae. Ctenophores used to be regarded as "dead ends" in marine food chains because it was thought their low ratio of organic matter to salt and water made them a poor diet for other animals. It is also often difficult to identify the remains of ctenophores in the guts of possible predators, although the combs sometimes remain intact long enough to provide a clue. Detailed investigation of chum salmon, Oncorhynchus keta, showed that these fish digest ctenophores 20 times as fast as an equal weight of shrimps, and that ctenophores can provide a good diet if there are enough of them around. Beroids prey mainly on other ctenophores. Some jellyfish and turtles eat large quantities of ctenophores, and jellyfish may temporarily wipe out ctenophore populations. Since ctenophores and jellyfish often have large seasonal variations in population, most fish that prey on them are generalists and may have a greater effect on populations than the specialist jelly-eaters. This is underlined by an observation of herbivorous fishes deliberately feeding on gelatinous zooplankton during blooms in the Red Sea. 
The larvae of some sea anemones are parasites on ctenophores, as are the larvae of some flatworms that parasitize fish when they reach adulthood. Ecological impacts Most species are hermaphrodites, and juveniles of at least some species are capable of reproduction before reaching the adult size and shape. This combination of hermaphroditism and early reproduction enables small populations to grow at an explosive rate. Ctenophores may balance marine ecosystems by preventing an over-abundance of copepods from eating all the phytoplankton (planktonic plants), which are the dominant marine producers of organic matter from non-organic ingredients. On the other hand, in the late 1980s the Western Atlantic ctenophore Mnemiopsis leidyi was accidentally introduced into the Black Sea and Sea of Azov via the ballast tanks of ships, and has been blamed for causing sharp drops in fish catches by eating both fish larvae and small crustaceans that would otherwise feed the adult fish. Mnemiopsis is well equipped to invade new territories (although this was not predicted until after it so successfully colonized the Black Sea), as it can breed very rapidly and tolerate a wide range of water temperatures and salinities. The impact was increased by chronic overfishing, and by eutrophication that gave the entire ecosystem a short-term boost, causing the Mnemiopsis population to increase even faster than normal – and above all by the absence of efficient predators on these introduced ctenophores. Mnemiopsis populations in those areas were eventually brought under control by the accidental introduction of the Mnemiopsis-eating North American ctenophore Beroe ovata, and by a cooling of the local climate from 1991 to 1993, which significantly slowed the animal's metabolism. However, the abundance of plankton in the area seems unlikely to be restored to pre-Mnemiopsis levels. In the late 1990s Mnemiopsis appeared in the Caspian Sea. Beroe ovata arrived shortly after, and is expected to reduce but not eliminate the impact of Mnemiopsis there. Mnemiopsis also reached the eastern Mediterranean in the late 1990s and now appears to be thriving in the North Sea and Baltic Sea. Taxonomy The number of known living ctenophore species is uncertain since many of those named and formally described have turned out to be identical to species known under other scientific names. Claudia Mills estimates that there are about 100–150 valid species that are not duplicates, and that at least another 25, mostly deep-sea forms, have been recognized as distinct but not yet analyzed in enough detail to support a formal description and naming. Early classification Early writers combined ctenophores with cnidarians into a single phylum called Coelenterata on account of morphological similarities between the two groups. Like cnidarians, the bodies of ctenophores consist of a mass of jelly, with one layer of cells on the outside and another lining the internal cavity. In ctenophores, however, these layers are two cells deep, while those in cnidarians are only a single cell deep. Ctenophores also resemble cnidarians in relying on water flow through the body cavity for both digestion and respiration, as well as in having a decentralized nerve net rather than a brain. 
Genomic studies have suggested that the neurons of Ctenophora, which differ in many ways from other animal neurons, evolved independently from those of the other animals, and increasing awareness of the differences between the comb jellies and the other Coelenterata has persuaded more recent authors to classify the two as separate phyla. The position of the ctenophores in the evolutionary family tree of animals has long been debated, and the majority view at present, based on molecular phylogenetics, is that cnidarians and bilaterians are more closely related to each other than either is to ctenophores. Modern taxonomy The traditional classification divides ctenophores into two classes, those with tentacles (Tentaculata) and those without (Nuda). The Nuda contains only one order (Beroida) and family (Beroidae), and two genera, Beroe (several species) and Neis (one species). The Tentaculata are divided into the following eight orders: Cydippida, egg-shaped animals with long tentacles Lobata, with paired thick lobes Platyctenida, flattened animals that live on or near the sea-bed; most lack combs as adults, and use their pharynges as suckers to attach themselves to surfaces Ganeshida, with a pair of small lobes round the mouth, but an extended pharynx like that of platyctenids Cambojiida Cryptolobiferida Thalassocalycida, with short tentacles and a jellyfish-like "umbrella" Cestida, ribbon-shaped and the largest ctenophores In addition, several fossil genera are considered to belong to the ctenophore stem group. Evolutionary history Despite their fragile, gelatinous bodies, fossils thought to represent ctenophores – apparently with no tentacles but many more comb-rows than modern forms – have been found in Lagerstätten as far back as the early Cambrian, about . Nevertheless, a recent molecular phylogenetics analysis concludes that the common ancestor originated approximately 350 million years ago ± 88 million years ago, conflicting with previous estimates which suggest it occurred after the Cretaceous–Paleogene extinction event. Fossil record Because of their soft, gelatinous bodies, ctenophores are extremely rare as fossils, and fossils that have been interpreted as ctenophores have been found only in Lagerstätten, places where the environment was exceptionally suited to the preservation of soft tissue. Until the mid-1990s, only two specimens good enough for analysis were known, both members of the crown group, from the early Devonian (Emsian) period. Three additional putative species were then found in the Burgess Shale and other Canadian rocks of similar age, about in the mid-Cambrian period. All three lacked tentacles but had between 24 and 80 comb rows, far more than the eight typical of living species. They also appear to have had internal organ-like structures unlike anything found in living ctenophores. One of the fossil species first reported in 1996 had a large mouth, apparently surrounded by a folded edge that may have been muscular. Evidence from China a year later suggests that such ctenophores were widespread in the Cambrian, but perhaps very different from modern species – for example one fossil's comb-rows were mounted on prominent vanes. The youngest fossil of a species outside the crown group is Daihuoides from the late Devonian, which belongs to a basal group that was assumed to have gone extinct more than 140 million years earlier. The Ediacaran Eoandromeda could putatively represent a comb jelly. It has eightfold symmetry, with eight spiral arms resembling the comblike rows of a ctenophore. 
If it is indeed a ctenophore, it places the group close to the origin of the Bilateria. The early Cambrian sessile frond-like fossil Stromatoveris, from China's Chengjiang lagerstätte and dated to about , is very similar to Vendobionta of the preceding Ediacaran period. De-Gan Shu, Simon Conway Morris, et al. found on its branches what they considered rows of cilia, used for filter feeding. They suggested that Stromatoveris was an evolutionary "aunt" of ctenophores, and that ctenophores originated from sessile animals whose descendants became swimmers and changed the cilia from a feeding mechanism to a propulsion system. Other Cambrian fossils that support the idea of ctenophores having evolved from sessile forms are Dinomischus, Daihua, Xianguangia and Siphusauctum, which also lived on the seafloor, had organic skeletons and cilia-covered tentacles surrounding their mouths, and have been found by cladistic analysis to be members of the ctenophore stem-group. 520-million-year-old Cambrian fossils, also from Chengjiang in China, show a now wholly extinct class of ctenophore, named "Scleroctenophora", that had a complex internal skeleton with long spines. The skeleton also supported eight soft-bodied flaps, which could have been used for swimming and possibly feeding. One form, Thaumactena, had a streamlined body resembling that of arrow worms and could have been an agile swimmer. Relationship to other animal groups The phylogenetic relationship of ctenophores to the rest of Metazoa is very important to our understanding of the early evolution of animals and the origin of multicellularity. It has been the focus of debate for many years. Ctenophores have been purported to be the sister lineage to the Bilateria, sister to the Cnidaria, Placozoa, and Bilateria, and sister to all other animals. Walter Garstang in his book Larval Forms and Other Zoological Verses (Mülleria and the Ctenophore) even expressed a theory that ctenophores were descended from a neotenic Mülleria larva of a polyclad. A series of studies that looked at the presence and absence of members of gene families and signalling pathways (e.g., homeoboxes, nuclear receptors, the Wnt signaling pathway, and sodium channels) showed evidence congruent with the latter two scenarios, that ctenophores are either sister to Cnidaria, Placozoa, and Bilateria or sister to all other animal phyla. Several more recent studies comparing complete sequenced genomes of ctenophores with other sequenced animal genomes have also supported ctenophores as the sister lineage to all other animals. This position would suggest that neural and muscle cell types either were lost in major animal lineages (e.g., Porifera and Placozoa) or evolved independently in the ctenophore lineage. Other researchers have argued that the placement of Ctenophora as sister to all other animals is a statistical anomaly caused by the high rate of evolution in ctenophore genomes, and that Porifera (sponges) is the earliest-diverging animal taxon instead. They also have extremely high rates of mitochondrial evolution, and the smallest known RNA/protein content of the mtDNA genome in animals. As such, the Ctenophora appear to be a basal diploblast clade. In agreement with the latter point, the analysis of a very large sequence alignment at the metazoan taxonomic scale (1,719 proteins totalizing amino acid positions) showed that ctenophores emerge as the second-earliest branching animal lineage, and sponges are sister-group to all other multicellular animals. 
Also, research on mucin genes, which allow an animal to produce mucus, shows that sponges have never had them, while all other animals, including comb jellies, appear to share genes with a common origin. It has also been revealed that, despite all their differences, ctenophoran neurons share the same foundation as cnidarian neurons, after findings showed that peptide-expressing neurons are probably ancestral to chemical neurotransmitters. Yet another study strongly rejects the hypothesis that sponges are the sister group to all other extant animals and establishes the placement of Ctenophora as the sister group to all other animals; its disagreement with the last-mentioned paper is explained by methodological problems in the analyses in that work. Neither ctenophores nor sponges possess HIF pathways; their genomes express only a single type of voltage-gated calcium channel, unlike other animals, which have three types; and they are the only known animal phyla that lack any true Hox genes. A few species from other phyla – the nemertean pilidium larva, the larva of the phoronid species Phoronopsis harmeri and the acorn worm larva Schizocardium californicum – do not depend on Hox genes in their larval development either, but need them during metamorphosis to reach their adult form. Innexin genes, which code for proteins used for intercellular communication in animals, also appear to have evolved independently in ctenophores. Relationships within Ctenophora Relationships within Ctenophora (2001). Relationships within Ctenophora (2017). Since all modern ctenophores except the beroids have cydippid-like larvae, it has widely been assumed that their last common ancestor also resembled cydippids, having an egg-shaped body and a pair of retractable tentacles. Richard Harbison's purely morphological analysis in 1985 concluded that the cydippids are not monophyletic, in other words do not contain all and only the descendants of a single common ancestor that was itself a cydippid. Instead, he found that various cydippid families were more similar to members of other ctenophore orders than to other cydippids. He also suggested that the last common ancestor of modern ctenophores was either cydippid-like or beroid-like. A molecular phylogeny analysis in 2001, using 26 species, including four recently discovered ones, confirmed that the cydippids are not monophyletic and concluded that the last common ancestor of modern ctenophores was cydippid-like. It also found that the genetic differences between these species were very small – so small that the relationships between the Lobata, Cestida and Thalassocalycida remained uncertain. This suggests that the last common ancestor of modern ctenophores was relatively recent, and perhaps survived the Cretaceous–Paleogene extinction event while other lineages perished. When the analysis was broadened to include representatives of other phyla, it concluded that cnidarians are probably more closely related to bilaterians than either group is to ctenophores, but that this conclusion is uncertain. A more recent 2017 study corroborates the paraphyly of Cydippida but also finds that Lobata is paraphyletic with respect to Cestida. Beyond this, though, there are still many gaps in the ctenophore tree despite intensive study. Several families and orders do not have any species with complete genomes, and as such their placement remains undetermined.
Biology and health sciences
Other
Animals
62263
https://en.wikipedia.org/wiki/Macaw
Macaw
Macaws are a group of New World parrots that are long-tailed and often colorful, in the tribe Arini. They are popular in aviculture or as companion parrots, although there are conservation concerns about several species in the wild. Biology Of the many different Psittacidae (true parrots) genera, six are classified as macaws: Ara, Anodorhynchus, Cyanopsitta, Primolius, Orthopsittaca, and Diopsittaca. Previously, the members of the genus Primolius were placed in Propyrrhura, but the former is correct in accordance with ICZN rules. In addition, the related macaw-like thick-billed parrot is sometimes referred to as a "macaw", although it is not phylogenetically considered to be a macaw species. Macaws are native to Central America and North America (only Mexico), South America, and formerly the Caribbean. Most species are associated with forests, but others prefer woodland or savannah-like habitats. Proportionately larger beaks, long tails, and relatively bare, light-coloured medial (facial patch) areas distinguish macaws from other parrots. Sometimes the facial patch is smaller in some species and limited to a yellow patch around the eyes and a second patch near the base of the beak in the members of the genus Anodorhynchus. A macaw's facial feather pattern is as unique as a fingerprint. The largest macaws are the hyacinth, Buffon's (great green) and green-winged macaws. While still relatively large parrots, mini-macaws of the genera Cyanopsitta, Orthopsittaca and Primolius are significantly smaller than the members of Anodorhynchus and Ara. The smallest member of the family, the red-shouldered macaw, is no larger than some parakeets of the genus Aratinga. Macaws, like other parrots, toucans and woodpeckers, are zygodactyl, having their first and fourth toes pointing backward. Species in taxonomic order There are 19 species of macaws, including extinct and critically endangered species. In addition, there are several hypothetical extinct species that have been proposed based on very little evidence. Anodorhynchus Glaucous macaw, Anodorhynchus glaucus (critically endangered or extinct) Hyacinth macaw, Anodorhynchus hyacinthinus Indigo macaw or Lear's macaw, Anodorhynchus leari Cyanopsitta Little blue macaw or Spix's macaw, Cyanopsitta spixii (probably extinct in the wild) Ara Blue-and-yellow macaw or blue-and-gold macaw, Ara ararauna Blue-throated macaw, Ara glaucogularis Military macaw, Ara militaris Great green macaw or Buffon's macaw, Ara ambiguus Scarlet macaw or Aracanga, Ara macao Red-and-green macaw or green-winged macaw, Ara chloropterus Red-fronted macaw, Ara rubrogenys Chestnut-fronted macaw or severe macaw, Ara severus †Cuban red macaw, Ara tricolor (extinct) †Saint Croix macaw, Ara autochthones (extinct) Orthopsittaca Red-bellied macaw, Orthopsittaca manilatus Primolius Blue-headed macaw, Primolius couloni Blue-winged macaw or Illiger's macaw, Primolius maracana Golden-collared macaw, Primolius auricollis Diopsittaca Red-shouldered macaw or Hahn's macaw, Diopsittaca nobilis Hypothetical extinct species Several hypothetical extinct species of macaws have been postulated based on little evidence, and they may have been subspecies, or familiar parrots that were imported onto an island and later wrongly presumed to have a separate identity. 
Martinique macaw, Ara martinica, Rothschild 1905 Lesser Antillean macaw, Ara guadeloupensis, Clark, 1905 Jamaican green-and-yellow macaw, Ara erythrocephala, Rothschild 1905 Jamaican red macaw, Ara gossei, Rothschild 1905 Dominican green-and-yellow macaw, Ara atwoodi, Clark, 1905 Extinctions and conservation status The majority of macaws are now endangered in the wild and a few are extinct. The Spix's macaw is now probably extinct in the wild. The glaucous macaw is also probably extinct, with only two reliable records of sightings in the 20th century. The greatest problems threatening the macaw population are the rapid rate of deforestation and illegal trapping for the bird trade. Prehistoric Native Americans in the American Southwest farmed macaws in establishments known as "feather factories". International trade of all macaw species is regulated by the Convention on International Trade in Endangered Species of Wild Flora and Fauna (CITES). Some species of macaws—the scarlet macaw (Ara macao) as an example—are listed in the CITES Appendix I and may not be lawfully traded for commercial purposes. Other species, such as the red-shouldered macaw (Diopsittaca nobilis), are listed in Appendix II and may legally be traded commercially provided that certain controls are in place, including a non-detriment finding, establishment of an export quota, and issuing of export permits. Hybrids Sometimes macaws are hybridized for the pet trade. Aviculturists have reported an over-abundance of female blue-and-yellow macaws in captivity, which differs from the general rule with captive macaws and other parrots, where the males are more abundant. This would explain why the blue and gold is the most commonly hybridised macaw, and why the hybridising trend took hold among macaws. Common macaw hybrids include the harlequin (Ara ararauna × Ara chloroptera), miligold macaw (Ara ararauna × Ara militaris) and the Catalina (known as the rainbow in Australia, Ara ararauna × Ara macao). In addition, unusual but apparently healthy intergeneric hybrids between the hyacinth macaw and several of the larger Ara macaws have also occasionally been seen in captivity. Another, much rarer, occurrence of a second-generation hybrid (F2) is the miliquin macaw (harlequin and military macaws). Diet and clay licks Macaws eat a variety of foods including seeds, nuts, fruits, palm fruits, leaves, flowers, and stems. Safe vegetables include asparagus, beets, bell peppers, broccoli, butternut, carrots, corn on the cob, dandelion greens, collard greens, hot peppers, spinach, sweet potatoes, tomatoes and zucchini. Wild species may forage widely, over for some of the larger species such as Ara araurana (blue and yellow macaw) and Ara ambigua (great green macaw), in search of seasonally available foods. Some foods eaten by macaws in certain regions in the wild are said to contain toxic or caustic substances which they are able to digest. It has been suggested that parrots and macaws in the Amazon Basin eat clay from exposed river banks to neutralize these toxins. In the western Amazon hundreds of macaws and other parrots descend to exposed river banks to consume clay on an almost daily basis – except on rainy days. Donald Brightsmith, the principal investigator of The Macaw Society, located at the Tambopata Research Center (TRC) in Peru, has studied the clay eating behaviour of parrots at clay licks in Peru. 
He and fellow investigators found that the soils macaws choose to consume at the clay licks do not have higher levels of cation-exchange capacity (ability to absorb toxins) than that of unused areas of the clay licks and thus the parrots could not be using the clay to neutralize ingested food toxins. Rather, the macaws and other bird and animal species prefer clays with higher levels of sodium. Sodium is a vital element that is scarce in environments greater than 100 kilometres from the ocean. The distribution of clay licks across South America further supports this hypothesis – as the largest and most species-rich clay licks are found on the western side of the Amazon Basin far from oceanic influences. Salt-enriched (NaCl) oceanic aerosols are the main source of environmental sodium near coasts and this decreases drastically farther inland. Clay-eating behaviour by macaws is not seen outside the western Amazon region, even though macaws in these areas consume some toxic foods such as the seeds of Hura crepitans, or sandbox tree, which have toxic sap. Species of parrot that consume more seeds, which potentially have more toxins, do not use clay licks more than species that eat a greater proportion of flowers or fruit in their diets. Studies at TRC have shown a correlation between clay-lick use and the breeding season. Contents of nestling crop samples show a high percentage of clay fed to them by their parents. Calcium for egg development – another hypothesis – does not appear to be a reason for geophagy during this period as peak usage is after the hatching of eggs. Another theory is that the birds, as well as other herbivorous animals, use the clay licks as a source of cobalamin, otherwise known as vitamin B12. Relationship with humans Macaws and their feathers have attracted the attention of people throughout history, most notably in pre-Columbian civilizations such as the Inca, Wari', and Nazca. Macaw feathers were highly desired for their bright colors and acquired through hunting and trade. Feathers were often used as adornment and were found at both ceremonial and burial sites. South American weavers have used their feathers to create a number of textiles, most notably feathered panels and tabards. Due to the fragile nature of the feathers, many of these pieces have begun to deteriorate over time. Gallery
Biology and health sciences
Psittaciformes
Animals
62289
https://en.wikipedia.org/wiki/Monosodium%20glutamate
Monosodium glutamate
Monosodium glutamate (MSG), also known as sodium glutamate, is a sodium salt of glutamic acid. MSG is found naturally in some foods including tomatoes and cheese in this glutamic acid form. MSG is used in cooking as a flavor enhancer with a savory taste that intensifies the umami flavor of food, as naturally occurring glutamate does in foods such as stews and meat soups. MSG was first prepared in 1908 by Japanese biochemist Kikunae Ikeda, who tried to isolate and duplicate the savory taste of kombu, an edible seaweed used as a broth (dashi) for Japanese cuisine. MSG balances, blends, and rounds the perception of other tastes. MSG, along with disodium ribonucleotides, is commonly used and found in stock (bouillon) cubes, soups, ramen, gravy, stews, condiments, savory snacks, etc. The U.S. Food and Drug Administration has given MSG its generally recognized as safe (GRAS) designation. It is a popular misconception that MSG can cause headaches and other feelings of discomfort, known as "Chinese restaurant syndrome". Several blinded studies show no such effects when MSG is combined with food in normal concentrations, and are inconclusive when MSG is added to broth in large concentrations. The European Union classifies it as a food additive permitted in certain foods and subject to quantitative limits. MSG has the HS code 2922.42 and the E number E621. Use Pure MSG is reported not to have a highly pleasant taste until it is combined with a savory aroma. The basic sensory function of MSG is attributed to its ability to enhance savory taste-active compounds when added in the proper concentration. The optimal concentration varies by food; in clear soup, the "pleasure score" rapidly falls with the addition of more than one gram of MSG per 100mL. The sodium content (in mass percent) of MSG, 12.28%, is about one-third of that in sodium chloride (39.34%), due to the greater mass of the glutamate counterion. Although other salts of glutamate have been used in low-salt soups, they are less palatable than MSG. Food scientist Steve Witherly noted in 2017 that MSG may promote healthy eating by enhancing the flavor of food such as kale while reducing the use of salt. The ribonucleotide food additives disodium inosinate (E631) and disodium guanylate (E627), as well as conventional salt, are usually used with monosodium glutamate-containing ingredients as they seem to have a synergistic effect. "Super salt" is a mixture of 9 parts salt, to one part MSG and 0.1 parts disodium ribonucleotides (a mixture of disodium inosinate and disodium guanylate). Safety MSG is generally recognized as safe to eat. A popular belief is that MSG can cause headaches and other feelings of discomfort, but blinded tests have not provided strong evidence of this. International bodies governing food additives currently consider MSG safe for human consumption as a flavor enhancer. Under normal conditions, humans can metabolize relatively large quantities of glutamate, which is naturally produced in the gut in the course of protein hydrolysis. The median lethal dose (LD50) is between 15 and 18 g/kg body weight in rats and mice, respectively, five times the LD50 of table salt (3 g/kg in rats). The use of MSG as a food additive and the natural levels of glutamic acid in foods are not of toxic concern in humans. Specifically MSG in the diet does not increase glutamate in the brain or affect brain function. 
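The "about one-third" comparison above follows directly from the molar masses involved. The short Python sketch below is an illustrative check only, not taken from the source; it assumes standard atomic masses and treats commercial MSG as the monohydrate (C5H8NO4Na·H2O) described in the Chemical properties section.

```python
# Illustrative check of the sodium mass-percent figures quoted above.
# Assumptions: standard atomic masses; commercial MSG taken as the monohydrate C5H8NO4Na.H2O.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Na": 22.990, "Cl": 35.45}

def molar_mass(composition):
    """Sum atomic masses for a composition given as {element: atom count}."""
    return sum(ATOMIC_MASS[element] * count for element, count in composition.items())

msg_monohydrate = {"C": 5, "H": 10, "N": 1, "O": 5, "Na": 1}  # C5H8NO4Na plus one H2O
table_salt = {"Na": 1, "Cl": 1}

na_in_msg = ATOMIC_MASS["Na"] / molar_mass(msg_monohydrate)   # ~0.123, i.e. ~12.3%
na_in_salt = ATOMIC_MASS["Na"] / molar_mass(table_salt)       # ~0.393, i.e. ~39.3%

print(f"Sodium in MSG (monohydrate): {na_in_msg:.2%}")
print(f"Sodium in NaCl:              {na_in_salt:.2%}")
print(f"Ratio MSG/NaCl:              {na_in_msg / na_in_salt:.2f}  (about one-third)")
```

Running the sketch reproduces the quoted figures to within rounding: roughly 12.3% sodium in MSG versus 39.3% in table salt, a ratio of about 0.31.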
A 1995 report from the Federation of American Societies for Experimental Biology (FASEB) for the United States Food and Drug Administration (FDA) concluded that MSG is safe when "eaten at customary levels" and, although a subgroup of otherwise-healthy individuals develop an MSG symptom complex when exposed to 3 g of MSG in the absence of food, MSG as a cause has not been established because the symptom reports are anecdotal. According to the report, no data supports the role of glutamate in chronic disease. High quality evidence has failed to demonstrate a relationship between the MSG symptom complex and actual MSG consumption. No association has been demonstrated, and the few responses were inconsistent. No symptoms were observed when MSG was used in food. Adequately controlling for experimental bias includes a blinded, placebo-controlled experimental design and administration by capsule, because of the unique aftertaste of glutamates. In a 1993 study, 71 fasting participants were given 5 g of MSG and then a standard breakfast. One reaction (to the placebo, in a self-identified MSG-sensitive individual) occurred. A study in 2000 tested the reaction of 130 subjects with a reported sensitivity to MSG. Multiple trials were performed, with subjects exhibiting at least two symptoms continuing. Two people out of the 130 responded to all four challenges. Because of the low prevalence, the researchers concluded that a response to MSG was not reproducible. Studies exploring MSG's role in obesity have yielded mixed results. Although several studies have investigated anecdotal links between MSG and asthma, current evidence does not support a causal association. The Food Standards Australia New Zealand (FSANZ) MSG technical report concludes, "There is no convincing evidence that MSG is a significant factor in causing systemic reactions resulting in severe illness or mortality. The studies conducted to date on Chinese restaurant syndrome (CRS) have largely failed to demonstrate a causal association with MSG. Symptoms resembling those of CRS may be provoked in a clinical setting in small numbers of individuals by the administration of large doses of MSG without food. However, such effects are neither persistent nor serious and are likely to be attenuated when MSG is consumed with food. In terms of more serious adverse effects such as the triggering of bronchospasm in asthmatic individuals, the evidence does not indicate that MSG is a significant trigger factor." However, the FSANZ MSG report says that although no data is available on average MSG consumption in Australia and New Zealand, "data from the United Kingdom indicates an average intake of 590 mg/day, with extreme users (97.5th percentile consumers) consuming 2,330 mg/day" (Rhodes et al. 1991). In a highly seasoned restaurant meal, intakes as high as 5,000 mg or more may be possible (Yang et al. 1997). When very large doses of MSG (>5 g MSG in a bolus dose) are ingested, plasma glutamate concentration will significantly increase. However, the concentration typically returns to normal within two hours. In general, foods providing metabolizable carbohydrates significantly attenuate peak plasma glutamate levels at doses up to 150 mg/kg body weight. Two earlier studies, the 1987 Joint FAO/WHO Expert Committee on Food Additives (JECFA) and the 1995 Federation of American Societies for Experimental Biology (FASEB), concluded, "there may be a small number of unstable asthmatics who respond to doses of 1.5–2.5 g of MSG in the absence of food". 
The FASEB evaluation concluded, "sufficient evidence exists to indicate some individuals may experience manifestations of CRS when exposed to a ≥3 g bolus dose of MSG in the absence of food". Production MSG has been produced by three methods: hydrolysis of vegetable proteins with hydrochloric acid to disrupt peptide bonds (1909–1962); direct chemical synthesis with acrylonitrile (1962–1973); and bacterial fermentation (the current method). Wheat gluten was originally used for hydrolysis because it contains more than 30 g of glutamate and glutamine per 100 g of protein. As demand for MSG increased, chemical synthesis and fermentation were studied. The polyacrylic fiber industry began in Japan during the mid-1950s, and acrylonitrile was adopted as a base material to synthesize MSG. As of 2016, most MSG worldwide is produced by bacterial fermentation in a process similar to making vinegar or yogurt. Sodium is added later, for neutralization. During fermentation, Corynebacterium species, cultured with ammonia and carbohydrates from sugar beets, sugarcane, tapioca or molasses, excrete amino acids into a culture broth from which L-glutamate is isolated. Kyowa Hakko Kogyo (currently Kyowa Kirin) developed industrial fermentation to produce L-glutamate. The conversion yield and production rate (from sugars to glutamate) continue to improve in the industrial production of MSG, keeping up with demand. The product, after filtration, concentration, acidification, and crystallization, is glutamate, sodium ions, and water. Chemical properties The compound is usually available as the monohydrate, a white, odorless, crystalline powder. The solid contains separate sodium cations and glutamate anions in zwitterionic form, −OOC−CH(NH3+)−(CH2)2−COO−. In solution it dissociates into glutamate and sodium ions. MSG is freely soluble in water, but it is not hygroscopic and is insoluble in common organic solvents (such as ether). It is generally stable under food-processing conditions. MSG does not break down during cooking and, like other amino acids, will exhibit a Maillard reaction (browning) in the presence of sugars at very high temperatures. History Glutamic acid was discovered and identified in 1866 by the German chemist Karl Heinrich Ritthausen, who treated wheat gluten (for which it was named) with sulfuric acid. Kikunae Ikeda of Tokyo Imperial University isolated glutamic acid as a taste substance in 1908 from the seaweed Laminaria japonica (kombu) by aqueous extraction and crystallization, calling its taste umami ("delicious taste"). Ikeda noticed that dashi, the Japanese broth of katsuobushi and kombu, had a unique taste not yet scientifically described (not sweet, salty, sour, or bitter). To determine which glutamate salt could result in the taste of umami, he studied the taste properties of numerous glutamate salts such as calcium, potassium, ammonium, and magnesium glutamate. Of these salts, monosodium glutamate was the most soluble and palatable, as well as the easiest to crystallize. Ikeda called his product "monosodium glutamate" and submitted a patent to produce MSG; the Suzuki brothers began commercial production of MSG in 1909 using the term Ajinomoto ("essence of taste"). Society and culture Regulations United States MSG is one of several forms of glutamic acid found in foods, in large part because glutamic acid (an amino acid) is pervasive in nature. 
Glutamic acid and its salts may be present in a variety of other additives, including hydrolyzed vegetable protein, autolyzed yeast, hydrolyzed yeast, yeast extract, soy extracts, and protein isolate, which must be specifically labeled. Since 1998, MSG cannot be included in the term "spices and flavorings". However, the term "natural flavor/s" is used by the food industry for glutamic acid (chemically similar to MSG, lacking only the sodium ion). The Food and Drug Administration (FDA) does not require disclosure of components and amounts of "natural flavor/s." Australia and New Zealand Standard 1.2.4 of the Australia and New Zealand Food Standards Code requires MSG to be labeled in packaged foods. The label must have the food-additive class name (e.g. "flavour enhancer"), followed by the name of the additive ("MSG") or its International Numbering System (INS) number, 621. Pakistan The Punjab Food Authority banned Ajinomoto, commonly known as Chinese salt, which contains MSG, from being used in food products in the Punjab Province of Pakistan in January 2018. The prohibition against the import and manufacture of MSG was enforced on 28 February 2018, following an order by the Supreme Court on 10 February 2018. In 2024, the federal government lifted the ban on MSG, following objections from Japan and a review of scientific evidence by an expert committee. The committee comprising experts from various institutions—including the Pakistan Council of Scientific and Industrial Research, National Agricultural Research Centre, and Pakistan Standards and Quality Control Authority—confirmed MSG as a safe food additive. Names The following are alternative names for MSG: Chemical names and identifiers Monosodium glutamate or sodium glutamate Sodium 2-aminopentanedioate Glutamic acid, monosodium salt, monohydrate L-Glutamic acid, monosodium salt, monohydrate L-Monosodium glutamate monohydrate Monosodium L-glutamate monohydrate MSG monohydrate Sodium glutamate monohydrate UNII-W81N5U6R6U Flavour enhancer E621 Trade names Accent, produced by B&G Foods Inc., Parsippany, New Jersey, US Aji-No-Moto, produced by Ajinomoto, 26 countries, head office Japan Tasting Powder Ve-Tsin by Tien Chu Ve-Tsin Sazón, distributed by Goya Foods, Jersey City, NJ Stigma in cuisine Origin The controversy surrounding the safety of MSG started with the publication of Robert Ho Man Kwok's correspondence letter titled "Chinese-Restaurant Syndrome" in the New England Journal of Medicine on 4 April 1968. In his letter, Kwok suggested several possible causes before he nominated MSG for his symptoms. This letter was initially met with insider satirical responses, often using race as prop for humorous effect, within the medical community. During the discursive uptake in media, the conversations were recontextualized as legitimate while the race-based motivations of the humor were not parsed, which replicated historical racial prejudices. Despite the resulting public backlash, the Food and Drug Administration (FDA) did not remove MSG from their Generally Recognized as Safe list. In 1970, a National Research Council under the National Academy of Science, on behalf of the FDA, investigated MSG but concluded that MSG was safe for consumption. Reactions The controversy about MSG is tied to racial stereotypes against East Asian societies. Herein, specifically East Asian cuisine was targeted, whereas the widespread usage of MSG in Western processed food does not generate the same stigma. 
These kinds of perceptions, such as the rhetoric of the so-called Chinese restaurant syndrome, have been attributed to xenophobic or racist biases. Food historian Ian Mosby wrote that fear of MSG in Chinese food is part of the US's long history of viewing the "exotic" cuisine of Asia as dangerous and dirty. In 2016, Anthony Bourdain stated in Parts Unknown that "I think MSG is good stuff ... You know what causes Chinese restaurant syndrome? Racism." In 2020, Ajinomoto, the leading manufacturer of MSG, and others launched the #RedefineCRS campaign, in reference to the term "Chinese restaurant syndrome", to combat the misconceptions about MSG, saying they intended to highlight both the xenophobic prejudice against East Asian cuisine and the scientific evidence. Following the campaign, Merriam-Webster announced it would review the term.
Physical sciences
Glutamates
Chemistry
62290
https://en.wikipedia.org/wiki/Dinoflagellate
Dinoflagellate
The dinoflagellates () are a monophyletic group of single-celled eukaryotes constituting the phylum Dinoflagellata and are usually considered protists. Dinoflagellates are mostly marine plankton, but they are also common in freshwater habitats. Their populations vary with sea surface temperature, salinity, and depth. Many dinoflagellates are photosynthetic, but a large fraction of these are in fact mixotrophic, combining photosynthesis with ingestion of prey (phagotrophy and myzocytosis). In terms of number of species, dinoflagellates are one of the largest groups of marine eukaryotes, although substantially smaller than diatoms. Some species are endosymbionts of marine animals and play an important part in the biology of coral reefs. Other dinoflagellates are unpigmented predators on other protozoa, and a few forms are parasitic (for example, Oodinium and Pfiesteria). Some dinoflagellates produce resting stages, called dinoflagellate cysts or dinocysts, as part of their lifecycles; this occurs in 84 of the 350 described freshwater species and a little more than 10% of the known marine species. Dinoflagellates are alveolates possessing two flagella, the ancestral condition of bikonts. About 1,555 species of free-living marine dinoflagellates are currently described. Another estimate suggests about 2,000 living species, of which more than 1,700 are marine (free-living, as well as benthic) and about 220 are from fresh water. The latest estimates suggest a total of 2,294 living dinoflagellate species, which includes marine, freshwater, and parasitic dinoflagellates. A rapid accumulation of certain dinoflagellates can result in a visible coloration of the water, colloquially known as red tide (a harmful algal bloom), which can cause shellfish poisoning if humans eat contaminated shellfish. Some dinoflagellates also exhibit bioluminescence, primarily emitting blue-green light, which may be visible in oceanic areas under certain conditions. Etymology The term "dinoflagellate" is a combination of the Greek dinos and the Latin flagellum. Dinos means "whirling" and signifies the distinctive way in which dinoflagellates were observed to swim. Flagellum means "whip" and this refers to their flagella. History In 1753, the first modern dinoflagellates were described by Henry Baker as "Animalcules which cause the Sparkling Light in Sea Water", and named by Otto Friedrich Müller in 1773. The term derives from the Greek word δῖνος (dînos), meaning whirling, and Latin flagellum, a diminutive term for a whip or scourge. In the 1830s, the German microscopist Christian Gottfried Ehrenberg examined many water and plankton samples and proposed several dinoflagellate genera that are still used today including Peridinium, Prorocentrum, and Dinophysis. These same dinoflagellates were first defined by Otto Bütschli in 1885 as the flagellate order Dinoflagellida. Botanists treated them as a division of algae, named Pyrrophyta or Pyrrhophyta ("fire algae"; Greek pyrr(h)os, fire) after the bioluminescent forms, or Dinophyta. At various times, the cryptomonads, ebriids, and ellobiopsids have been included here, but only the last are now considered close relatives. Dinoflagellates have a known ability to transform from noncyst to cyst-forming strategies, which makes recreating their evolutionary history extremely difficult. Morphology Dinoflagellates are unicellular and possess two dissimilar flagella arising from the ventral cell side (dinokont flagellation). 
They have a ribbon-like transverse flagellum with multiple waves that beats to the cell's left, and a more conventional one, the longitudinal flagellum, that beats posteriorly. The transverse flagellum is a wavy ribbon in which only the outer edge undulates from base to tip, due to the action of the axoneme which runs along it. The axonemal edge has simple hairs that can be of varying lengths. The flagellar movement produces forward propulsion and also a turning force. The longitudinal flagellum is relatively conventional in appearance, with few or no hairs. It beats with only one or two periods to its wave. The flagella lie in surface grooves: the transverse one in the cingulum and the longitudinal one in the sulcus, although its distal portion projects freely behind the cell. In dinoflagellate species with desmokont flagellation (e.g., Prorocentrum), the two flagella are differentiated as in dinokonts, but they are not associated with grooves. Dinoflagellates have a complex cell covering called an amphiesma or cortex, composed of a series of membranes, flattened vesicles called alveoli (= amphiesmal vesicles) and related structures. In thecate ("armoured") dinoflagellates, these support overlapping cellulose plates to create a sort of armor called the theca or lorica, as opposed to athecate ("nude") dinoflagellates. These occur in various shapes and arrangements, depending on the species and sometimes on the stage of the dinoflagellate. Conventionally, the term tabulation has been used to refer to this arrangement of thecal plates. The plate configuration can be denoted with the plate formula or tabulation formula. Fibrous extrusomes are also found in many forms. A transverse groove, the so-called cingulum (or cigulum) runs around the cell, thus dividing it into an anterior (episoma) and posterior (hyposoma). If and only if a theca is present, the parts are called epitheca and hypotheca, respectively. Posteriorly, starting from the transverse groove, there is a longitudinal furrow called the sulcus. The transverse flagellum strikes in the cingulum, the longitudinal flagellum in the sulcus. Together with various other structural and genetic details, this organization indicates a close relationship between the dinoflagellates, the Apicomplexa, and ciliates, collectively referred to as the alveolates. Dinoflagellate tabulations can be grouped into six "tabulation types": gymnodinoid, suessoid, gonyaulacoid–peridinioid, nannoceratopsioid, dinophysioid, and prorocentroid. Most Dinoflagellates have a plastid derived from secondary endosymbiosis of red algae, however dinoflagellates with plastids derived from green algae and tertiary endosymbiosis of diatoms have also been discovered. Similar to other photosynthetic organisms, dinoflagellates contain chlorophylls a and c2 and the carotenoid beta-carotene. Dinoflagellates also produce the xanthophylls including peridinin, dinoxanthin, and diadinoxanthin. These pigments give many dinoflagellates their typical golden brown color. However, the dinoflagellates Karenia brevis, Karenia mikimotoi, and Karlodinium micrum have acquired other pigments through endosymbiosis, including fucoxanthin. This suggests their chloroplasts were incorporated by several endosymbiotic events involving already colored or secondarily colorless forms. The discovery of plastids in the Apicomplexa has led some to suggest they were inherited from an ancestor common to the two groups, but none of the more basal lines has them. 
All the same, the dinoflagellate cell consists of the more common organelles such as rough and smooth endoplasmic reticulum, Golgi apparatus, mitochondria, lipid and starch grains, and food vacuoles. Some have even been found with a light-sensitive organelle, the eyespot or stigma, or a larger nucleus containing a prominent nucleolus. The dinoflagellate Erythropsidinium has the smallest known eye. Some athecate species have an internal skeleton consisting of two star-like siliceous elements that has an unknown function, and can be found as microfossils. Tappan gave a survey of dinoflagellates with internal skeletons. This included the first detailed description of the pentasters in Actiniscus pentasterias, based on scanning electron microscopy. They are placed within the order Gymnodiniales, suborder Actiniscineae. Theca structure and formation The formation of thecal plates has been studied in detail through ultrastructural studies. The dinoflagellate nucleus: dinokaryon 'Core dinoflagellates' (dinokaryotes) have a peculiar form of nucleus, called a dinokaryon, in which the chromosomes are attached to the nuclear membrane. These carry reduced number of histones. In place of histones, dinoflagellate nuclei contain a novel, dominant family of nuclear proteins that appear to be of viral origin, thus are called Dinoflagellate viral nucleoproteins (DVNPs) which are highly basic, bind DNA with similar affinity to histones, and occur in multiple posttranslationally modified forms. Dinoflagellate nuclei remain condensed throughout interphase rather than just during mitosis, which is closed and involves a uniquely extranuclear mitotic spindle. This sort of nucleus was once considered to be an intermediate between the nucleoid region of prokaryotes and the true nuclei of eukaryotes, so were termed "mesokaryotic", but now are considered derived rather than primitive traits (i. e. ancestors of dinoflagellates had typical eukaryotic nuclei). In addition to dinokaryotes, DVNPs can be found in a group of basal dinoflagellates (known as Marine Alveolates, "MALVs") that branch as sister to dinokaryotes (Syndiniales). Classification Generality Dinoflagellates are protists and have been classified using both the International Code of Botanical Nomenclature (ICBN, now renamed as ICN) and the International Code of Zoological Nomenclature (ICZN). About half of living dinoflagellate species are autotrophs possessing chloroplasts and half are nonphotosynthesising heterotrophs. The peridinin dinoflagellates, named after their peridinin plastids, appear to be ancestral for the dinoflagellate lineage. Almost half of all known species have chloroplasts, which are either the original peridinin plastids or new plastids acquired from other lineages of unicellular algae through endosymbiosis. The remaining species have lost their photosynthetic abilities and have adapted to a heterotrophic, parasitic or kleptoplastic lifestyle. Most (but not all) dinoflagellates have a dinokaryon, described below (see: Life cycle, below). Dinoflagellates with a dinokaryon are classified under Dinokaryota, while dinoflagellates without a dinokaryon are classified under Syndiniales. Although classified as eukaryotes, the dinoflagellate nuclei are not characteristically eukaryotic, as some of them lack histones and nucleosomes, and maintain continually condensed chromosomes during mitosis. 
The dinoflagellate nucleus was termed 'mesokaryotic' by Dodge (1966), due to its possession of intermediate characteristics between the coiled DNA areas of prokaryotic bacteria and the well-defined eukaryotic nucleus. This group, however, does contain typically eukaryotic organelles, such as Golgi bodies, mitochondria, and chloroplasts. Jakob Schiller (1931–1937) provided a description of all the species, both marine and freshwater, known at that time. Later, Alain Sournia (1973, 1978, 1982, 1990, 1993) listed the new taxonomic entries published after Schiller (1931–1937). Sournia (1986) gave descriptions and illustrations of the marine genera of dinoflagellates, excluding information at the species level. The latest index is written by Gómez. Identification English-language taxonomic monographs covering large numbers of species are published for the Gulf of Mexico, the Indian Ocean, the British Isles, the Mediterranean and the North Sea. The main source for identification of freshwater dinoflagellates is the Süsswasser Flora. Calcofluor-white can be used to stain thecal plates in armoured dinoflagellates. Ecology and physiology Habitats Dinoflagellates are found in all aquatic environments: marine, brackish, and fresh water, including in snow or ice. They are also common in benthic environments and sea ice. Endosymbionts All Zooxanthellae are dinoflagellates and most of them are members within Symbiodiniaceae (e.g. the genus Symbiodinium). The association between Symbiodinium and reef-building corals is widely known. However, endosymbiontic Zooxanthellae inhabit a great number of other invertebrates and protists, for example many sea anemones, jellyfish, nudibranchs, the giant clam Tridacna, and several species of radiolarians and foraminiferans. Many extant dinoflagellates are parasites (here defined as organisms that eat their prey from the inside, i.e. endoparasites, or that remain attached to their prey for longer periods of time, i.e. ectoparasites). They can parasitize animal or protist hosts. Protoodinium, Crepidoodinium, Piscinoodinium, and Blastodinium retain their plastids while feeding on their zooplanktonic or fish hosts. In most parasitic dinoflagellates, the infective stage resembles a typical motile dinoflagellate cell. Nutritional strategies Three nutritional strategies are seen in dinoflagellates: phototrophy, mixotrophy, and heterotrophy. Phototrophs can be photoautotrophs or auxotrophs. Mixotrophic dinoflagellates are photosynthetically active, but are also heterotrophic. Facultative mixotrophs, in which autotrophy or heterotrophy is sufficient for nutrition, are classified as amphitrophic. If both forms are required, the organisms are mixotrophic sensu stricto. Some free-living dinoflagellates do not have chloroplasts, but host a phototrophic endosymbiont. A few dinoflagellates may use alien chloroplasts (cleptochloroplasts), obtained from food (kleptoplasty). Some dinoflagellates may feed on other organisms as predators or parasites. Food inclusions contain bacteria, bluegreen algae, diatoms, ciliates, and other dinoflagellates. Mechanisms of capture and ingestion in dinoflagellates are quite diverse. Several dinoflagellates, both thecate (e.g. Ceratium hirundinella, Peridinium globulus) and nonthecate (e.g. Oxyrrhis marina, Gymnodinium sp. and Kofoidinium spp.), draw prey to the sulcal region of the cell (either via water currents set up by the flagella or via pseudopodial extensions) and ingest the prey through the sulcus. In several Protoperidinium spp., e.g. P. 
conicum, a large feeding veil—a pseudopod called the pallium—is extruded to capture prey which is subsequently digested extracellularly (= pallium-feeding). Oblea, Zygabikodinium, and Diplopsalis are the only other dinoflagellate genera known to use this particular feeding mechanism. Gymnodinium fungiforme, commonly found as a contaminant in algal or ciliate cultures, feeds by attaching to its prey and ingesting prey cytoplasm through an extensible peduncle. Two related genera, Polykrikos and Neatodinium, shoot out a harpoon-like organelle to capture prey. Some mixotrophic dinoflagellates are able to produce neurotoxins that have anti-grazing effects on larger copepods and enhance the ability of the dinoflagellate to prey upon larger copepods. Toxic strains of Karlodinium veneficum produce karlotoxin that kills predators who ingest them, thus reducing predatory populations and allowing blooms of both toxic and non-toxic strains of K. veneficum. Further, the production of karlotoxin enhances the predatory ability of K. veneficum by immobilizing its larger prey. K. armiger are more inclined to prey upon copepods by releasing a potent neurotoxin that immobilizes its prey upon contact. When K. armiger are present in large enough quantities, they are able to cull whole populations of their copepod prey. The feeding mechanisms of the oceanic dinoflagellates remain unknown, although pseudopodial extensions were observed in Podolampas bipes. Blooms Introduction Dinoflagellate blooms are generally unpredictable, short, with low species diversity, and with little species succession. The low species diversity can be due to multiple factors. One way a lack of diversity may occur in a bloom is through a reduction in predation and a decreased competition. The first may be achieved by having predators reject the dinoflagellate, by, for example, decreasing the amount of food it can eat. This additionally helps prevent a future increase in predation pressure by causing predators that reject it to lack the energy to breed. A species can then inhibit the growth of its competitors, thus achieving dominance. Harmful algal blooms Dinoflagellates sometimes bloom in concentrations of more than a million cells per millilitre. Under such circumstances, they can produce toxins (generally called dinotoxins) in quantities capable of killing fish and accumulating in filter feeders such as shellfish, which in turn may be passed on to people who eat them. This phenomenon is called a red tide, from the color the bloom imparts to the water. Some colorless dinoflagellates may also form toxic blooms, such as Pfiesteria. Some dinoflagellate blooms are not dangerous. Bluish flickers visible in ocean water at night often come from blooms of bioluminescent dinoflagellates, which emit short flashes of light when disturbed. A red tide occurs because dinoflagellates are able to reproduce rapidly and copiously as a result of the abundant nutrients in the water. Although the resulting red waves are an interesting visual phenomenon, they contain toxins that not only affect all marine life in the ocean, but the people who consume them as well. A specific carrier is shellfish. This can introduce both nonfatal and fatal illnesses. One such poison is saxitoxin, a powerful paralytic neurotoxin. Human inputs of phosphate further encourage these red tides, so strong interest exists in learning more about dinoflagellates, from both medical and economic perspectives. 
Dinoflagellates are known to be particularly capable of scavenging dissolved organic phosphorus (DOP) as a phosphorus source; several HAB (harmful algal bloom) species have been found to be highly versatile and mechanistically diversified in utilizing different types of DOP. The ecology of harmful algal blooms is extensively studied. Bioluminescence At night, water can have an appearance of sparkling light due to the bioluminescence of dinoflagellates. More than 18 genera of dinoflagellates are bioluminescent, and the majority of them emit a blue-green light. These species contain scintillons, individual cytoplasmic bodies (about 0.5 μm in diameter) distributed mainly in the cortical region of the cell, outpockets of the main cell vacuole. They contain dinoflagellate luciferase, the main enzyme involved in dinoflagellate bioluminescence, and luciferin, a chlorophyll-derived tetrapyrrole ring that acts as the substrate of the light-producing reaction. The luminescence occurs as a brief (0.1 s) blue flash (maximum at 476 nm) when the cell is stimulated, usually by mechanical disturbance. Therefore, when mechanically stimulated—by a boat, swimming, or waves, for example—a blue sparkling light can be seen emanating from the sea surface. Dinoflagellate bioluminescence is controlled by a circadian clock and only occurs at night. Luminescent and nonluminescent strains can occur in the same species. The number of scintillons is higher during the night than during the day, and they break down toward the end of the night, at the time of maximal bioluminescence. The luciferin-luciferase reaction responsible for the bioluminescence is pH sensitive. When the pH drops, luciferase changes its shape, allowing luciferin, more specifically its tetrapyrrole, to bind. Dinoflagellates can use bioluminescence as a defense mechanism. They can startle their predators with their flashing light, or they can ward off potential predators by an indirect effect such as the "burglar alarm": the bioluminescence attracts attention to the dinoflagellate and its attacker, making the predator more vulnerable to predation from higher trophic levels. Bioluminescent dinoflagellate bays are among the rarest and most fragile ecosystems, with the most famous ones being the Bioluminescent Bay in La Parguera, Lajas, Puerto Rico; Mosquito Bay in Vieques, Puerto Rico; and Las Cabezas de San Juan Reserva Natural, Fajardo, Puerto Rico. A bioluminescent lagoon is also found near Montego Bay, Jamaica, and bioluminescent harbors surround Castine, Maine. Within the United States, Central Florida is home to the Indian River Lagoon, which is abundant with dinoflagellates in the summer and bioluminescent ctenophores in the winter. Lipid and sterol production Dinoflagellates produce characteristic lipids and sterols. One of these sterols, dinosterol, is typical of dinoflagellates. Transport Dinoflagellate thecae can sink rapidly to the seafloor in marine snow. Life cycle Introduction Dinoflagellates have a haplontic life cycle, with the possible exception of Noctiluca and its relatives. The life cycle usually involves asexual reproduction by means of mitosis, either through desmoschisis or eleutheroschisis. More complex life cycles occur, particularly in parasitic dinoflagellates. Sexual reproduction also occurs, though it is known in only a small percentage of dinoflagellates. It takes place by fusion of two individuals to form a zygote, which may remain mobile in typical dinoflagellate fashion and is then called a planozygote.
This zygote may later form a resting stage or hypnozygote, which is called a dinoflagellate cyst or dinocyst. After (or before) germination of the cyst, the hatchling undergoes meiosis to produce new haploid cells. Dinoflagellates appear to be capable of carrying out several DNA repair processes that can deal with different types of DNA damage. Dinoflagellate cysts The life cycle of many dinoflagellates includes at least one nonflagellated benthic stage as a cyst. Different types of dinoflagellate cysts are mainly defined based on morphological (number and type of layers in the cell wall) and functional (long- or short-term endurance) differences. These characteristics were initially thought to clearly distinguish pellicle (thin-walled) cysts from resting (double-walled) dinoflagellate cysts. The former were considered short-term (temporal) and the latter long-term (resting) cysts. However, during the last two decades further knowledge has highlighted the great intricacy of dinoflagellate life histories. More than 10% of the approximately 2000 known marine dinoflagellate species produce cysts as part of their life cycle (see diagram on the right). These benthic phases play an important role in the ecology of the species, as part of a planktonic-benthic link in which the cysts remain in the sediment layer during conditions unfavorable for vegetative growth and, from there, reinoculate the water column when favorable conditions are restored. Indeed, during dinoflagellate evolution the need to adapt to fluctuating environments and/or to seasonality is thought to have driven the development of this life cycle stage. Most protists form dormant cysts in order to withstand starvation and UV damage. However, there are enormous differences in the main phenotypic, physiological and resistance properties of each dinoflagellate species cysts. Unlike in higher plants most of this variability, for example in dormancy periods, has not been proven yet to be attributed to latitude adaptation or to depend on other life cycle traits. Thus, despite recent advances in the understanding of the life histories of many dinoflagellate species, including the role of cyst stages, many gaps remain in knowledge about their origin and functionality. Recognition of the capacity of dinoflagellates to encyst dates back to the early 20th century, in biostratigraphic studies of fossil dinoflagellate cysts. Paul Reinsch was the first to identify cysts as the fossilized remains of dinoflagellates. Later, cyst formation from gamete fusion was reported, which led to the conclusion that encystment is associated with sexual reproduction. These observations also gave credence to the idea that microalgal encystment is essentially a process whereby zygotes prepare themselves for a dormant period. Because the resting cysts studied until that time came from sexual processes, dormancy was associated with sexuality, a presumption that was maintained for many years. This attribution was coincident with evolutionary theories about the origin of eukaryotic cell fusion and sexuality, which postulated advantages for species with diploid resting stages, in their ability to withstand nutrient stress and mutational UV radiation through recombinational repair, and for those with haploid vegetative stages, as asexual division doubles the number of cells. 
Nonetheless, certain environmental conditions may limit the advantages of recombination and sexuality, such that in fungi, for example, complex combinations of haploid and diploid cycles have evolved that include asexual and sexual resting stages. However, in the general life cycle of cyst-producing dinoflagellates as outlined in the 1960s and 1970s, resting cysts were assumed to be the fate of sexuality, which itself was regarded as a response to stress or unfavorable conditions. Sexuality involves the fusion of haploid gametes from motile planktonic vegetative stages to produce diploid planozygotes that eventually form cysts, or hypnozygotes, whose germination is subject to both endogenous and exogenous controls. Endogenously, a species-specific physiological maturation minimum period (dormancy) is mandatory before germination can occur. Thus, hypnozygotes were also referred to as "resting" or "resistant" cysts, in reference to this physiological trait and their capacity following dormancy to remain viable in the sediments for long periods of time. Exogenously, germination is only possible within a window of favorable environmental conditions. Yet, with the discovery that planozygotes were also able to divide it became apparent that the complexity of dinoflagellate life cycles was greater than originally thought. Following corroboration of this behavior in several species, the capacity of dinoflagellate sexual phases to restore the vegetative phase, bypassing cyst formation, became well accepted. Further, in 2006 Kremp and Parrow showed the dormant resting cysts of the Baltic cold water dinoflagellates Scrippsiella hangoei and Gymnodinium sp. were formed by the direct encystment of haploid vegetative cells, i.e., asexually. In addition, for the zygotic cysts of Pfiesteria piscicida dormancy was not essential. Genomics One of the most striking features of dinoflagellates is the large amount of cellular DNA that they contain. Most eukaryotic algae contain on average about 0.54 pg DNA/cell, whereas estimates of dinoflagellate DNA content range from 3–250 pg/cell, corresponding to roughly 3000–215 000 Mb (in comparison, the haploid human genome is 3180 Mb and hexaploid Triticum wheat is 16 000 Mb). Polyploidy or polyteny may account for this large cellular DNA content, but earlier studies of DNA reassociation kinetics and recent genome analyses do not support this hypothesis. Rather, this has been attributed, hypothetically, to the rampant retroposition found in dinoflagellate genomes. In addition to their disproportionately large genomes, dinoflagellate nuclei are unique in their morphology, regulation, and composition. Their DNA is so tightly packed that exactly how many chromosomes they have is still uncertain. The dinoflagellates share an unusual mitochondrial genome organisation with their relatives, the Apicomplexa. Both groups have very reduced mitochondrial genomes (around 6 kilobases (kb) in the Apicomplexa vs ~16kb for human mitochondria). One species, Amoebophrya ceratii, has lost its mitochondrial genome completely, yet still has functional mitochondria. The genes on the dinoflagellate genomes have undergone a number of reorganisations, including massive genome amplification and recombination which have resulted in multiple copies of each gene and gene fragments linked in numerous combinations. Loss of the standard stop codons, trans-splicing of mRNAs for the mRNA of cox3, and extensive RNA editing recoding of most genes has occurred. 
The reasons for this transformation are unknown. In a small group of dinoflagellates, called 'dinotoms' (Durinskia and Kryptoperidinium), the endosymbionts (diatoms) still have mitochondria, making them the only organisms with two evolutionarily distinct mitochondria. In most of the species, the plastid genome consist of just 14 genes. The DNA of the plastid in the peridinin-containing dinoflagellates is contained in a series of small circles called minicircles. Each circle contains one or two polypeptide genes. The genes for these polypeptides are chloroplast-specific because their homologs from other photosynthetic eukaryotes are exclusively encoded in the chloroplast genome. Within each circle is a distinguishable 'core' region. Genes are always in the same orientation with respect to this core region. In terms of DNA barcoding, ITS sequences can be used to identify species, where a genetic distance of p≥0.04 can be used to delimit species, which has been successfully applied to resolve long-standing taxonomic confusion as in the case of resolving the Alexandrium tamarense complex into five species. A recent study revealed a substantial proportion of dinoflagellate genes encode for unknown functions, and that these genes could be conserved and lineage-specific. Evolutionary history Dinoflagellates are mainly represented as fossils by dinocysts, which have a long geological record with lowest occurrences during the mid-Triassic, whilst geochemical markers suggest a presence to the Early Cambrian. Some evidence indicates dinosteroids in many Paleozoic and Precambrian rocks might be the product of ancestral dinoflagellates (protodinoflagellates). Dinoflagellates show a classic radiation of morphologies during the Late Triassic through the Middle Jurassic. More modern-looking forms proliferate during the later Jurassic and Cretaceous. This trend continues into the Cenozoic, albeit with some loss of diversity. Molecular phylogenetics show that dinoflagellates are grouped with ciliates and apicomplexans (=Sporozoa) in a well-supported clade, the alveolates. The closest relatives to dinokaryotic dinoflagellates appear to be apicomplexans, Perkinsus, Parvilucifera, syndinians, and Oxyrrhis. Molecular phylogenies are similar to phylogenies based on morphology. The earliest stages of dinoflagellate evolution appear to be dominated by parasitic lineages, such as perkinsids and syndinians (e.g. Amoebophrya and Hematodinium). All dinoflagellates contain red algal plastids or remnant (nonphotosynthetic) organelles of red algal origin. The parasitic dinoflagellate Hematodinium however lacks a plastid entirely. Some groups that have lost the photosynthetic properties of their original red algae plastids has obtained new photosynthetic plastids (chloroplasts) through so-called serial endosymbiosis, both secondary and tertiary: Lepidodinium unusually possesses a green algae-derived plastid (all other serially-acquired plastids can be traced back to red algae). The plastid is most related to free-living Pedinomonas (hence likely secondary). Two previously undescribed dinoflagellates ("MGD" and "TGD") contain a closely-related plastid. Karenia, Karlodinium, and Takayama possess plastids of haptophyte origin, produced in three separate events. "Dinotoms" (Durinskia and Kryptoperidinium) have plastids derived from diatoms. Some species also perform kleptoplasty: Dinophysis have plastids from a cryptomonad, due to kleptoplasty from a cilate prey. 
The Kareniaceae (which contains the three genera with haptophyte-derived plastids) contains two separate cases of kleptoplasty. Dinoflagellate evolution has been summarized into five principal organizational types: prorocentroid, dinophysoid, gonyaulacoid, peridinioid, and gymnodinoid. The transitions of marine species into fresh water have been frequent events during the diversification of dinoflagellates, and some have occurred recently. Many dinoflagellates also have a symbiotic relationship with cyanobacteria, called cyanobionts, which have reduced genomes and have not been found outside their hosts. The dinophysoid dinoflagellates include two genera, Amphisolenia and Triposolenia, that contain intracellular cyanobionts, and four genera, Citharistes, Histioneis, Parahistioneis, and Ornithocercus, that contain extracellular cyanobionts. Most of the cyanobionts are used for nitrogen fixation rather than photosynthesis, but some lack the ability to fix nitrogen. The dinoflagellate Ornithocercus magnificus is host to symbionts that reside in an extracellular chamber. While it is not fully known how the dinoflagellate benefits from them, it has been suggested that it farms the cyanobacteria in specialized chambers and regularly digests some of them. Recently, the living fossil Dapsilidinium pastielsii was found inhabiting the Indo-Pacific Warm Pool, which served as a refugium for thermophilic dinoflagellates; others, such as Calciodinellum operosum and Posoniella tricarinelloides, were also described from fossils before later being found alive. Examples Alexandrium, Gonyaulax, Gymnodinium, Lingulodinium polyedrum
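The ITS barcoding criterion mentioned above (a genetic distance of p ≥ 0.04 delimiting species) can be illustrated with a minimal sketch of an uncorrected p-distance, i.e. the proportion of compared sites that differ between two aligned sequences. The sequences and the helper function below are hypothetical and only meant to show the calculation; real barcoding work uses curated alignments and established software.

```python
def p_distance(seq1: str, seq2: str) -> float:
    """Uncorrected p-distance: fraction of compared sites that differ.

    Assumes the two sequences are already aligned; gap characters ('-') are skipped.
    """
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to the same length")
    compared = differing = 0
    for a, b in zip(seq1.upper(), seq2.upper()):
        if a == "-" or b == "-":
            continue
        compared += 1
        if a != b:
            differing += 1
    return differing / compared

# Toy aligned ITS fragments (hypothetical, for illustration only)
its_a = "ACGTGCTAGCTAGGTACGATCGTACGTAGCTA"
its_b = "ACGTGCTAGCTCGGTACGATCATACGTAGCTA"

p = p_distance(its_a, its_b)
print(f"p-distance = {p:.3f}; distinct species by the p >= 0.04 criterion: {p >= 0.04}")
```

In practice the threshold is applied to distances computed across many ITS sequences, but the per-pair calculation is the simple proportion shown here.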
62297
https://en.wikipedia.org/wiki/Alveolate
Alveolate
The alveolates (meaning "pitted like a honeycomb") are a group of protists, considered a major clade and superphylum within Eukarya. They are currently grouped with the stramenopiles and Rhizaria among the protists with tubulocristate mitochondria into the SAR supergroup. Characteristics The most notable shared characteristic is the presence of cortical (near the surface) alveoli (sacs). These are flattened vesicles (sacs) arranged as a layer just under the membrane and supporting it, typically contributing to a flexible pellicle (thin skin). In armored dinoflagellates they may contain stiff plates. Alveolates have mitochondria with tubular cristae (invaginations), and cells often have pore-like intrusions through the cell surface. The group contains free-living and parasitic organisms, predatory flagellates, and photosynthetic organisms. Almost all sequenced mitochondrial genomes of ciliates and apicomplexa are linear. The mitochondria almost all carry mtDNA of their own but with greatly reduced genome sizes. Exceptions are Cryptosporidium which are left with only a mitosome, the circular mitochondrial genomes of Acavomonas and Babesia microti, and Toxoplasmas highly fragmented mitochondrial genome, consisting of 21 sequence blocks which recombine to produce longer segments. History The relationship of apicomplexa, dinoflagellates and ciliates had been suggested during the 1980s, and this was confirmed in the early 1990s by comparisons of ribosomal RNA sequences, most notably by Gajadhar et al. Cavalier-Smith introduced the formal name Alveolata in 1991, although at the time he considered the grouping to be a paraphyletic assemblage. Many biologists prefer the use of the colloquial name 'alveolate'. Classification Alveolata include around nine major and minor groups. They are diverse in form, and are known to be related by various ultrastructural and genetic similarities: Ciliates – very common protozoa with many short cilia arranged in rows, and two nuclei Acavomonidia Colponemidia Dinoflagellates s.l. – mostly marine flagellates many of which have chloroplasts Perkinsozoa Chromerida – a marine phylum of photosynthetic protozoa Colpodellida Voromonadida Apicomplexa – parasitic and secondary non-photosynthetic protozoa that lack axonemal locomotive structures except in gametes The Acavomonidia and Colponemidia were previously grouped together as colponemids, a taxon now split because each has a distinctive organization or ultrastructural identity. The Acavomonidia are closer to the dinoflagellate/perkinsid group than the Colponemidia are. As such, the informal term "colponemids", as it stands currently, covers two non-sister groups within Alveolata: the Acavomonidia and the Colponemidia. The Apicomplexa and dinoflagellates may be more closely related to each other than to the ciliates. Both have plastids, and most share a bundle or cone of microtubules at the top of the cell. In apicomplexans this forms part of a complex used to enter host cells, while in some colorless dinoflagellates it forms a peduncle used to ingest prey. Various other genera are closely related to these two groups, mostly flagellates with a similar apical structure. These include free-living members in Oxyrrhis and Colponema, and parasites in Perkinsus, Parvilucifera, Rastrimonas and the ellobiopsids. In 2001, direct amplification of the rRNA gene in marine picoplankton samples revealed the presence of two novel alveolate lineages, called group I and II. 
Group I has no cultivated relatives, while group II is related to the dinoflagellate parasite Amoebophrya, which was classified until now in the Syndiniales dinoflagellate order. Some studies suggested the haplosporids, mostly parasites of marine invertebrates, might belong here, but they lack alveoli and are now placed among the Cercozoa. The ellobiopsids are of uncertain relation within the alveolates. Silberman et al 2004 establish that the Thalassomyces genus of ellobiopsids are alveolates using phylogenetic analysis, however no more certainty exists on their place. Phylogeny In 2017, Thomas Cavalier-Smith described the phylogeny of the Alveolata as follows: TaxonomyAlveolata''' Cavalier-Smith 1991 [Alveolatobiontes] Phylum Ciliophora Doflein 1901 stat. n. Copeland 1956 [Ciliata Perty 1852; Infusoria Bütschli 1887; Ciliae, Ciliozoa, Cytoidea, Eozoa, Heterocaryota, Heterokaryota] Subphylum Postciliodesmatophora Gerassimova & Seravin 1976 Class Heterotrichea Stein 1859 Class Karyorelictea Corliss 1974 Subphylum Intramacronucleata Lynn 1996 Class ?Mesodiniea Chen et al. 2015 Infraphylum Lamellicorticata Class Litostomatea Small & Lynn 1981 Class Armophorea Lynn 2004 Class Cariacotrichea Orsi et al. 2011 Class Spirotrichea Bütschli 1889 Infraphylum Ventrata Cavalier-Smith 2004 [Conthreep Lynn 2012] Order ?Discotrichida Chen et al. 2015 Class Protocruziea Chen et al. 2015 [Protocruziidia de Puytorac, Grain & Mignot 1987] Class Colpodea Small & Lynn 1981 Class Nassophorea Small & Lynn 1981 Class Phyllopharyngea de Puytorac et al. 1974 Class Prostomatea Schewiakoff 1896 Class Plagiopylea Small & Lynn 1985 sensu Lynn 2008 Class Oligohymenophorea de Puytorac et al. 1974 Phylum Miozoa Cavalier-Smith 1987 Subphylum Colponemidia Tikhonenkov, Mylnikov & Keeling 2013 Class Colponemea Cavalier-Smith 1993 Subphylum Acavomonadia Tikhonenkov et al. 2014 Class Acavomonadea Tikhonenkov et al. 2014 Subphylum Myzozoa Cavalier-Smith 2004 Infraphylum Apicomplexa Levine 1970 emend. Adl et al. 2005 Order ?Vitrellida Cavalier-Smith 2017 Class ?Myzomonadea Cavalier-Smith & Chao 2004 sensu Ruggiero et al. 2015 Class Chromerea Order Colpodellida Patterson & Zölffel 1991 [Spiromonadida Krylov & Mylnikov 1986] Superclass Sporozoa Leuckart 1879 stat. nov. Cavalier-Smith 2013 [Gamontozoa] Class Blastogregarinida Chatton & Villeneuve 1936 [Blastogregarinina; Blastogregarinorina Chatton & Villeneuve 1936] Class Paragregarea Cavalier-Smith 2014 Class Gregarinomorphea Grassé 1953 Class Coccidiomorphea Doflein 1901 Infraphylum Dinozoa Cavalier-Smith 1981 emend. 2003 Order ?Acrocoelida Cavalier-Smith & Chao 2004 Order ?Rastromonadida Cavalier-Smith & Chao 2004 Class Squirmidea Norén 1999 stat. nov. Cavalier-Smith 2014 Superclass Perkinsozoa Norén et al. 1999 s.s. Class Perkinsea Levine 1978 [Perkinsasida Levine 1978] Superclass Dinoflagellata Butschli 1885 stat. nov. Cavalier-Smith 1999 sensu Cavalier-Smith 2013 [Dinozoa Cavalier-Smith 1981] Class Pronoctilucea Class Ellobiopsea Cavalier-Smith 1993 [Ellobiophyceae Loeblich III 1970; Ellobiopsida Whisler 1990] Class Myzodinea Cavalier-Smith 2017 Class Oxyrrhea Cavalier-Smith 1987 Class Syndinea Chatton 1920 s.l. [Syndiniophyceae Loeblich III 1970 s.s.; Syndina Cavalier-Smith] Class Endodinea Cavalier-Smith 2017 Class Noctiluciphyceae Fensome et al. 1993 [Noctilucae Haeckel 1866; Noctilucea Haeckel 1866 stat. nov.; Cystoflagellata Haeckel 1873 stat. nov. Butschli 1887] Class Dinophyceae Pascher 1914 [Peridinea Ehrenberg 1830 stat. nov. 
Wettstein] Development The development of plastids among the alveolates is intriguing. Cavalier-Smith proposed the alveolates developed from a chloroplast-containing ancestor, which also gave rise to the Chromista (the chromalveolate hypothesis). Other researchers have speculated that the alveolates originally lacked plastids and possibly the dinoflagellates and Apicomplexa acquired them separately. However, it now appears that the alveolates, the dinoflagellates, the Chromerida and the heterokont algae acquired their plastids from a red alga with evidence of a common origin of this organelle in all these four clades. Evolution A Bayesian estimate places the evolution of the alveolate group at ~. The Alveolata consist of Myzozoa, Ciliates, and Colponemids. In other words, the term Myzozoa, meaning "to siphon the contents from prey", may be applied informally to the common ancestor of the subset of alveolates that are neither ciliates nor colponemids. Predation upon algae is an important driver in alveolate evolution, as it can provide sources for endosymbiosis of novel plastids. The term Myzozoa is therefore a handy concept for tracking the history of the alveolate phylum. The ancestors of the alveolate group may have been photosynthetic. The ancestral alveolate probably possessed a plastid. Chromerids, apicomplexans, and peridinin dinoflagellates have retained this organelle. Going one step even further back, the chromerids, the peridinin dinoflagellates and the heterokont algae have been argued to possess a monophyletic plastid lineage in common, i.e. acquired their plastids from a red alga, and so it seems likely that the common ancestor of alveolates and heterokonts was also photosynthetic. In one school of thought the common ancestor of the dinoflagellates, apicomplexans, Colpodella, Chromerida, and Voromonas'' was a myzocytotic predator with two heterodynamic flagella, micropores, trichocysts, rhoptries, micronemes, a polar ring and a coiled open sided conoid. While the common ancestor of alveolates may also have possessed some of these characteristics, it has been argued that Myzocytosis was not one of these characteristics, as ciliates ingest prey by a different mechanism. An ongoing debate concerns the number of membranes surrounding the plastid across apicomplexans and certain dinoflagellates, and the origin of these membranes. This ultrastructural character can be used to group organisms and if the character is in common, it can imply that phyla had a common photosynthetic ancestor. On the basis that apicomplexans possess a plastid surrounded by four membranes, and that peridinin dinoflagellates possess a plastid surrounded by three membranes, Petersen et al. have been unable to rule out that the shared stramenopile-alveolate plastid could have been recycled multiple times in the alveolate phylum, the source being stramenopile-alveolate donors, through the mechanism of ingestion and endosymbiosis. Ciliates are a model alveolate, having been genetically studied in great depth over the longest period of any alveolate lineage. They are unusual among eukaryotes in that reproduction involves a micronucleus and a macronucleus. Their reproduction is easily studied in the lab, and made them a model eukaryote historically. 
Ciliates are entirely predatory and lack any remnant plastid; the development of this phylum illustrates how predation and autotrophy are in dynamic balance, and how that balance can swing one way or the other at the point of origin of a new phylum from mixotrophic ancestors, causing one ability to be lost. Epigenetics Few algae have been studied for epigenetics. Those for which epigenetic data are available include some algal alveolates.
62299
https://en.wikipedia.org/wiki/Cryptomonad
Cryptomonad
The cryptomonads (or cryptophytes) are a group of algae, most of which have plastids. They are traditionally considered a division of algae among phycologists, under the name of Cryptophyta. They are common in freshwater, and also occur in marine and brackish habitats. Each cell is around 10–50 μm in size and flattened in shape, with an anterior groove or pocket. At the edge of the pocket there are typically two slightly unequal flagella. Some may exhibit mixotrophy. They are classified as clade Cryptomonada, which is divided into two classes: heterotrophic Goniomonadea and phototrophic Cryptophyceae. The two groups are united under three shared morphological characteristics: presence of a periplast, ejectisomes with secondary scroll, and mitochondrial cristae with flat tubules. Genetic studies as early as 1994 also supported the hypothesis that Goniomonas was sister to Cryptophyceae. A study in 2018 found strong evidence that the common ancestor of Cryptomonada was an autotrophic protist. Characteristics Cryptomonads are distinguished by the presence of characteristic extrusomes called ejectosomes, which consist of two connected spiral ribbons held under tension. If the cells are irritated either by mechanical, chemical or light stress, they discharge, propelling the cell in a zig-zag course away from the disturbance. Large ejectosomes, visible under the light microscope, are associated with the pocket; smaller ones occur underneath the periplast, the cryptophyte-specific cell surrounding. Except for the class Goniomonadea, which lacks plastids entirely, and Cryptomonas paramecium (previously called Chilomonas paramecium), which has leucoplasts, cryptomonads have one or two chloroplasts. These contain chlorophylls a and c, together with phycobiliproteins and other pigments, and vary in color (brown, red to blueish-green). Each is surrounded by four membranes, and there is a reduced cell nucleus called a nucleomorph between the middle two. This indicates that the plastid was derived from a eukaryotic symbiont, shown by genetic studies to have been a red alga. However, the plastids are very different from red algal plastids: phycobiliproteins are present but only in the thylakoid lumen and are present only as phycoerythrin or phycocyanin. In the case of Rhodomonas, the crystal structure has been determined to 1.63Å; and it has been shown that the alpha subunit bears no relation to any other known phycobiliprotein. A few cryptomonads, such as Cryptomonas, can form palmelloid stages, but readily escape the surrounding mucus to become free-living flagellates again. Some Cryptomonas species may also form immotile microbial cysts—resting stages with rigid cell walls to survive unfavorable conditions. Cryptomonad flagella are inserted parallel to one another, and are covered by bipartite hairs called mastigonemes, formed within the endoplasmic reticulum and transported to the cell surface. Small scales may also be present on the flagella and cell body. The mitochondria have flat cristae, and mitosis is open; sexual reproduction has also been reported. Classification The first mention of cryptomonads appears to have been made by Christian Gottfried Ehrenberg in 1831, while studying Infusoria. Later, botanists treated them as a separate algae group, class Cryptophyceae or division Cryptophyta, while zoologists treated them as the flagellate protozoa order Cryptomonadina. 
In some classifications, the cryptomonads were considered close relatives of the dinoflagellates because of their (seemingly) similar pigmentation, being grouped as the Pyrrhophyta. Cryptomonad chloroplasts are closely related to those of the heterokonts and haptophytes, and the three groups were united by Cavalier-Smith as the Chromista. However, the case that the organisms themselves are closely related was counter-indicated by the major differences in cell organization (ultrastructural identity), suggesting that the three major lineages assigned to the chromists had acquired plastids independently, and that chromists are polyphyletic. The perspective that cryptomonads are primitively heterotrophic and secondarily acquired chloroplasts, is supported by molecular evidence. Parfrey et al. and Burki et al. placed Cryptophyceae as a sister clade to the Green Algae, or green algae plus glaucophytes. The sister group to the cryptomonads is likely the kathablepharids (also referred to as katablepharids), a group of flagellates that also have ejectisomes. One suggested grouping is as follows: (1) Cryptomonas, (2) Chroomonas/Komma and Hemiselmis, (3) Rhodomonas/Rhinomonas/Storeatula, (4) Guillardia/Hanusia, (5) Geminigera/Plagioselmis/Teleaulax, (6) Proteomonas sulcata, (7) Falcomonas daucoides.
62329
https://en.wikipedia.org/wiki/Meta-analysis
Meta-analysis
Meta-analysis is a method of synthesis of quantitative data from multiple independent studies addressing a common research question. An important part of this method involves computing a combined effect size across all of the studies. As such, this statistical approach involves extracting effect sizes and variance measures from various studies. By combining these effect sizes the statistical power is improved and can resolve uncertainties or discrepancies found in individual studies. Meta-analyses are integral in supporting research grant proposals, shaping treatment guidelines, and influencing health policies. They are also pivotal in summarizing existing research to guide future studies, thereby cementing their role as a fundamental methodology in metascience. Meta-analyses are often, but not always, important components of a systematic review. History The term "meta-analysis" was coined in 1976 by the statistician Gene Glass, who stated "Meta-analysis refers to the analysis of analyses". Glass's work aimed at describing aggregated measures of relationships and effects. While Glass is credited with authoring the first modern meta-analysis, a paper published in 1904 by the statistician Karl Pearson in the British Medical Journal collated data from several studies of typhoid inoculation and is seen as the first time a meta-analytic approach was used to aggregate the outcomes of multiple clinical studies. Numerous other examples of early meta-analyses can be found including occupational aptitude testing, and agriculture. The first model meta-analysis was published in 1978 on the effectiveness of psychotherapy outcomes by Mary Lee Smith and Gene Glass. After publication of their article there was pushback on the usefulness and validity of meta-analysis as a tool for evidence synthesis. The first example of this was by Hans Eysenck who in a 1978 article in response to the work done by Mary Lee Smith and Gene Glass called meta-analysis an "exercise in mega-silliness". Later Eysenck would refer to meta-analysis as "statistical alchemy". Despite these criticisms the use of meta-analysis has only grown since its modern introduction. By 1991 there were 334 published meta-analyses; this number grew to 9,135 by 2014. The field of meta-analysis expanded greatly since the 1970s and touches multiple disciplines including psychology, medicine, and ecology. Further the more recent creation of evidence synthesis communities has increased the cross pollination of ideas, methods, and the creation of software tools across disciplines. Literature Search One of the most important steps of a meta-analysis is data collection. For an efficient database search, appropriate keywords and search limits need to be identified. The use of Boolean operators and search limits can assist the literature search. A number of databases are available (e.g., PubMed, Embase, PsychInfo), however, it is up to the researcher to choose the most appropriate sources for their research area. Indeed, many scientists use duplicate search terms within two or more databases to cover multiple sources. The reference lists of eligible studies can also be searched for eligible studies (i.e., snowballing). The initial search may return a large volume of studies. Quite often, the abstract or the title of the manuscript reveals that the study is not eligible for inclusion, based on the pre-specified criteria. These studies can be discarded. 
However, if it appears that the study may be eligible (or even if there is some doubt) the full paper can be retained for closer inspection. The reference lists of eligible articles can also be searched for any relevant articles. These search results need to be detailed in a PRISMA flow diagram, which details the flow of information through all stages of the review. Thus, it is important to note how many studies were returned after using the specified search terms and how many of these studies were discarded, and for what reason. The search terms and strategy should be specific enough for a reader to reproduce the search. The date range of studies, along with the date (or date period) the search was conducted, should also be provided. A data collection form provides a standardized means of collecting data from eligible studies. For a meta-analysis of correlational data, effect size information is usually collected as Pearson's r statistic. Partial correlations are often reported in research; however, these may inflate relationships in comparison to zero-order correlations. Moreover, the partialed out variables will likely vary from study to study. As a consequence, many meta-analyses exclude partial correlations from their analysis. As a final resort, plot digitizers can be used to scrape data points from scatterplots (if available) for the calculation of Pearson's r. Data reporting important study characteristics that may moderate effects, such as the mean age of participants, should also be collected. A measure of study quality can also be included in these forms to assess the quality of evidence from each study. There are more than 80 tools available to assess the quality and risk of bias in observational studies, reflecting the diversity of research approaches between fields. These tools usually include an assessment of how dependent variables were measured, appropriate selection of participants, and appropriate control for confounding factors. Other quality measures that may be more relevant for correlational studies include sample size, psychometric properties, and reporting of methods. A final consideration is whether to include studies from the gray literature, which is defined as research that has not been formally published. This type of literature includes conference abstracts, dissertations, and pre-prints. While the inclusion of gray literature reduces the risk of publication bias, the methodological quality of the work is often (but not always) lower than formally published work. Reports from conference proceedings, which are the most common source of gray literature, are poorly reported and data in the subsequent publication is often inconsistent, with differences observed in almost 20% of published studies. Methods and assumptions Approaches In general, two types of evidence can be distinguished when performing a meta-analysis: individual participant data (IPD), and aggregate data (AD). The aggregate data can be direct or indirect. AD is more commonly available (e.g. from the literature) and typically represents summary estimates such as odds ratios or relative risks. This can be directly synthesized across conceptually similar studies using several approaches. On the other hand, indirect aggregate data measures the effect of two treatments that were each compared against a similar control group in a meta-analysis.
For example, if treatment A and treatment B were directly compared vs placebo in separate meta-analyses, we can use these two pooled results to get an estimate of the effects of A vs B in an indirect comparison as effect A vs Placebo minus effect B vs Placebo. IPD evidence represents raw data as collected by the study centers. This distinction has raised the need for different meta-analytic methods when evidence synthesis is desired, and has led to the development of one-stage and two-stage methods. In one-stage methods the IPD from all studies are modeled simultaneously whilst accounting for the clustering of participants within studies. Two-stage methods first compute summary statistics for AD from each study and then calculate overall statistics as a weighted average of the study statistics. By reducing IPD to AD, two-stage methods can also be applied when IPD is available; this makes them an appealing choice when performing a meta-analysis. Although it is conventionally believed that one-stage and two-stage methods yield similar results, recent studies have shown that they may occasionally lead to different conclusions. Statistical models for aggregate data Fixed effect model The fixed effect model provides a weighted average of a series of study estimates. The inverse of the estimates' variance is commonly used as study weight, so that larger studies tend to contribute more than smaller studies to the weighted average. Consequently, when studies within a meta-analysis are dominated by a very large study, the findings from smaller studies are practically ignored. Most importantly, the fixed effects model assumes that all included studies investigate the same population, use the same variable and outcome definitions, etc. This assumption is typically unrealistic as research is often prone to several sources of heterogeneity. If we start with a collection of independent effect size estimates $\hat{\theta}_i$, each estimating a corresponding effect size $\theta_i$, we can assume that $\hat{\theta}_i = \theta_i + e_i$, where $\hat{\theta}_i$ denotes the observed effect in the $i$-th study, $\theta_i$ the corresponding (unknown) true effect, $e_i$ is the sampling error, and $e_i \sim N(0, v_i)$. Therefore, the $\hat{\theta}_i$'s are assumed to be unbiased and normally distributed estimates of their corresponding true effects, and the sampling variances (i.e., the $v_i$ values) are assumed to be known. Random effects model Most meta-analyses are based on sets of studies that are not exactly identical in their methods and/or the characteristics of the included samples. Differences in the methods and sample characteristics may introduce variability ("heterogeneity") among the true effects. One way to model the heterogeneity is to treat it as purely random. The weight that is applied in this process of weighted averaging with a random effects meta-analysis is achieved in two steps: Step 1: Inverse variance weighting Step 2: Un-weighting of this inverse variance weighting by applying a random effects variance component (REVC) that is simply derived from the extent of variability of the effect sizes of the underlying studies. This means that the greater this variability in effect sizes (otherwise known as heterogeneity), the greater the un-weighting and this can reach a point when the random effects meta-analysis result becomes simply the un-weighted average effect size across the studies. At the other extreme, when all effect sizes are similar (or variability does not exceed sampling error), no REVC is applied and the random effects meta-analysis defaults to simply a fixed effect meta-analysis (only inverse variance weighting).
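To make the two weighting schemes above concrete, here is a minimal Python sketch (NumPy only) of an inverse-variance fixed-effect pool and a random-effects pool. The DerSimonian–Laird estimator used below is one common choice for the REVC (τ²), not the only one mentioned in this article; the effect sizes and variances are made-up illustrative numbers, not data from any real meta-analysis.

```python
import numpy as np

def fixed_effect(y, v):
    """Inverse-variance weighted (fixed-effect) pooled estimate and its variance."""
    w = 1.0 / v                              # study weights = inverse sampling variance
    theta = np.sum(w * y) / np.sum(w)
    return theta, 1.0 / np.sum(w)

def dersimonian_laird(y, v):
    """Random-effects pool using the DerSimonian-Laird estimate of tau^2 (the REVC)."""
    w = 1.0 / v
    theta_f = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - theta_f) ** 2)       # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)  # between-study variance, floored at zero
    w_star = 1.0 / (v + tau2)                # the "un-weighted" random-effects weights
    theta_r = np.sum(w_star * y) / np.sum(w_star)
    return theta_r, 1.0 / np.sum(w_star), tau2

# Hypothetical effect sizes (e.g., log odds ratios) and their sampling variances
y = np.array([0.30, 0.10, 0.45, 0.20])
v = np.array([0.04, 0.01, 0.09, 0.02])

print(fixed_effect(y, v))
print(dersimonian_laird(y, v))
```

As τ² grows, the terms 1/(vᵢ + τ²) approach a common value and the random-effects pool approaches the unweighted mean of the study estimates, which is exactly the behaviour described above; when τ² is estimated as zero the two functions return the same pooled estimate.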
The extent of this reversal is solely dependent on two factors: Heterogeneity of precision Heterogeneity of effect size Since neither of these factors automatically indicates a faulty larger study or more reliable smaller studies, the re-distribution of weights under this model will not bear a relationship to what these studies actually might offer. Indeed, it has been demonstrated that redistribution of weights is simply in one direction from larger to smaller studies as heterogeneity increases until eventually all studies have equal weight and no more redistribution is possible. Another issue with the random effects model is that the most commonly used confidence intervals generally do not retain their coverage probability above the specified nominal level and thus substantially underestimate the statistical error and are potentially overconfident in their conclusions. Several fixes have been suggested but the debate continues on. A further concern is that the average treatment effect can sometimes be even less conservative compared to the fixed effect model and therefore misleading in practice. One interpretational fix that has been suggested is to create a prediction interval around the random effects estimate to portray the range of possible effects in practice. However, an assumption behind the calculation of such a prediction interval is that trials are considered more or less homogeneous entities and that included patient populations and comparator treatments should be considered exchangeable and this is usually unattainable in practice. There are many methods used to estimate between studies variance with restricted maximum likelihood estimator being the least prone to bias and one of the most commonly used. Several advanced iterative techniques for computing the between studies variance exist including both maximum likelihood and restricted maximum likelihood methods and random effects models using these methods can be run with multiple software platforms including Excel, Stata, SPSS, and R. Most meta-analyses include between 2 and 4 studies and such a sample is more often than not inadequate to accurately estimate heterogeneity. Thus it appears that in small meta-analyses, an incorrect zero between study variance estimate is obtained, leading to a false homogeneity assumption. Overall, it appears that heterogeneity is being consistently underestimated in meta-analyses and sensitivity analyses in which high heterogeneity levels are assumed could be informative. These random effects models and software packages mentioned above relate to study-aggregate meta-analyses and researchers wishing to conduct individual patient data (IPD) meta-analyses need to consider mixed-effects modelling approaches./ Quality effects model Doi and Thalib originally introduced the quality effects model. They introduced a new approach to adjustment for inter-study variability by incorporating the contribution of variance due to a relevant component (quality) in addition to the contribution of variance due to random error that is used in any fixed effects meta-analysis model to generate weights for each study. The strength of the quality effects meta-analysis is that it allows available methodological evidence to be used over subjective random effects, and thereby helps to close the damaging gap which has opened up between methodology and statistics in clinical research. 
To do this a synthetic bias variance is computed based on quality information to adjust inverse variance weights and the quality adjusted weight of the ith study is introduced. These adjusted weights are then used in meta-analysis. In other words, if study i is of good quality and other studies are of poor quality, a proportion of their quality adjusted weights is mathematically redistributed to study i giving it more weight towards the overall effect size. As studies become increasingly similar in terms of quality, re-distribution becomes progressively less and ceases when all studies are of equal quality (in the case of equal quality, the quality effects model defaults to the IVhet model – see previous section). A recent evaluation of the quality effects model (with some updates) demonstrates that despite the subjectivity of quality assessment, the performance (MSE and true variance under simulation) is superior to that achievable with the random effects model. This model thus replaces the untenable interpretations that abound in the literature and a software is available to explore this method further. Network meta-analysis methods Indirect comparison meta-analysis methods (also called network meta-analyses, in particular when multiple treatments are assessed simultaneously) generally use two main methodologies. First, is the Bucher method which is a single or repeated comparison of a closed loop of three-treatments such that one of them is common to the two studies and forms the node where the loop begins and ends. Therefore, multiple two-by-two comparisons (3-treatment loops) are needed to compare multiple treatments. This methodology requires that trials with more than two arms have two arms only selected as independent pair-wise comparisons are required. The alternative methodology uses complex statistical modelling to include the multiple arm trials and comparisons simultaneously between all competing treatments. These have been executed using Bayesian methods, mixed linear models and meta-regression approaches. Bayesian framework Specifying a Bayesian network meta-analysis model involves writing a directed acyclic graph (DAG) model for general-purpose Markov chain Monte Carlo (MCMC) software such as WinBUGS. In addition, prior distributions have to be specified for a number of the parameters, and the data have to be supplied in a specific format. Together, the DAG, priors, and data form a Bayesian hierarchical model. To complicate matters further, because of the nature of MCMC estimation, overdispersed starting values have to be chosen for a number of independent chains so that convergence can be assessed. Recently, multiple R software packages were developed to simplify the model fitting (e.g., metaBMA and RoBMA) and even implemented in statistical software with graphical user interface (GUI): JASP. Although the complexity of the Bayesian approach limits usage of this methodology, recent tutorial papers are trying to increase accessibility of the methods. Methodology for automation of this method has been suggested but requires that arm-level outcome data are available, and this is usually unavailable. Great claims are sometimes made for the inherent ability of the Bayesian framework to handle network meta-analysis and its greater flexibility. However, this choice of implementation of framework for inference, Bayesian or frequentist, may be less important than other choices regarding the modeling of effects (see discussion on models above). 
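As a concrete illustration of the Bucher-style indirect comparison described in the network meta-analysis section above, the following sketch derives an A-versus-B effect from A-versus-placebo and B-versus-placebo summaries. On an additive scale such as log odds ratios, the indirect effect is the difference of the two direct effects and its variance is the sum of their variances; the numbers used here are made up for illustration.

```python
import math

def bucher_indirect(d_a_pl, se_a_pl, d_b_pl, se_b_pl):
    """Indirect A vs B effect from A vs placebo and B vs placebo (additive scale)."""
    d_ab = d_a_pl - d_b_pl                            # effect of A vs B through the common comparator
    se_ab = math.sqrt(se_a_pl ** 2 + se_b_pl ** 2)    # standard errors add in quadrature
    return d_ab, se_ab

# Hypothetical pooled log odds ratios vs placebo and their standard errors
d_ab, se_ab = bucher_indirect(-0.50, 0.15, -0.20, 0.10)
ci = (d_ab - 1.96 * se_ab, d_ab + 1.96 * se_ab)       # approximate 95% confidence interval
print(d_ab, se_ab, ci)
```

This is the single three-treatment loop; as the text notes, comparing many treatments this way requires repeating such two-by-two comparisons across the network.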
Frequentist multivariate framework On the other hand, the frequentist multivariate methods involve approximations and assumptions that are not stated explicitly or verified when the methods are applied (see discussion on meta-analysis models above). For example, the mvmeta package for Stata enables network meta-analysis in a frequentist framework. However, if there is no common comparator in the network, then this has to be handled by augmenting the dataset with fictional arms with high variance, which is not very objective and requires a decision as to what constitutes a sufficiently high variance. The other issue is use of the random effects model in both this frequentist framework and the Bayesian framework. Senn advises analysts to be cautious about interpreting the 'random effects' analysis since only one random effect is allowed for but one could envisage many. Senn goes on to say that it is rather naıve, even in the case where only two treatments are being compared to assume that random-effects analysis accounts for all uncertainty about the way effects can vary from trial to trial. Newer models of meta-analysis such as those discussed above would certainly help alleviate this situation and have been implemented in the next framework. Generalized pairwise modelling framework An approach that has been tried since the late 1990s is the implementation of the multiple three-treatment closed-loop analysis. This has not been popular because the process rapidly becomes overwhelming as network complexity increases. Development in this area was then abandoned in favor of the Bayesian and multivariate frequentist methods which emerged as alternatives. Very recently, automation of the three-treatment closed loop method has been developed for complex networks by some researchers as a way to make this methodology available to the mainstream research community. This proposal does restrict each trial to two interventions, but also introduces a workaround for multiple arm trials: a different fixed control node can be selected in different runs. It also utilizes robust meta-analysis methods so that many of the problems highlighted above are avoided. Further research around this framework is required to determine if this is indeed superior to the Bayesian or multivariate frequentist frameworks. Researchers willing to try this out have access to this framework through a free software. Tailored meta-analysis Another form of additional information comes from the intended setting. If the target setting for applying the meta-analysis results is known then it may be possible to use data from the setting to tailor the results thus producing a 'tailored meta-analysis'., This has been used in test accuracy meta-analyses, where empirical knowledge of the test positive rate and the prevalence have been used to derive a region in Receiver Operating Characteristic (ROC) space known as an 'applicable region'. Studies are then selected for the target setting based on comparison with this region and aggregated to produce a summary estimate which is tailored to the target setting. Aggregating IPD and AD Meta-analysis can also be applied to combine IPD and AD. This is convenient when the researchers who conduct the analysis have their own raw data while collecting aggregate or summary data from the literature. The generalized integration model (GIM) is a generalization of the meta-analysis. 
It allows that the model fitted on the individual participant data (IPD) is different from the ones used to compute the aggregate data (AD). GIM can be viewed as a model calibration method for integrating information with more flexibility. Validation of meta-analysis results The meta-analysis estimate represents a weighted average across studies and when there is heterogeneity this may result in the summary estimate not being representative of individual studies. Qualitative appraisal of the primary studies using established tools can uncover potential biases, but does not quantify the aggregate effect of these biases on the summary estimate. Although the meta-analysis result could be compared with an independent prospective primary study, such external validation is often impractical. This has led to the development of methods that exploit a form of leave-one-out cross validation, sometimes referred to as internal-external cross validation (IOCV). Here each of the k included studies in turn is omitted and compared with the summary estimate derived from aggregating the remaining k- 1 studies. A general validation statistic, Vn based on IOCV has been developed to measure the statistical validity of meta-analysis results. For test accuracy and prediction, particularly when there are multivariate effects, other approaches which seek to estimate the prediction error have also been proposed. Challenges A meta-analysis of several small studies does not always predict the results of a single large study. Some have argued that a weakness of the method is that sources of bias are not controlled by the method: a good meta-analysis cannot correct for poor design or bias in the original studies. This would mean that only methodologically sound studies should be included in a meta-analysis, a practice called 'best evidence synthesis'. Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size. However, others have argued that a better approach is to preserve information about the variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach. More recently, and under the influence of a push for open practices in science, tools to develop "crowd-sourced" living meta-analyses that are updated by communities of scientists in hopes of making all the subjective choices more explicit. Publication bias: the file drawer problem Another potential pitfall is the reliance on the available body of published studies, which may create exaggerated outcomes due to publication bias, as studies which show negative results or insignificant results are less likely to be published. For example, pharmaceutical companies have been known to hide negative studies and researchers may have overlooked unpublished studies such as dissertation studies or conference abstracts that did not reach publication. This is not easily solved, as one cannot know how many studies have gone unreported. This file drawer problem characterized by negative or non-significant results being tucked away in a cabinet, can result in a biased distribution of effect sizes thus creating a serious base rate fallacy, in which the significance of the published studies is overestimated, as other studies were either not submitted for publication or were rejected. 
This should be seriously considered when interpreting the outcomes of a meta-analysis. The distribution of effect sizes can be visualized with a funnel plot which (in its most common version) is a scatter plot of standard error versus the effect size. It makes use of the fact that the smaller studies (thus larger standard errors) have more scatter of the magnitude of effect (being less precise) while the larger studies have less scatter and form the tip of the funnel. If many negative studies were not published, the remaining positive studies give rise to a funnel plot in which the base is skewed to one side (asymmetry of the funnel plot). In contrast, when there is no publication bias, the effect of the smaller studies has no reason to be skewed to one side and so a symmetric funnel plot results. This also means that if no publication bias is present, there would be no relationship between standard error and effect size. A negative or positive relation between standard error and effect size would imply that smaller studies that found effects in one direction only were more likely to be published and/or to be submitted for publication. Apart from the visual funnel plot, statistical methods for detecting publication bias have also been proposed. These are controversial because they typically have low power for detection of bias, but also may make false positives under some circumstances. For instance small study effects (biased smaller studies), wherein methodological differences between smaller and larger studies exist, may cause asymmetry in effect sizes that resembles publication bias. However, small study effects may be just as problematic for the interpretation of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources of bias. The problem of publication bias is not trivial as it is suggested that 25% of meta-analyses in the psychological sciences may have suffered from publication bias. However, low power of existing tests and problems with the visual appearance of the funnel plot remain an issue, and estimates of publication bias may remain lower than what truly exists. Most discussions of publication bias focus on journal practices favoring publication of statistically significant findings. However, questionable research practices, such as reworking statistical models until significance is achieved, may also favor statistically significant findings in support of researchers' hypotheses. Problems related to studies not reporting non-statistically significant effects Studies often do not report the effects when they do not reach statistical significance. For example, they may simply say that the groups did not show statistically significant differences, without reporting any other information (e.g. a statistic or p-value). Exclusion of these studies would lead to a situation similar to publication bias, but their inclusion (assuming null effects) would also bias the meta-analysis. Problems related to the statistical approach Other weaknesses are that it has not been determined if the statistically most accurate method for combining results is the fixed, IVhet, random or quality effect models, though the criticism against the random effects model is mounting because of the perception that the new random effects (used in meta-analysis) are essentially formal devices to facilitate smoothing or shrinkage and prediction may be impossible or ill-advised. 
The main problem with the random effects approach is that it uses the classic statistical thought of generating a "compromise estimator" that makes the weights close to the naturally weighted estimator if heterogeneity across studies is large but close to the inverse variance weighted estimator if the between study heterogeneity is small. However, what has been ignored is the distinction between the model we choose to analyze a given dataset, and the mechanism by which the data came into being. A random effect can be present in either of these roles, but the two roles are quite distinct. There's no reason to think the analysis model and data-generation mechanism (model) are similar in form, but many sub-fields of statistics have developed the habit of assuming, for theory and simulations, that the data-generation mechanism (model) is identical to the analysis model we choose (or would like others to choose). As a hypothesized mechanisms for producing the data, the random effect model for meta-analysis is silly and it is more appropriate to think of this model as a superficial description and something we choose as an analytical tool – but this choice for meta-analysis may not work because the study effects are a fixed feature of the respective meta-analysis and the probability distribution is only a descriptive tool. Problems arising from agenda-driven bias The most severe fault in meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social, or political agenda such as the passage or defeat of legislation. People with these types of agendas may be more likely to abuse meta-analysis due to personal bias. For example, researchers favorable to the author's agenda are likely to have their studies cherry-picked while those not favorable will be ignored or labeled as "not credible". In addition, the favored authors may themselves be biased or paid to produce results that support their overall political, social, or economic goals in ways such as selecting small favorable data sets and not incorporating larger unfavorable data sets. The influence of such biases on the results of a meta-analysis is possible because the methodology of meta-analysis is highly malleable. A 2011 study done to disclose possible conflicts of interests in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interests in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and three from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed a total of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219 (69%) receiving funding from industry (i.e. one or more authors having financial ties to the pharmaceutical industry). Of the 509 RCTs, 132 reported author conflict of interest disclosures, with 91 studies (69%) disclosing one or more authors having financial ties to industry. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources and none reported RCT author-industry ties. The authors concluded "without acknowledgment of COI due to industry funding or author industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised." 
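To see the "compromise estimator" behaviour numerically, here is a small illustrative sketch (invented effect sizes and variances; the between-study variance values are assumed rather than estimated) showing how random-effects weights w_i = 1/(v_i + τ²) slide from inverse-variance weights towards equal weights as τ² grows — the redistribution from larger to smaller studies discussed earlier.

```python
import numpy as np

# Hypothetical study effects and within-study variances:
# study 1 is large and precise, studies 2-4 are small and noisy.
y = np.array([0.10, 0.45, -0.30, 0.60])
v = np.array([0.01, 0.09, 0.12, 0.16])

for tau2 in (0.0, 0.05, 1.0):           # assumed between-study variance
    w = 1 / (v + tau2)                  # random-effects weights
    w /= w.sum()                        # normalise to compare weight shares directly
    pooled = float(np.sum(w * y))
    print(f"tau^2 = {tau2:<4}  weight shares = {np.round(w, 2)}  pooled effect = {pooled:.3f}")
```

At τ² = 0 this reduces to the fixed-effect estimate dominated by the large study; by τ² = 1 the small studies carry almost the same weight as the large one. In practice τ² would be estimated from the data (the restricted maximum likelihood estimator mentioned above is among the least biased choices) rather than fixed by hand.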
For example, in 1998, a US federal judge found that the United States Environmental Protection Agency had abused the meta-analysis process to produce a study claiming cancer risks to non-smokers from environmental tobacco smoke (ETS) with the intent to influence policy makers to pass smoke-free–workplace laws. Comparability and validity of included studies Meta-analysis may often not be a substitute for an adequately powered primary study, particularly in the biological sciences. Heterogeneity of methods used may lead to faulty conclusions. For instance, differences in the forms of an intervention or the cohorts that are thought to be minor or are unknown to the scientists could lead to substantially different results, including results that distort the meta-analysis' results or are not adequately considered in its data. Vice versa, results from meta-analyses may also make certain hypothesis or interventions seem nonviable and preempt further research or approvals, despite certain modifications – such as intermittent administration, personalized criteria and combination measures – leading to substantially different results, including in cases where such have been successfully identified and applied in small-scale studies that were considered in the meta-analysis. Standardization, reproduction of experiments, open data and open protocols may often not mitigate such problems, for instance as relevant factors and criteria could be unknown or not be recorded. There is a debate about the appropriate balance between testing with as few animals or humans as possible and the need to obtain robust, reliable findings. It has been argued that unreliable research is inefficient and wasteful and that studies are not just wasteful when they stop too late but also when they stop too early. In large clinical trials, planned, sequential analyses are sometimes used if there is considerable expense or potential harm associated with testing participants. In applied behavioural science, "megastudies" have been proposed to investigate the efficacy of many different interventions designed in an interdisciplinary manner by separate teams. One such study used a fitness chain to recruit a large number participants. It has been suggested that behavioural interventions are often hard to compare [in meta-analyses and reviews], as "different scientists test different intervention ideas in different samples using different outcomes over different time intervals", causing a lack of comparability of such individual investigations which limits "their potential to inform policy". Weak inclusion standards lead to misleading conclusions Meta-analyses in education are often not restrictive enough in regards to the methodological quality of the studies they include. For example, studies that include small samples or researcher-made measures lead to inflated effect size estimates. However, this problem also troubles meta-analysis of clinical trials. The use of different quality assessment tools (QATs) lead to including different studies and obtaining conflicting estimates of average treatment effects. Applications in modern science Modern statistical meta-analysis does more than just combine the effect sizes of a set of studies using a weighted average. It can test if the outcomes of studies show more variation than the variation that is expected because of the sampling of different numbers of research participants. 
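The test alluded to in the last sentence — whether studies vary more than sampling error alone can explain — is commonly carried out with Cochran's Q together with the I² statistic. Below is a minimal sketch with invented effect sizes and variances; it assumes SciPy is available for the chi-squared tail probability.

```python
import numpy as np
from scipy import stats

# Hypothetical study effect sizes and within-study variances
y = np.array([0.12, 0.35, -0.08, 0.50, 0.22])
v = np.array([0.02, 0.08, 0.05, 0.15, 0.04])

w = 1 / v
mu_fixed = np.sum(w * y) / w.sum()        # fixed-effect pooled estimate
Q = np.sum(w * (y - mu_fixed) ** 2)       # Cochran's Q statistic
df = len(y) - 1
p_value = stats.chi2.sf(Q, df)            # excess variation beyond sampling error?
I2 = max(0.0, (Q - df) / Q) * 100         # Higgins' I^2: % of variation due to heterogeneity

print(f"Q = {Q:.2f} on {df} df, p = {p_value:.3f}, I^2 = {I2:.0f}%")
```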
Additionally, study characteristics such as measurement instrument used, population sampled, or aspects of the studies' design can be coded and used to reduce variance of the estimator (see statistical models above). Thus some methodological weaknesses in studies can be corrected statistically. Other uses of meta-analytic methods include the development and validation of clinical prediction models, where meta-analysis may be used to combine individual participant data from different research centers and to assess the model's generalisability, or even to aggregate existing prediction models. Meta-analysis can be done with single-subject design as well as group research designs. This is important because much research has been done with single-subject research designs. Considerable dispute exists for the most appropriate meta-analytic technique for single subject research. Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed "meta-analytic thinking". The results of a meta-analysis are often shown in a forest plot. Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel–Haenszel method and the Peto method. Seed-based d mapping (formerly signed differential mapping, SDM) is a statistical technique for meta-analyzing studies on differences in brain activity or structure which used neuroimaging techniques such as fMRI, VBM or PET. Different high throughput techniques such as microarrays have been used to understand Gene expression. MicroRNA expression profiles have been used to identify differentially expressed microRNAs in particular cell or tissue type or disease conditions or to check the effect of a treatment. A meta-analysis of such expression profiles was performed to derive novel conclusions and to validate the known findings. Meta-analysis of whole genome sequencing studies provides an attractive solution to the problem of collecting large sample sizes for discovering rare variants associated with complex phenotypes. Some methods have been developed to enable functionally informed rare variant association meta-analysis in biobank-scale cohorts using efficient approaches for summary statistic storage. Sweeping meta-analyses can also be used to estimate a network of effects. This allows researchers to examine patterns in the fuller panorama of more accurately estimated results and draw conclusions that consider the broader context (e.g., how personality-intelligence relations vary by trait family).
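For the Mantel–Haenszel method named above, a minimal sketch with hypothetical 2×2 tables looks like the following; the pooled odds ratio effectively weights each study by b·c/n, which behaves well even with sparse counts.

```python
import math

# Hypothetical 2x2 tables per study: (events_treat, n_treat, events_ctrl, n_ctrl)
studies = [(12, 100, 20, 100),
           (8,  60,  14, 62),
           (30, 250, 41, 245)]

num = 0.0   # running sum of a*d/n
den = 0.0   # running sum of b*c/n
for a, n1, c, n2 in studies:
    b = n1 - a          # non-events, treatment arm
    d = n2 - c          # non-events, control arm
    n = n1 + n2
    num += a * d / n
    den += b * c / n

or_mh = num / den
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}  (log OR = {math.log(or_mh):.2f})")
```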
https://en.wikipedia.org/wiki/Houndshark
Houndshark
The Triakidae or houndsharks are a family of ground sharks, consisting of about 40 species in nine genera. In some classifications, the family is split into two subfamilies, with the genera Mustelus, Scylliogaleus and Triakis in the subfamily Triakinae, and the remaining genera in the subfamily Galeorhininae. Houndsharks are distinguished by possessing two large, spineless dorsal fins, an anal fin and oval eyes with nictitating eyelids. They are small to medium in size. They are found throughout the world in warm and temperate waters, where they feed on small fish and invertebrates on the seabed and in midwater.
Genera
Houndsharks are classified into subfamilies and genera as follows:
Galeorhininae Gill, 1862
  Furgaleus Whitley, 1951 (whiskery shark)
  Galeorhinus Blainville, 1816 (school shark)
  Gogolia Compagno, 1973 (sailback houndshark)
  Hemitriakis Herre, 1923
  Hypogaleus J. L. B. Smith, 1957 (blacktip tope)
  Iago Compagno & Springer, 1971
Triakinae Gray, 1851
  Mustelus H. F. Linck, 1790 (smooth-hound)
  Scylliogaleus Boulenger, 1902
  Triakis J. P. Müller & Henle, 1839
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20emission%20mechanisms
Gamma-ray burst emission mechanisms
Gamma-ray burst emission mechanisms are theories that explain how the energy from a gamma-ray burst progenitor (regardless of the actual nature of the progenitor) is turned into radiation. These mechanisms are a major topic of research as of 2007. Neither the light curves nor the early-time spectra of GRBs show resemblance to the radiation emitted by any familiar physical process.

Compactness problem
It has been known for many years that ejection of matter at relativistic velocities (velocities very close to the speed of light) is a necessary requirement for producing the emission in a gamma-ray burst. GRBs vary on such short timescales (as short as milliseconds) that the size of the emitting region must be very small, or else the time delay due to the finite speed of light would "smear" the emission out in time, wiping out any short-timescale behavior. At the energies involved in a typical GRB, so much energy crammed into such a small space would make the system opaque to photon-photon pair production, making the burst far less luminous and also giving it a very different spectrum from what is observed. However, if the emitting system is moving towards Earth at relativistic velocities, the burst is compressed in time (as seen by an Earth observer, due to the relativistic Doppler effect) and the emitting region inferred from the finite speed of light becomes much smaller than the true size of the GRB (see relativistic beaming).

GRBs and internal shocks
A related constraint is imposed by the relative timescales seen in some bursts between the short-timescale variability and the total length of the GRB. Often this variability timescale is far shorter than the total burst length. For example, in bursts as long as 100 seconds, the majority of the energy can be released in short episodes less than 1 second long. If the GRB were due to matter moving towards Earth (as the relativistic motion argument enforces), it is hard to understand why it would release its energy in such brief interludes. The generally accepted explanation for this is that these bursts involve the collision of multiple shells traveling at slightly different velocities; so-called "internal shocks". The collision of two thin shells flash-heats the matter, converting enormous amounts of kinetic energy into the random motion of particles, greatly amplifying the energy release due to all emission mechanisms. Which physical mechanisms are at play in producing the observed photons is still an area of debate, but the most likely candidates appear to be synchrotron radiation and inverse Compton scattering. As of 2007 there is no theory that has successfully described the spectrum of all gamma-ray bursts (though some theories work for a subset). However, the so-called Band function (named after David Band) has been fairly successful at fitting, empirically, the spectra of most gamma-ray bursts:

$$N(E) = A\left(\frac{E}{100\,\mathrm{keV}}\right)^{\alpha}\exp\!\left(-\frac{E}{E_0}\right) \quad \text{for } E \le (\alpha-\beta)E_0,$$
$$N(E) = A\left[\frac{(\alpha-\beta)E_0}{100\,\mathrm{keV}}\right]^{\alpha-\beta}\exp(\beta-\alpha)\left(\frac{E}{100\,\mathrm{keV}}\right)^{\beta} \quad \text{for } E \ge (\alpha-\beta)E_0,$$

where $\alpha$ and $\beta$ are the low- and high-energy spectral indices and $E_0$ is a characteristic break energy.

A few gamma-ray bursts have shown evidence for an additional, delayed emission component at very high energies (GeV and higher). One theory for this emission invokes inverse Compton scattering. If a GRB progenitor, such as a Wolf-Rayet star, were to explode within a stellar cluster, the resulting shock wave could generate gamma-rays by scattering photons from neighboring stars. About 30% of known galactic Wolf-Rayet stars are located in dense clusters of O stars with intense ultraviolet radiation fields, and the collapsar model suggests that WR stars are likely GRB progenitors.
Therefore, a substantial fraction of GRBs are expected to occur in such clusters. As the relativistic matter ejected from an explosion slows and interacts with ultraviolet-wavelength photons, some photons gain energy, generating gamma-rays.

Afterglows and external shocks
The GRB itself is very rapid, lasting from less than a second up to a few minutes at most. Once it disappears, it leaves behind a counterpart at longer wavelengths (X-ray, UV, optical, infrared, and radio) known as the afterglow that generally remains detectable for days or longer. In contrast to the GRB emission, the afterglow emission is not believed to be dominated by internal shocks. In general, all the ejected matter has by this time coalesced into a single shell traveling outward into the interstellar medium (or possibly the stellar wind) around the star. At the front of this shell of matter is a shock wave referred to as the "external shock" as the still relativistically moving matter ploughs into the tenuous interstellar gas or the gas surrounding the star. As the interstellar matter moves across the shock, it is immediately heated to extreme temperatures. (How this happens is still poorly understood as of 2007, since the particle density across the shock wave is too low to create a shock wave comparable to those familiar in dense terrestrial environments – the topic of "collisionless shocks" is still largely hypothesis but seems to accurately describe a number of astrophysical situations. Magnetic fields are probably critically involved.) These particles, now relativistically moving, encounter a strong local magnetic field and are accelerated perpendicular to the magnetic field, causing them to radiate their energy via synchrotron radiation. Synchrotron radiation is well understood, and the afterglow spectrum has been modeled fairly successfully using this template. It is generally dominated by electrons (which move and therefore radiate much faster than protons and other particles), so radiation from other particles is generally ignored.

In general, the afterglow spectrum takes the form of a power-law with three break points (and therefore four different power-law segments). The lowest break point, the self-absorption frequency $\nu_a$, corresponds to the frequency below which the source is opaque to its own radiation, so below it the spectrum attains the form of the Rayleigh-Jeans tail of blackbody radiation. The two other break points, $\nu_m$ and $\nu_c$, are related to the minimum energy acquired by an electron after it crosses the shock wave and to the time it takes an electron to radiate most of its energy, respectively. Depending on which of these two frequencies is higher, two different regimes are possible:

Fast cooling ($\nu_c < \nu_m$) – Shortly after the GRB, the shock wave imparts immense energy to the electrons and the minimum electron Lorentz factor is very high. In this case, the spectrum above $\nu_a$ looks like:
$$F_\nu \propto \begin{cases} (\nu/\nu_c)^{1/3} & \nu < \nu_c \\ (\nu/\nu_c)^{-1/2} & \nu_c < \nu < \nu_m \\ (\nu_m/\nu_c)^{-1/2}\,(\nu/\nu_m)^{-p/2} & \nu > \nu_m \end{cases}$$

Slow cooling ($\nu_m < \nu_c$) – Later after the GRB, the shock wave has slowed down and the minimum electron Lorentz factor is much lower:
$$F_\nu \propto \begin{cases} (\nu/\nu_m)^{1/3} & \nu < \nu_m \\ (\nu/\nu_m)^{-(p-1)/2} & \nu_m < \nu < \nu_c \\ (\nu_c/\nu_m)^{-(p-1)/2}\,(\nu/\nu_c)^{-p/2} & \nu > \nu_c \end{cases}$$

Here $p$ is the power-law index of the shock-accelerated electron energy distribution.

The afterglow changes with time. It must fade, obviously, but the spectrum changes as well. For the simplest case of adiabatic expansion into a uniform-density medium, the critical parameters evolve as:
$$\nu_c \propto t^{-1/2}, \qquad \nu_m \propto t^{-3/2}, \qquad F_{\nu,\mathrm{max}} = \mathrm{const},$$
where $F_{\nu,\mathrm{max}}$ is the flux at the current peak frequency of the afterglow spectrum. (During fast cooling this is at $\nu_c$; during slow cooling it is at $\nu_m$.) Note that because $\nu_m$ drops faster than $\nu_c$, the system eventually switches from fast cooling to slow cooling.
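To make the broken power-law shapes above concrete, here is a small sketch that evaluates the fast- and slow-cooling spectra for assumed break frequencies and electron index; all numerical values are illustrative, and the self-absorbed segment below $\nu_a$ is omitted for brevity.

```python
import numpy as np

def afterglow_spectrum(nu, nu_m, nu_c, p=2.5, f_max=1.0):
    """Piecewise synchrotron spectrum F_nu for the standard fast/slow-cooling cases.

    nu_m : injection (minimum-electron-energy) break frequency
    nu_c : cooling break frequency
    The self-absorbed regime below nu_a is ignored here.
    """
    nu = np.asarray(nu, dtype=float)
    f = np.empty_like(nu)
    if nu_c < nu_m:   # fast cooling
        lo, mid, hi = nu < nu_c, (nu >= nu_c) & (nu < nu_m), nu >= nu_m
        f[lo]  = f_max * (nu[lo] / nu_c) ** (1 / 3)
        f[mid] = f_max * (nu[mid] / nu_c) ** (-1 / 2)
        f[hi]  = f_max * (nu_m / nu_c) ** (-1 / 2) * (nu[hi] / nu_m) ** (-p / 2)
    else:             # slow cooling
        lo, mid, hi = nu < nu_m, (nu >= nu_m) & (nu < nu_c), nu >= nu_c
        f[lo]  = f_max * (nu[lo] / nu_m) ** (1 / 3)
        f[mid] = f_max * (nu[mid] / nu_m) ** (-(p - 1) / 2)
        f[hi]  = f_max * (nu_c / nu_m) ** (-(p - 1) / 2) * (nu[hi] / nu_c) ** (-p / 2)
    return f

# Illustrative slow-cooling spectrum across six decades in frequency
freqs = np.logspace(9, 15, 7)
print(afterglow_spectrum(freqs, nu_m=1e12, nu_c=1e14))
```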
Different scalings are derived for radiative evolution and for a non-constant-density environment (such as a stellar wind), but share the general power-law behavior observed in this case. Several other known effects can modify the evolution of the afterglow: Reverse shocks and the optical flash There can be "reverse shocks", which propagate back into the shocked matter once it begins to encounter the interstellar medium. The twice-shocked material can produce a bright optical/UV flash, which has been seen in a few GRBs, though it appears not to be a common phenomenon. Refreshed shocks and late-time flares There can be "refreshed" shocks if the central engine continues to release fast-moving matter in small amounts even out to late times, these new shocks will catch up with the external shock to produce something like a late-time internal shock. This explanation has been invoked to explain the frequent flares seen in X-rays and at other wavelengths in many bursts, though some theorists are uncomfortable with the apparent demand that the progenitor (which one would think would be destroyed by the GRB) remains active for very long. Jet effects Gamma-ray burst emission is believed to be released in jets, not spherical shells. Initially the two scenarios are equivalent: the center of the jet is not "aware" of the jet edge, and due to relativistic beaming we only see a small fraction of the jet. However, as the jet slows down, two things eventually occur (each at about the same time): First, information from the edge of the jet that there is no pressure to the side propagates to its center, and the jet matter can spread laterally. Second, relativistic beaming effects subside, and once Earth observers see the entire jet the widening of the relativistic beam is no longer compensated by the fact that we see a larger emitting region. Once these effects appear the jet fades very rapidly, an effect that is visible as a power-law "break" in the afterglow light curve. This is the so-called "jet break" that has been seen in some events and is often cited as evidence for the consensus view of GRBs as jets. Many GRB afterglows do not display jet breaks, especially in the X-ray, but they are more common in the optical light curves. Though as jet breaks generally occur at very late times (~1 day or more) when the afterglow is quite faint, and often undetectable, this is not necessarily surprising. Dust extinction and hydrogen absorption There may be dust along the line of sight from the GRB to Earth, both in the host galaxy and in the Milky Way. If so, the light will be attenuated and reddened and an afterglow spectrum may look very different from that modeled. At very high frequencies (far-ultraviolet and X-ray) interstellar hydrogen gas becomes a significant absorber. In particular, a photon with a wavelength of less than 91 nanometers is energetic enough to completely ionize neutral hydrogen and is absorbed with almost 100% probability even through relatively thin gas clouds. (At much shorter wavelengths the probability of absorption begins to drop again, which is why X-ray afterglows are still detectable.) As a result, observed spectra of very high-redshift GRBs often drop to zero at wavelengths less than that of where this hydrogen ionization threshold (known as the Lyman break) would be in the GRB host's reference frame. Other, less dramatic hydrogen absorption features are also commonly seen in high-z GRBs, such as the Lyman alpha forest.
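As a worked example of the hydrogen-absorption cutoff just described, the observed wavelength of the Lyman break simply scales as 91.2 nm × (1 + z); the snippet below evaluates it for a few assumed redshifts (the redshift values are illustrative only).

```python
LYMAN_LIMIT_NM = 91.2   # rest-frame hydrogen ionization edge (Lyman limit)

for z in (2.0, 6.0, 9.0):   # assumed GRB redshifts, purely illustrative
    lam_obs = LYMAN_LIMIT_NM * (1 + z)
    print(f"z = {z}: observed spectrum drops to ~zero below about {lam_obs:.0f} nm")
```

At z = 9 the break already sits near 900 nm, which is why the afterglows of the most distant bursts effectively disappear from optical images and must be chased in the near-infrared.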
https://en.wikipedia.org/wiki/Alaska%20pollock
Alaska pollock
The Alaska pollock or walleye pollock (Gadus chalcogrammus) is a marine fish species of the cod genus Gadus and family Gadidae. It is a semi-pelagic schooling fish widely distributed in the North Pacific, with largest concentrations found in the eastern Bering Sea. Name and differentiation Alaska pollock was in 1956 put in its own genus, Theragra, and classified as Theragra chalcogramma, but research in 2008 has shown it is rather closely related to the Atlantic cod and should therefore be moved back to Gadus, where it was originally placed. In 2014, the U.S. Food and Drug Administration announced that the official scientific name for the fish was changed from Theragra chalcogramma back to its original taxon Gadus chalcogrammus, highlighting its close genetic relationship to the other species of the cod genus Gadus. Since 2014, registries of scientific names for fish species (e.g. the United Nations' ASFIS list and the World Registry of Marine Species (WoRMS) have largely adopted the Gadus chalcogrammus name. The change of the official scientific name was followed by a discussion to change the common name as well, to highlight the fish as a member of the cod genus. The common names "Alaska pollock" and "walleye pollock", both used as trade names internationally, are considered misleading by scientific and trade experts, as the names do not reflect the scientific classification. While belonging to the same family as the Atlantic pollock, the Alaska pollock is not a member of the genus Pollachius, but of the cod genus Gadus. Nevertheless, alternative trade names highlighting its placement in the cod genus, such as "snow cod", "bigeye cod", or direct deductions from the scientific names such as "copperline cod" (gadus meaning 'cod', from meaning 'copper', and meaning 'line') or "lesser cod" (from the synonymous taxon Gadus minor) have yet to find widespread acceptance. The National Oceanic and Atmospheric Administration even states that "[the common name] might never change, as common names are separate from scientific names". In addition, Norwegian pollock (Theragra finnmarchica), a rare fish of Norwegian waters, is likely the same species as the Alaska pollock. Ecology The speckled coloring of Alaska pollock makes it more difficult for predators to see them when they are near sandy ocean floors. They are a relatively fast-growing and short-lived species, currently representing a major biological component of the Bering Sea ecosystem. It has been found that catches of Alaska pollock go up three years after stormy summers. The storms stir up nutrients, and this results in phytoplankton being plentiful for longer, which in turn allows more pollock hatchlings to survive. The Alaska pollock has well-developed drumming muscles that the fish use to produce sounds during courtship, like many other gadids. Foraging behavior The primary factor in determining the foraging behavior of the Alaskan pollock is age. Young pollocks can be divided into two sub-groups, fish with lengths below and fish greater than 60mm. Both groups mainly feed on copepods. However, the latter group will also forage for krill. Therefore, food depletion has a larger effect on smaller pollock. The variation in size of each subgroup also affects seasonal foraging behavior. During the winter, when food is scarce, foraging can be costly due to the fact that longer hunting time increases the risk of meeting a predator. 
The larger young pollocks have no need to hunt during the winter because they have a higher capacity for energy storage, while smaller fish do not, and have to continue foraging, putting them at greater risk. To maximize their chances of survival, large pollock increase their calorie intake in autumn to gain weight, while smaller ones focus solely on growing in size. Alaskan pollock exhibit diel vertical migration, following the seasonal movement of their food. Although pollocks exhibit vertical movement during the day, their average depth changes with the seasons. Originally, the change in depth was attributed to the amount of light or water temperature, but in fact, it follows the movement of food species. In August, when food is abundantly available near the surface, pollocks will be found at shallower depths. In November, they are found deeper along with their planktonic food source. Distribution Alaska pollock in the Pacific Ocean The Alaska pollock's main habitats are the coastal areas of the Northern Pacific, especially the waters off Alaska (Eastern Bering Sea, Gulf of Alaska, Aleutian Islands) as well as off Russia, Japan and Korea (Western Bering Sea and Sea of Okhotsk). The largest concentrations of Alaska pollock are found in the eastern Bering Sea. Small populations in the Arctic Ocean (Barents Sea) Very small populations of fish genetically identical to Gadus chalcogrammus are found in the Barents Sea waters of northern Norway and Russia. This fish was initially described as its own species under the taxon Theragra finnmarchica by Norwegian zoologist Einar Koefoed in 1956. The common name used for the fish was "Norway pollock". Genetic analyses have shown that the fish is genetically identical to the Alaska pollock. It is therefore considered to be conspecific with the Pacific species and is attributed to Gadus chalcogrammus. The history of the species in the Barents Sea is unknown. The initial specification as an own species by Koefoed was based on two specimens landed in Berlevåg, northern Norway, in 1932 (hence the Norwegian name, ). Based on morphological differences, Koefoed considered Theragra finnmarchica a new species, related to but separate from the Alaska pollock. Just seven specimens of the fish are known to have been caught between 1957 and early 2002 in the Arctic Ocean. In 2003 and 2004, 31 new specimens were caught. All specimens were large ( in total length) and caught in the coastal waters between Vesterålen in the west and Varangerfjord in the east. By 2006, 54 individuals had been recorded. Sequencing of mitochondrial DNA of two specimens of Theragra finnmarchica and 10 Theragra chalcogramma (today: Gadus chalcogrammus) revealed no significant genetic differences, leading Ursvik et al. to suggest that T. finnmarchica and T. chalcogramma are the same species. An analysis of a much larger sample size (44 T. finnmarchica and 20 T. chalcogramma) using both genetic and morphological methods led to similar conclusions. While the putative species could not be separated genetically, they showed some consistent differences in morphology. Only one characteristic showed no overlap. Byrkjedal et al. conclude that T. finnmarchica should be considered a junior synonym of T. chalcogramma. These analyses also suggest that T. finnmarchica is a near relative of the Atlantic cod, and that both Alaska and Norway pollock should be moved to genus Gadus. 
Norway pollock (Theragra finnmarchica) was listed as Near Threatened in the 2010 Norwegian Red List for Species based on criteria D1: "Very small or geographically very restricted population: Number of mature individuals". The IUCN Red List currently lists Alaskan pollock as Near Threatened in Europe. Fisheries The Alaska pollock has been said to be "the largest remaining source of palatable fish in the world". Around of Alaska pollock are caught each year in the North Pacific, from Alaska to northern Japan. Alaska pollock is the world's second most important fish species, after the Peruvian anchoveta, in terms of total catch. Alaska pollock landings are the largest of any single fish species in the U.S, with the average annual Eastern Bering Sea catch between 1979 and 2022 being 1.26 million metric tons. Alaska pollock catches from U.S. fisheries have been relatively consistent at approximately 1.3 million tons a year, on average 92 percent from the Bering Sea and 8 percent from the Gulf of Alaska. Each year's quota is adjusted based on stock assessments conducted by the Alaska Fisheries Science Center to prevent overfishing. For example quotas were reduced from 2008 to 2010 in the Bering Sea due to stock declines. Independent certification groups have hailed the fishery as an example of good management. For example, the Gulf of Alaska and Bering Sea/Aleutian Islands fisheries were separately certified as "sustainable" by the Marine Stewardship Council (MSC) in 2005, and were certified in 2010 and 2016.  The fisheries received a combined re-certification in 2020. The Marine Conservation Society rates Alaska pollock harvested from the Gulf of Alaska, Bering Sea, and Aleutian Islands as sustainable, but not those from the Western Bering Sea in Russian waters. In 2021, the MSC awarded the U.S. trade associations Association of Genuine Alaska Pollock Producers (GAPP) and At-Sea Processors Association with its Ocean Champion Award that recognizes organizations for meeting MSC's commitment to a healthier ocean and a more transparent supply chain. The MSC recognized Alaska pollock from U.S. fisheries as one of the "healthiest" and "most sustainable sources" of protein. As food Compared to other cod species and pollock, Alaska pollock has a milder taste, whiter color and lower oil content. Fillets High-quality, single-frozen whole Alaska pollock fillets may be layered into a block mold and deep-frozen to produce fish blocks that are used throughout Europe and North America as the raw material for high-quality breaded and battered fish products. Lower-quality, double-frozen fillets or minced trim pieces may also be frozen in block forms and used as raw material for lower-quality, low-cost breaded and battered fish sticks and portions. Alaska pollock is commonly used in the fast food industry in products such as McDonald's Filet-O-Fish sandwich, Burger King Big Fish Sandwich, Wendy's Crispy Panko Fish Sandwich, Arby's King's Hawaiian Fish Deluxe, Arby's Crispy Fish Sandwich, Arby's Spicy Fish Sandwich, Long John Silver's Baja Fish Taco, Bojangles Bojangular, Birds Eye's Fish Fingers in Crispy Batter, 7-Eleven's Fish Bites, White Castle's Panko Breaded Fish Sliders, and Captain D's Seafood Kitchen. Some of these items are seasonal offerings during Lent, when seafood demand is higher. Surimi Single-frozen Alaska pollock is considered to be the premier raw material for surimi. 
The most common use of surimi in the United States is "imitation crabmeat", though it is often seen labeled in retailers and grocers as "surimi seafood" sticks, flakes, or chunks. There are five main forms of surimi seafood: chunk, leg, flake, salad, and shred. Surimi made from minced Alaska Pollock retains the aforementioned carbon footprint advantage. Pollock roe Pollock roe is a popular culinary ingredient in Korea, Japan, and Russia. In Korea, the roe is called (, literally 'Alaska pollock's roe'), and the salted roe is called (, literally 'pollock roe jeotgal). The food was introduced to Japan after World War II, and since has been called () in Japanese. A milder, less spicy version is usually called (, literally 'cod's roe'), which is also the Japanese name for pollock roe itself. In Russia, pollock roe is consumed as a sandwich spread. The product, resembling liquid paste due to the small size of eggs and oil added, is sold canned. Use as food in Korea Alaska pollock is considered the "national fish" of Korea. The Korean name of the fish, (), has also spread to some neighbouring countries: it is called () in Russia and its roe is called () in Japan, although the Japanese name for the fish itself is (). In Korea, is called thirty-odd additional names, including (, fresh), (, frozen), (, dried), (, dried in winter with repeated freezing and thawing), (, dried young), and (, half-dried young). Koreans have been eating Alaska pollock since the Joseon era. One of the earliest mentions is from Seungjeongwon ilgi (Journal of the Royal Secretariat), where a 1652 entry stated: "The management administration should be strictly interrogated for bringing in pollock roe instead of cod roe." Alaska pollocks were the most commonly caught fish in Korea in 1940, when more than 270,000 tonnes were caught from the Sea of Japan. It outnumbers the current annual consumption of Alaska pollock in South Korea, estimated at 260,000 tonnes in 2016. Nowadays, however, Alaska pollock consumption in South Korea rely heavily on import from Russia, due to rises in sea water temperatures. In 2019, South Korea imposed a total ban on pollock fishing "to help replenish depleted stocks" of the fish.
https://en.wikipedia.org/wiki/Gamma-ray%20burst%20progenitors
Gamma-ray burst progenitors
Gamma-ray burst progenitors are the types of celestial objects that can emit gamma-ray bursts (GRBs). GRBs show an extraordinary degree of diversity. They can last anywhere from a fraction of a second to many minutes. Bursts could have a single profile or oscillate wildly up and down in intensity, and their spectra are highly variable unlike other objects in space. The near complete lack of observational constraint led to a profusion of theories, including evaporating black holes, magnetic flares on white dwarfs, accretion of matter onto neutron stars, antimatter accretion, supernovae, hypernovae, and rapid extraction of rotational energy from supermassive black holes, among others. There are at least two different types of progenitors (sources) of GRBs: one responsible for the long-duration, soft-spectrum bursts and one (or possibly more) responsible for short-duration, hard-spectrum bursts. The progenitors of long GRBs are believed to be massive, low-metallicity stars exploding due to the collapse of their cores. The progenitors of short GRBs are thought to arise from mergers of compact binary systems like neutron stars, which was confirmed by the GW170817 observation of a neutron star merger and a kilonova. Long GRBs: massive stars Collapsar model As of 2007, there is almost universal agreement in the astrophysics community that the long-duration bursts are associated with the deaths of massive stars in a specific kind of supernova-like event commonly referred to as a collapsar or hypernova. Very massive stars are able to fuse material in their centers all the way to iron, at which point a star cannot continue to generate energy by fusion and collapses, in this case, immediately forming a black hole. Matter from the star around the core rains down towards the center and (for rapidly rotating stars) swirls into a high-density accretion disk. The infall of this material into the black hole drives a pair of jets out along the rotational axis, where the matter density is much lower than in the accretion disk, towards the poles of the star at velocities approaching the speed of light, creating a relativistic shock wave at the front. If the star is not surrounded by a thick, diffuse hydrogen envelope, the jets' material can pummel all the way to the stellar surface. The leading shock actually accelerates as the density of the stellar matter it travels through decreases, and by the time it reaches the surface of the star it may be traveling with a Lorentz factor of 100 or higher (that is, a velocity of 0.9999 times the speed of light). Once it reaches the surface, the shock wave breaks out into space, with much of its energy released in the form of gamma-rays. Three very special conditions are required for a star to evolve all the way to a gamma-ray burst under this theory: the star must be very massive (probably at least 40 Solar masses on the main sequence) to form a central black hole in the first place, the star must be rapidly rotating to develop an accretion torus capable of launching jets, and the star must have low metallicity in order to strip off its hydrogen envelope so the jets can reach the surface. As a result, gamma-ray bursts are far rarer than ordinary core-collapse supernovae, which only require that the star be massive enough to fuse all the way to iron. Evidence for the collapsar view This consensus is based largely on two lines of evidence. 
First, long gamma-ray bursts are found without exception in systems with abundant recent star formation, such as in irregular galaxies and in the arms of spiral galaxies. This is strong evidence of a link to massive stars, which evolve and die within a few hundred million years and are never found in regions where star formation has long ceased. This does not necessarily prove the collapsar model (other models also predict an association with star formation) but does provide significant support. Second, there are now several observed cases where a supernova has immediately followed a gamma-ray burst. While most GRBs occur too far away for current instruments to have any chance of detecting the relatively faint emission from a supernova at that distance, for lower-redshift systems there are several well-documented cases where a GRB was followed within a few days by the appearance of a supernova. These supernovae that have been successfully classified are type Ib/c, a rare class of supernova caused by core collapse. Type Ib and Ic supernovae lack hydrogen absorption lines, consistent with the theoretical prediction of stars that have lost their hydrogen envelope. The GRBs with the most obvious supernova signatures include GRB 060218 (SN 2006aj), GRB 030329 (SN 2003dh), and GRB 980425 (SN 1998bw), and a handful of more distant GRBs show supernova "bumps" in their afterglow light curves at late times. Possible challenges to this theory emerged recently, with the discovery of two nearby long gamma-ray bursts that lacked the signature of any type of supernova: both GRB060614 and GRB 060505 defied predictions that a supernova would emerge despite intense scrutiny from ground-based telescopes. Both events were, however, associated with actively star-forming stellar populations. One possible explanation is that during the core collapse of a very massive star a black hole can form, which then 'swallows' the entire star before the supernova blast can reach the surface. Short GRBs: degenerate binary systems Short gamma-ray bursts appear to be an exception. Until 2007, only a handful of these events have been localized to a definite galactic host. However, those that have been localized appear to show significant differences from the long-burst population. While at least one short burst has been found in the star-forming central region of a galaxy, several others have been associated with the outer regions and even the outer halo of large elliptical galaxies in which star formation has nearly ceased. All the hosts identified so far have also been at low redshift. Furthermore, despite the relatively nearby distances and detailed follow-up study for these events, no supernova has been associated with any short GRB. Neutron star and neutron star/black hole mergers While the astrophysical community has yet to settle on a single, universally favored model for the progenitors of short GRBs, the generally preferred model is the merger of two compact objects as a result of gravitational inspiral: two neutron stars, or a neutron star and a black hole. While thought to be rare in the Universe, a small number of cases of close neutron star - neutron star binaries are known in our Galaxy, and neutron star - black hole binaries are believed to exist as well. 
According to Einstein's theory of general relativity, systems of this nature will slowly lose energy due to gravitational radiation and the two degenerate objects will spiral closer and closer together, until in the last few moments, tidal forces rip the neutron star (or stars) apart and an immense amount of energy is liberated before the matter plunges into a single black hole. The whole process is believed to occur extremely quickly and be completely over within a few seconds, accounting for the short nature of these bursts. Unlike long-duration bursts, there is no conventional star to explode and therefore no supernova. This model has been well-supported so far by the distribution of short GRB host galaxies, which have been observed in old galaxies with no star formation (for example, GRB050509B, the first short burst to be localized to a probable host) as well as in galaxies with star formation still occurring (such as GRB050709, the second), as even younger-looking galaxies can have significant populations of old stars. However, the picture is clouded somewhat by the observation of X-ray flaring in short GRBs out to very late times (up to many days), long after the merger should have been completed, and the failure to find nearby hosts of any sort for some short GRBs. Magnetar giant flares One final possible model that may describe a small subset of short GRBs are the so-called magnetar giant flares (also called megaflares or hyperflares). Early high-energy satellites discovered a small population of objects in the Galactic plane that frequently produced repeated bursts of soft gamma-rays and hard X-rays. Because these sources repeat and because the explosions have very soft (generally thermal) high-energy spectra, they were quickly realized to be a separate class of object from normal gamma-ray bursts and excluded from subsequent GRB studies. However, on rare occasions these objects, now believed to be extremely magnetized neutron stars and sometimes termed magnetars, are capable of producing extremely luminous outbursts. The most powerful such event observed to date, the giant flare of 27 December 2004, originated from the magnetar SGR 1806-20 and was bright enough to saturate the detectors of every gamma-ray satellite in orbit and significantly disrupted Earth's ionosphere. While still significantly less luminous than "normal" gamma-ray bursts (short or long), such an event would be detectable to current spacecraft from galaxies as far as the Virgo cluster and, at this distance, would be difficult to distinguish from other types of short gamma-ray burst on the basis of the light curve alone. To date, three gamma-ray bursts have been associated with SGR flares in galaxies beyond the Milky Way: GRB 790305b in the Large Magellanic Cloud, GRB 051103 from M81 and GRB 070201 from M31. Diversity in the origin of long GRBs HETE II and Swift observations reveal that long gamma-ray bursts come with and without supernovae, and with and without pronounced X-ray afterglows. It gives a clue to a diversity in the origin of long GRBs, possibly in- and outside of star-forming regions, with otherwise a common inner engine. The timescale of tens of seconds of long GRBs hereby appears to be intrinsic to their inner engine, for example, associated with a viscous or a dissipative process. 
The most powerful stellar mass transient sources are the above-mentioned progenitors (collapsars and mergers of compact objects), all producing rotating black holes surrounded by debris in the form of an accretion disk or torus. A rotating black hole carries spin-energy in angular momentum as does a spinning top: where and denote the moment of inertia and the angular velocity of the black hole in the trigonometric expression for the specific angular momentum of a Kerr black hole of mass . With no small parameter present, it has been well-recognized that the spin energy of a Kerr black hole can reach a substantial fraction (29%) of its total mass-energy , thus holding promise to power the most remarkable transient sources in the sky. Of particular interest are mechanisms for producing non-thermal radiation by the gravitational field of rotating black holes, in the process of spin-down against their surroundings in aforementioned scenarios. By Mach's principle, spacetime is dragged along with mass-energy, with the distant stars on cosmological scales or with a black hole in close proximity. Thus, matter tends to spin-up around rotating black holes, for the same reason that pulsars spin down by shedding angular momentum in radiation to infinity. A major amount of spin-energy of rapidly spinning black holes can thereby be released in a process of viscous spin-down against an inner disk or torus—into various emission channels. Spin-down of rapidly spinning stellar mass black holes in their lowest energy state takes tens of seconds against an inner disk, representing the remnant debris of the merger of two neutron stars, the break-up of a neutron star around a companion black hole or formed in core-collapse of a massive star. Forced turbulence in the inner disk stimulates the creation of magnetic fields and multipole mass-moments, thereby opening radiation channels in radio, neutrinos and, mostly, in gravitational waves with distinctive chirps shown in the diagram with the creation of astronomical amounts of Bekenstein-Hawking entropy. Transparency of matter to gravitational waves offers a new probe to the inner-most workings of supernovae and GRBs. The gravitational-wave observatories LIGO and Virgo are designed to probe stellar mass transients in a frequency range of tens to about fifteen hundred Hz. The above-mentioned gravitational-wave emissions fall well within the LIGO-Virgo bandwidth of sensitivity; for long GRBs powered by "naked inner engines" produced in the binary merger of a neutron star with another neutron star or companion black hole, the above-mentioned magnetic disk winds dissipate into long-duration radio-bursts, that may be observed by the novel Low Frequency Array (LOFAR).
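The 29% figure quoted above can be checked from the standard irreducible-mass relation for a Kerr black hole, E_rot/Mc² = 1 − √((1 + √(1 − a*²))/2); this relation is textbook general relativity rather than something stated in this article, and the spin values in the sketch below are illustrative.

```python
import math

def extractable_spin_fraction(a_star):
    """Fraction of a Kerr black hole's mass-energy stored as extractable rotational energy.

    a_star: dimensionless spin parameter J*c/(G*M^2), with 0 <= a_star <= 1.
    Uses E_rot/Mc^2 = 1 - M_irr/M, where M_irr/M = sqrt((1 + sqrt(1 - a_star^2)) / 2).
    """
    m_irr_over_m = math.sqrt((1 + math.sqrt(1 - a_star**2)) / 2)
    return 1 - m_irr_over_m

for a in (0.5, 0.9, 0.998, 1.0):
    print(f"a* = {a}: {100 * extractable_spin_fraction(a):.1f}% of Mc^2 is extractable")
```

A maximally spinning hole (a* = 1) yields 1 − 1/√2 ≈ 29.3%, the "substantial fraction" invoked above as the reservoir powering these transients.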
Physical sciences
Stellar astronomy
Astronomy
3126375
https://en.wikipedia.org/wiki/Axiidea
Axiidea
Axiidea is an infraorder of decapod crustaceans. They are colloquially known as mud shrimp, ghost shrimp, or burrowing shrimp; however, these decapods are only distantly related to true shrimp. Axiidea and Gebiidea are divergent infraorders of the former infraorder Thalassinidea. These infraorders have converged ecologically and morphologically as burrowing forms. Based on molecular evidence as of 2009, it is now widely believed that these two infraorders represent two distinct lineages separate from one another. Since this is a recent change, much of the literature and research surrounding these infraorders still refers to the Axiidea and Gebiidea in combination as "thalassinidean" for the sake of clarity and reference. This division based on molecular evidence is consistent with the groupings proposed by Robert Gurney in 1938 based on larval developmental stages. Axiidea are noted for the burrows with complex architecture that they make in the ocean floor sediment. These burrows can be classified based on their external characteristics in the sediment as well as the trophic group that the species falls into. The population density of most species of Axiidea tends to be high, so these organisms play an important role in the biogeochemical processes of the ocean floor sediments, and in the creation of habitats that favor various marine benthic communities. Classification The infraorder Axiidea belongs to the group Reptantia, which consists of the walking/crawling decapods (lobsters and crabs). The cladogram below shows Axiidea as more basal than Gebiidea within the larger order Decapoda, from analysis by Wolfe et al., 2019. The infraorder Axiidea comprises the following families: A few subfamilies of Axiidea have been proposed to become families, but have not for a variety of reasons. Examples of these subfamilies include the subfamily Gourretiidae, discovered by Sakai in 1999. Gourretiidae is a subfamily of the Ctenochelidae, and has been proposed to become a family instead, but phylogenetic analyses do not yet support that proposal. Similarly, molecular studies do not support the subfamily Eiconaxiidae being separate from family Axiidae. There is also no molecular evidence to separate the subfamily Calocardidae from Axiidae. The cladogram below shows Axiidea's internal family relationships from analysis by Wolfe et al., 2019. Description The length of an adult Axiidea can range from about in some species, to over in other species. The color of the Axiidea can range a variety of colors, including white, pink, red, orange, and dark brown. The rostrum can range from being nearly invisible, to fairly rigid and extending past the eyes. The carapace also ranges from fairly rigid to transparent, showing the organs underneath. Axiidea can range from having a well-calcified exoskeleton, to barely calcified elongated exoskeletons, which show an adaptation to burrowing in certain species. The sex of the Axiidea can be determined by the pleopod structure on the underbelly of the organism. This structure is underdeveloped or absent in the males. The sex ratio in most species of Axiidea tends to be 1:1, although in certain habitats one sex can slightly outnumber the other. Duration of egg incubation periods, and therefore also larval development, is dependent on the environmental factors surrounding the habitat of each individual species. Environmental factors tend to include developmental constraints, salinity of the marine environment, and temperature of the water. 
Furthermore, the duration of the zoeal, or larval, phase ranges quite a bit, and has been estimated to last as little as 2 to 3 days in some species of Axiidea, to 5 to 6 months in other species. The pre-zoeal hatching stage is marked by poor swimming ability and lack of setae, and the zoeal stages are planktonic. The megalopa stage represents the transition from plankton to their benthic habitats, and morphological development is marked by the growth of functional mouthparts resembling those of juveniles or adults. Burrows Burrows can be divided into two groups in terms of external characteristics, depending on the existence of a mound of sediment around the entrance of the burrow. These two groups can be further divided based on whether they contain plant material within the burrow. Burrows tend to be narrow, and can range from Y or U shaped in certain species, to intricate branching tunnels and deep wells in other species. Burrows can also differ within the classifications of external characteristics, based on the feeding mode for each organism. There are three general trophic groups that the families within the infraorder Axiidea can fall into. The first trophic group are the detritophages, or deposit feeders. The other two trophic groups are the drift catchers, which collect plant matter that drifts based on ocean currents, and the suspension feeders, which feed on plant matter that is suspended in the water. Drift catcher burrows tend to lack the external characteristic of the mound around the entrance of the burrow, and their burrows tend to be very deep and contain chambers that are filled with seagrasses and other sea debris. Suspension feeder burrows tend to be in the Y or U shapes, and also lack seagrasses and debris within them in contrast to the drift catchers; furthermore, the sediment within the lower parts of these burrows can also serve as food for the suspension feeders. The feeding mode affects the burrow, because Axiidea consume amounts of sediment, and the sediment that is rejected makes up parts of the burrow. The seagrasses consumed by the Axiidea are therefore present in the burrows and provide a way to classify the species. The burrows created by detritophage species of Axiidea are more likely to change over the life of the organism than the burrows of filter feeders because detritophage species of Axiidea can build new passages and chambers over the course of their feeding. Each burrow is typically inhabited by one organism, however, certain species of Axiidea live in pairs. Distribution and ecology Axiidea typically live in marine environments with soft-bottom sediments. Axiidea are found in most oceans and seas, except for high latitude polar seas. Distribution shows a clear gradient based on latitude, with low species numbers at higher latitudes and higher species numbers in low latitudes. Therefore, Axiidea are most diverse in temperate to tropical regions. Within the intertidal regions, Axiidea can be used as fishing bait or even for human consumption. Axiidea rarely range into the deep sea with depths more than , instead with 95% of species preferring the shallow water of intertidal or subtidal (less than ) areas.
Biology and health sciences
Decapoda
Animals
3128803
https://en.wikipedia.org/wiki/Drying
Drying
Drying is a mass transfer process consisting of the removal of water or another solvent by evaporation from a solid, semi-solid or liquid. This process is often used as a final production step before selling or packaging products. To be considered "dried", the final product must be solid, in the form of a continuous sheet (e.g., paper), long pieces (e.g., wood), particles (e.g., cereal grains or corn flakes) or powder (e.g., sand, salt, washing powder, milk powder). A source of heat and an agent to remove the vapor produced by the process are often involved. In bioproducts like food, grains, and pharmaceuticals like vaccines, the solvent to be removed is almost invariably water. Desiccation may be synonymous with drying or considered an extreme form of drying. In the most common case, a gas stream, e.g., air, applies the heat by convection and carries away the vapor as humidity. Other possibilities are vacuum drying, where heat is supplied by conduction or radiation (or microwaves), while the vapor thus produced is removed by the vacuum system. Another indirect technique is drum drying (used, for instance, for manufacturing potato flakes), where a heated surface is used to provide the energy, and aspirators draw the vapor outside the room. In contrast, the mechanical extraction of the solvent, e.g., water, by filtration or centrifugation, is not considered "drying" but rather "draining". Drying mechanism In some products having a relatively high initial moisture content, an initial linear reduction of the average product moisture content as a function of time may be observed for a limited time, often known as a "constant drying rate period". Usually, in this period, it is surface moisture outside individual particles that is being removed. The drying rate during this period is mostly dependent on the rate of heat transfer to the material being dried. Therefore, the maximum achievable drying rate is considered to be heat-transfer limited. If drying is continued, the slope of the curve, the drying rate, becomes less steep (falling rate period) and eventually tends to become nearly horizontal at very long times. The product moisture content is then constant at the "equilibrium moisture content", where it is, in practice, in equilibrium with the dehydrating medium. In the falling-rate period, water migration from the product interior to the surface is mostly by molecular diffusion, i.e. the water flux is proportional to the moisture content gradient. This means that water moves from zones with higher moisture content to zones with lower values, a phenomenon explained by the second law of thermodynamics. If water removal is considerable, the products usually undergo shrinkage and deformation, except in a well-designed freeze-drying process. The drying rate in the falling-rate period is controlled by the rate of removal of moisture or solvent from the interior of the solid being dried and is referred to as being "mass-transfer limited". This is widely noticed in hygroscopic products such as fruits and vegetables, where drying occurs in the falling rate period with the constant drying rate period said to be negligible. Methods of drying The following are some general methods of drying: Application of hot air (convective or direct drying). Air heating increases the drying force for heat transfer and accelerates drying. It also reduces air relative humidity, further increasing the driving force for drying. 
In the falling rate period, as moisture content falls, the solids heat up and the higher temperatures speed up diffusion of water from the interior of the solid to the surface. However, product quality considerations limit the applicable rise to air temperature. Excessively hot air can almost completely dehydrate the solid surface, so that its pores shrink and almost close, leading to crust formation or "case hardening", which is usually undesirable. For instance in wood (timber) drying, air is heated (which speeds up drying) though some steam is also added to it (which hinders drying rate to a certain extent) in order to avoid excessive surface dehydration and product deformation owing to high moisture gradients across timber thickness. Spray drying belongs in this category. Indirect or contact drying (heating through a hot wall), as drum drying, vacuum drying. Again, higher wall temperatures will speed up drying but this is limited by product degradation or case-hardening. Drum drying belongs in this category. Dielectric drying (radiofrequency or microwaves being absorbed inside the material) is the focus of intense research nowadays. It may be used to assist air drying or vacuum drying. Researchers have found that microwave finish drying speeds up the otherwise very low drying rate at the end of the classical drying methods. Vacuum microwave drying is used for the US Army's experimental Close Combat Assault Ration. Freeze drying or lyophilization is a drying method where the solvent is frozen prior to drying and is then sublimed, i.e., passed to the gas phase directly from the solid phase, below the melting point of the solvent. It is increasingly applied to dry foods, beyond its already classical pharmaceutical or medical applications. It keeps biological properties of proteins, and retains vitamins and bioactive compounds. Pressure can be reduced by a high vacuum pump (though freeze drying at atmospheric pressure is possible in dry air). If using a vacuum pump, the vapor produced by sublimation is removed from the system by converting it into ice in a condenser, operating at very low temperatures, outside the freeze drying chamber. Supercritical drying (superheated steam drying) involves steam drying of products containing water. This process is feasible because water in the product is boiled off, and joined with the drying medium, increasing its flow. It is usually employed in closed circuit and allows a proportion of latent heat to be recovered by recompression, a feature which is not possible with conventional air drying, for instance. The process has potential for use in foods if carried out at reduced pressure, to lower the boiling point. Natural air drying takes place when materials are dried with unheated forced air, taking advantage of its natural drying potential. The process is slow and weather-dependent, so a wise strategy "fan off-fan on" must be devised considering the following conditions: Air temperature, relative humidity and moisture content and temperature of the material being dried. Grains are increasingly dried with this technique, and the total time (including fan off and on periods) may last from one week to various months, if a winter rest can be tolerated in cold areas. Applications of drying Film formation In the coatings and adhesives industry, drying is used to cure solvent-based films. In some cases, highly structured films can result. 
For example, evaporation of solvent from a solution containing helical polymer results in a highly ordered array of squashed toroidal structures. Food Foods are dried to inhibit microbial development and quality decay. However, the extent of drying depends on product end-use. Cereals and oilseeds are dried after harvest to the moisture content that allows microbial stability during storage. Vegetables are blanched before drying to avoid rapid darkening, and drying is not only carried out to inhibit microbial growth, but also to avoid browning during storage. Concerning dried fruits, the reduction of moisture acts in combination with its acid and sugar contents to provide protection against microbial growth. Products such as milk powder must be dried to very low moisture contents in order to ensure flowability and avoid caking. This moisture is lower than that required to ensure inhibition to microbial development. Other products as crackers are dried beyond the microbial growth threshold to confer a crispy texture, which is liked by consumers. Non-food products Among non-food products, some of those that require considerable drying are wood (as part of timber processing), paper, flax, and washing powder. The first two, owing to their organic origins, may develop mold if insufficiently dried. Another benefit of drying is a reduction in volume and weight. Sludges and fecal materials from sanitation processes In the area of sanitation, drying of sewage sludge from sewage treatment plants, fecal sludge or feces collected in urine-diverting dry toilets (UDDT) is a common method to achieve pathogen kill, as pathogens can only tolerate a certain dryness level. In addition, drying is required as a process step if the excreta based materials are meant to be incinerated.
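To make the two drying regimes described in the "Drying mechanism" section above more concrete (a constant-rate period down to a critical moisture content, followed by a diffusion-limited falling-rate period that approaches the equilibrium moisture content), here is a minimal numerical sketch. It uses a generic two-regime, thin-layer model with arbitrary illustrative parameters; the names x0, xc, xe, rc and k and their values are our own assumptions, not a model taken from this article.

```python
import numpy as np

def drying_curve(x0, xc, xe, rc, k, t_end, dt=0.01):
    """Integrate a simple two-regime, thin-layer drying model.

    x0 : initial moisture content (kg water per kg dry solid)
    xc : critical moisture content separating the two periods
    xe : equilibrium moisture content
    rc : drying rate during the constant-rate period (per hour)
    k  : first-order coefficient of the falling-rate period (1/hour)
    """
    times = np.arange(0.0, t_end, dt)
    x = np.empty_like(times)
    x[0] = x0
    for i in range(1, len(times)):
        if x[i - 1] > xc:
            rate = rc                    # constant-rate period: surface moisture, heat-transfer limited
        else:
            rate = k * (x[i - 1] - xe)   # falling-rate period: internal diffusion, mass-transfer limited
        x[i] = max(x[i - 1] - rate * dt, xe)
    return times, x

if __name__ == "__main__":
    t, x = drying_curve(x0=0.8, xc=0.4, xe=0.05, rc=0.2, k=0.5, t_end=12.0)
    for i in range(0, len(t), 200):      # report every 2 simulated hours
        print(f"t = {t[i]:5.2f} h   moisture = {x[i]:.3f}")
```

Running it shows the moisture content falling linearly until the critical value is reached, then relaxing exponentially toward the equilibrium value, which is the qualitative shape of the drying curve described above.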
Physical sciences
Phase separations
Chemistry
3130999
https://en.wikipedia.org/wiki/Japanese%20rhinoceros%20beetle
Japanese rhinoceros beetle
The Japanese rhinoceros beetle (Allomyrina dichotoma), also known as the Japanese rhino beetle, the Japanese horned beetle, or by its Japanese name ( or ), is a species of rhinoceros beetle. They are commonly found in continental Asia in countries such as China, the Korean peninsula, Japan, and Taiwan. In these areas, this species of beetle is often found in broad-leaved forests with tropical or sub-tropical climates. This beetle is well known for the prominent cephalic horn found on males. Male Japanese rhinoceros beetles will use this horn to fight other males for territory and access to female mating partners. Upon contact, males will attempt to flip each other onto their backs or off of their feeding tree. In response to selective pressures, smaller male A. dichotoma have adapted a "sneak-like behavior". These smaller beetles will attempt to avoid physical confrontation with larger males and try to mate with females. List of subspecies Allomyrina dichotoma dichotoma: Mainland China, the Korean Peninsula Allomyrina dichotoma inchachina: Kume Island Allomyrina dichotoma septentrionalis: Honshu, Shikoku, Kyushu, Tsushima Island Allomyrina dichotoma takarai: Okinawa Allomyrina dichotoma tunobosonis: Taiwan Allomyrina dichotoma politus: Thailand Allomyrina dichotoma tsuchiyai: Kuchinoerabu Island Allomyrina dichotoma shizuae: Yakushima Island, Tanegashima Island Description These beetles have a dark brown and red appearance. However, their bodies can appear to be black without direct light. On average, males tend to measure between 40 and 80 mm, while females are typically smaller, growing between 35 and 60 mm long. Male A. dichotoma have a distinct sexually dimorphic horn protruding from the base of the head, which can reach a length of up to one-third of the body length. The length of the male A. dichotoma elytra has been recorded to be between 19 and 33 mm and the male horn can range between 7 and 32 mm. As the horn is a sexually dimorphic trait, only male Japanese rhinoceros beetles will grow one. This cephalic horn is typically somewhat thin and "pitchfork shaped". This appendage acts as a lever arm and is commonly used as a tool to fight other males for access to territory and females. Despite the large size of the cephalic horn, male Japanese rhinoceros beetles are still capable of flight; males and females have been reported to fly at similar average speeds. Males with proportionally large horns compared to their body size possess larger wings to compensate. Geographic range A. dichotoma can be found widely distributed throughout Asia, including China, Japan, Taiwan, Vietnam, Myanmar, Laos, India, Thailand, and the Korean Peninsula. Habitat This beetle species prefers to live in broad-leaved forests with tropical or sub-tropical climates. They can also often be found in mountainous environments. Across populations and regions, male beetles can vary greatly in size and horn performance, and it is suggested that differences are due to relative intensities of selection. Bark-carving behavior Adult Japanese rhinoceros beetles emerge from the soil during the summer months between June and August. They prefer to congregate on wounded tree trunks. Quercus acutissima, Quercus serrata, and Quercus mongolica grosseserrata are the most common trees they choose. A tree wound is caused by boring insects which break through the exterior of the tree and feed on the nutrient-rich sap on the interior. Adult A. 
dichotoma take advantage of the easily accessible food and consume the exposed tree sap. A subspecies of A. dichotoma known as Trypoxylus dichotomus septentrionalis exhibits bark-carving behavior. This variety of Japanese rhinoceros beetle does not require other insects to breach the tough arboreal exterior to access sap. Notably, these beetles conduct this behavior on Fraxinus griffithii trees, which have a thinner bark than the aforementioned species; this thinner exterior is considerably easier to cut through. These beetles cut into the tree by using their clypeus as a chisel. They hold on tightly to the tree and move their head back and forth to cut into the bark. For a short time, sap flows out of the newly made wound, and the Japanese rhinoceros beetle can feed. After a few minutes, the sap stops flowing, so the beetle begins to carve again. Life cycle Female A. dichotoma beetles oviposit by scattering their eggs in the humus portion of soil between July and September. The larvae feed on the humus, develop into the third instar phase and pupate during June-July of the following year. Adult beetles emerge from the soil within a few months after pupating. It takes A. dichotoma beetles 1 year to develop into adults after being laid as eggs. Larval behavior Chemical cues Larval aggregation in A. dichotoma is driven by chemical cues. The larvae in this species burrow into the dirt, so chemical and acoustic cues are more relevant than visual cues. Studies have shown that chemical cues are necessary for larval aggregation. Larvae with nonfunctioning chemosensory organs cannot aggregate, so chemical cues are likely an important signal guiding larval aggregation. Notably, first instar larvae do not aggregate with other larvae because they demonstrate cannibalistic tendencies at this stage. Second and third instar larvae are not cannibalistic, so they aggregate normally. These larvae also do not discriminate based on kinship when they group. They will group with conspecific larvae but do not demonstrate preferences based on shared genetic similarities. Mature larvae have been shown to construct pupal cells close to groups of larvae that are already living in the soil. These larvae recognize chemicals produced by other larvae and use these signals to determine where they will make their pupation site. One main benefit of grouping is an increase in diet quality for the larvae. Increased larval activities, like ingesting humus or burrowing, increase symbiotic microorganism activity in the nearby soil. This increase in symbiotic microorganism activity creates a more nutritious diet for the larvae to consume. Two potential costs of group living in these larvae include increased risk of fungal epidemic and increased risk of predation. Metarhizium fungal infection is lethal for A. dichotoma larvae, and grouping together leaves the larvae more susceptible to infection. Mogera imaizumii are able to quickly detect and consume these larvae, so living together leaves them more vulnerable to being consumed. Burrowing A. dichotoma larvae remain buried in the soil until they emerge to breed in the summer. Despite this behavior, the shape of this beetle is not well suited for burrowing. The transverse sectional diameter of the A. dichotoma last instar larva is around 20 mm, which creates resistance as the larvae move through the soil. To make up for this inefficiency, A. dichotoma developed a rotational burrowing technique. 
These larvae possess a C-shaped body, so they burrow into the ground by rotating and using their tails to kick soil upward. Kicking the soil up helps the larvae sink into the ground by fluidizing the soil, reducing resistance, and allowing them to burrow more effectively. Genetics The horn found in male A. dichotoma is a well-studied example of an exaggerated trait that evolved through intrasexual selection. This horn is a sexually dimorphic trait, which means there must be a sex-determination gene involved in its development and evolution. The doublesex/Mab-3 related (DMRT) transcription factor family is a family of genes that are heavily involved in sexually dimorphic traits. These genes are evolutionarily conserved across many taxa, including worms, mammals, and beetles. The specific variant of doublesex implicated in the Japanese rhinoceros beetle is known as Td-dsx, which stands for T. dichotomus dsx homologue. During development, alternative splicing of Td-dsx results in the formation of male and female isoforms of the gene. Td-dsx expression at the horn-forming area of the head increases during the prepupal stage, resulting in horn development in male A. dichotomus. Expression of male and female isoforms of Td-dsx increases during this time, but only the male isoform leads to horn growth. Knockdown of the male isoform of Td-dsx has been shown to result in no horn growth in male Japanese rhinoceros beetles. Physiology Japanese rhinoceros beetles have rounded pupal horns that have been shown to transform into an angular adult horn following adhesion and shrinkage-based stimuli. This occurs as a part of the beetles’ natural metamorphosis as they undergo a new exoskeletal morphogenesis. The mechanical mechanisms underlying such a physiological transformation are still relatively unstudied and unknown, but researchers have confirmed that physical stimulation causes pupal remodeling in Japanese rhinoceros beetles. It is entirely possible that a similar mechanism exists in closely related species, which would indicate a genetic underpinning to the morphological mechanism. It is thought that cell division and internal pressure are contributing factors to the alterations of epithelial layers after ecdysis. Mating Male-male interactions A. dichotoma are well known for their aggressive male behavior. Males will often use their large horns to fight other males over territory and access to female Japanese rhinoceros beetles. These beetles will often fight on the trunks of host trees to determine who will keep or gain the territory. The goal of these fights is to uproot the opposition and either throw the other male onto their back or off the tree outright. This aggressive behavior has been broken down into four stages. Stage 1 This stage is known as the encounter, and consists of two males seeing each other but not yet making physical contact. Stage 2 This stage is known as shoving, and occurs when the males make physical contact and begin to shove each other with their horns. This stage is the most important because the beetles will analyze each other. At this time, each beetle will figure out the size of the opponent and decide if they want to fight or flee. Stage 3 This stage is known as prying. At this point, the males will use their horns to try and flip the other onto their backs. The battling beetles will proceed to stage 4a or 4b depending on the size differences. Stage 4 Stage 4 consists of 4a and 4b. 
During stage 4a, known as chasing, if the difference in horn length and/or body size is considerable, the larger male will chase after the smaller one. The smaller male will retreat. During stage 4b, known as flipping, if the difference in horn length and/or body size is small or negligible, the beetles will fight until one is flipped. This process takes considerably more time and energy than stage 4a. Sneak-like behavior Smaller A. dichotoma can make use of alternative reproductive behaviors to circumvent horn-to-horn combat. One of these alternative behaviors has been described as "sneak-like behavior", of which there are three variations. The first sneak-like behavior occurs when a male approaches another male from behind while the latter male has already assumed a mounted position on a female. The former beetle will try to use its horn to separate the latter beetle from the female. The second situation occurs when a male approaches another male while the latter male is positioned face-to-face with a female. In this case, the former male will attempt to mount the female. The third type of sneak-like behavior occurs when two males are fighting over a female and a third male attempts to mount the female. Female-female interactions Female A. dichotoma do not possess horns nearly as long as those of their male counterparts, but they still possess a noticeable horn. Although females of this species do not participate in the same shoving and throwing behaviors that the males do, they still exhibit some intrasexual aggressive behaviors. Female A. dichotoma have been observed using their smaller cephalic horns to head-butt other females in the area. They do so to fight over territory and access to food. Larger females have an advantage when it comes to this fighting behavior. Similarly to male A. dichotoma, smaller females have developed a sneaky, non-confrontational strategy to gain access to resources and reproduce. Once defeated, smaller females will mount a larger female. The mounted female and mounting female rarely fight, and the mounting female will be able to access resources, including food and males. Sexual selection Sexual selection on Japanese rhinoceros beetles has been extensively studied in order to elucidate the mechanisms by which weapons of sexual selection diverge and evolve more rapidly than other body parts. From an ecological perspective and a reproductive perspective, different populations of Japanese rhinoceros beetles differ greatly in relative horn size. It is known that rhinoceros beetles with larger horns win fights against other male competitors and have better reproductive success. Thus, research on local habitat conditions and breeding ecology suggests that quantifying sexual selection strength across populations could be a key step toward better understanding mating dynamics and sexual selection patterns in diverse Japanese rhinoceros beetle populations. Sexual dimorphisms With regards to sexual dimorphisms, Japanese rhinoceros beetles suffer male-biased predation by both avian predators and mammalian predators. It was discovered that sexually-selected traits impose an increased risk of predation on male Japanese rhinoceros beetles, and that larger individuals of both sexes are also at greater risk. Researchers identified such a mechanism with raccoon dogs and jungle crows as predators. Interestingly, predation might act as a stabilizing selection pressure acting against the exaggeration and excessive evolution/propagation of male sexual traits. 
The prominent horn on the males makes this species a popular model organism for the study of sexually dimorphic traits. Intra-species competition Body size and horn length are both important factors in determining the winner when male A. dichotoma fight. The male horn size is the most important factor used to predict the winner of these fights, but a larger horn is not always best. A male A. dichotoma with a large body benefits from a larger horn so it can fight other males for access to females. The main reproductive strategy of these larger beetles is combat. The same is not true for smaller Japanese rhinoceros beetles, which prefer less confrontational strategies like sneaking. In this case, the smaller beetle will prefer a smaller horn, so it is more mobile and better able to infiltrate the larger male’s territory while it is preoccupied. Therefore, horn length can be used as a metric for measuring the fighting ability of a male Japanese rhinoceros beetle, but it is not as useful as a measure of reproductive ability. Studies have shown that there is wide variation in male horn lengths, which indicates that a single horn length is not selected for over others. A large horn is useful for fighting but acts as a hindrance when the beetle digs into nearby litter to hide during the day. The large horn has been shown to reduce the efficiency of this digging behavior, which leaves the beetle vulnerable to predators. Larger horns also impair flight, making it more difficult for Japanese rhinoceros beetles to move closer to potential mates. Other studies have shown that larger horns may be more fragile than smaller horns. Severe injuries sustained by beetles with larger horns resulted in some of these beetles losing their horns, while similar injuries in beetles with smaller horns did not. Feeding resources Larval nutrition has a strong effect on overall growth in A. dichotoma. Poor nutritional environments in the larval stage lead to decreased growth rate, which can prolong the larval period. A. dichotoma are univoltine and only produce one brood during the three summer months. If the larval period is extended for too long, the beetle can miss its breeding window, which would severely harm its individual fitness. Low nutrition levels in the larval stage are also correlated with decreased adult size of the eyes, wings, and elytra in male and female Japanese rhinoceros beetles. Genitalia, however, are not affected by nutrition levels. Males produced similarly sized genitalia regardless of nutrition levels in the larval stage. Mating and fertilization were similarly unaffected. Contrary to genitalia development in males, the male cephalic and thoracic horns are incredibly sensitive to larval nutrition levels. Low nutrition levels are associated with a 50% decrease in thoracic horn length and a 60% decrease in cephalic horn length. Interactions with humans Pets Rhinoceros beetles are a popular pet in Japan; in the past, they would be gathered from the wild, though in recent years beetles have also been bred in captivity. Their small size and easy upkeep compared to a mammal pet have made them popular. Research applications A. dichotoma is a useful model organism for scientific research in insects. It is easy and convenient to set up a breeding system for these beetles in the laboratory. Breeding the beetles and culturing the progeny is a well-documented process. 
The Japanese rhinoceros beetle can also be bred using a soil-free apparatus which allows for non-invasive and uninterrupted monitoring of growth and development. These larvae are also easy to preserve because they can be kept at low temperatures to prevent pupation from occurring. This added element of control makes these beetles convenient to use for research purposes throughout the year. RNA interference protocols have also been developed for A. dichotomus, so it is easy to conduct experiments on genes of interest. This species of beetle is also very large, so large amounts of DNA and RNA can be extracted from a single beetle for use in sequencing analysis. A. dichotomus has become a particularly popular model organism because of its horn. The horn developmental pathways and mechanism have been thoroughly studied. A protein with antibacterial properties has been discovered in A. dichotomus, alongside a molecule with potential anti-prion activity. A. dichotoma has proven to be a useful model organism for research in fields including drug discovery, ethology, behavioral ecology, and evolutionary developmental biology. Use in medicine The use of Japanese rhinoceros beetles in traditional Chinese medicine inspired research studies to corroborate its use. To the surprise of many researchers, compounds found in the extracts of A. dichotoma larvae have proven to exhibit anti-obesity effects as well as antibiotic properties. A. dichotoma has been a popular ingredient in Chinese traditional medicine for almost 2000 years. Research has corroborated that A. dichotoma extracts have potential health benefits. A study has shown that A. dichotomus larvae extract can significantly decrease the expression of genes associated with fat creation. The study implies that Japanese rhinoceros beetle larvae could function as a potential food source to counteract obesity. Another study discovered two proteins in A. dichotomus larva which exhibited antibacterial activity. These proteins are named A. d. coleoptericin A and B, with A. d being an abbreviation for A. dichotomus. A. d. coleoptericin A and B demonstrate significant activity against methicillin resistant Staphylococcus aureus (MRSA), a notoriously difficult strain of bacteria to treat with antibiotics. A. dichotoma larvae are known to consume rotting wood and fruits, so it is hypothesized that these larvae are capable of producing phytochemicals. Phytochemicals are natural bioactive compounds that provide resistance to bacterial and viral infections. Researchers were interested in investigating the potential health benefits associated with these larvae and found that A. dichotoma extract contains moderate antioxidant properties. Compounds found in the larvae extract are capable of scavenging for free oxygen radicals and prevent harmful oxidation in the body. The demand for natural substances that can reduce biological toxicity and food deterioration has risen due to synthetic alternatives causing harm to humans. The Japanese rhinoceros beetle larvae extract has potential to serve as an aforementioned natural alternative. Anti-prion activity To this day, little is known about prion diseases. There is no cure and the mechanism by which normal proteins are converted to abnormal prion remains unknown. Substances found in the hemolymph of A. dichotoma have been shown to exhibit anti-prion activity once they are browned or heated for an extended period of time. 
Administration of heated hemolymph has been shown to reduce abnormal prion protein levels in prion-infected cells. This compound has yet to be identified but is hypothesized to be a Maillard reaction product. Previous studies have shown that some Maillard reaction products are involved in the post-translational modification of prions. This compound in the hemolymph of A. dichotoma demonstrates strain-dependent anti-prion activity, as it only reduces prion formation in RML prion-infected cells. Akihabara culture Insects are a prominent part of Japanese Akihabara culture. Japanese rhinoceros beetles have been referenced in popular role-playing games like Dragon Quest, which includes three monsters that resemble A. dichotoma. The beetle has also appeared in animated series in the form of vehicles. In Time Bokan, an animated children’s TV show from the late 1970s, the main characters possessed a time machine which resembled a rhinoceros beetle. This vehicle possessed the iconic cephalic horn found on male A. dichotoma and had wheels instead of legs. In Mega Man X3, a maverick boss by the name of Gravity Beetle that the player fights against was modeled after the rhinoceros beetle. The beetle also appeared in Mushihimesama, a shooting game where enemies are oversized insects. The main character befriends a large rhinoceros beetle and uses it to defeat enemy insects. Another game that featured the beetle is called Air. Air is a popular gal-gê game which features a female rhinoceros beetle. Miss Misuzu Kamio, the game’s main character, befriends this beetle at the beginning of the game and brings it along with her throughout the rest of the story. A. dichotoma is an iconic insect that can be found throughout Japanese culture. As a food resource A. dichotoma larvae are edible and are reported to have high nutritional value. The larvae of this species are commonly eaten throughout East Asia. Although there are benefits associated with the consumption of the larvae, many people are deterred from eating them because of their distinctly unpalatable flavor. The primary volatiles that are associated with these flavors are indoles. Several studies have investigated methods to improve the flavor profile of A. dichotoma larvae. One study found that the use of yeast fermentation to process larva powder could reduce the effect of the unpalatable indole profile and also increase the effects of volatiles that are traditionally associated with fruit-like flavors. Another study found that lactic acid fermentation via bacteria could also improve the flavor profiles of the larvae.
Biology and health sciences
Beetles (Coleoptera)
Animals
11084869
https://en.wikipedia.org/wiki/Gravitational-wave%20observatory
Gravitational-wave observatory
A gravitational-wave detector (used in a gravitational-wave observatory) is any device designed to measure tiny distortions of spacetime called gravitational waves. Since the 1960s, various kinds of gravitational-wave detectors have been built and constantly improved. The present-day generation of laser interferometers has reached the necessary sensitivity to detect gravitational waves from astronomical sources, thus forming the primary tool of gravitational-wave astronomy. The first direct observation of gravitational waves was made in September 2015 by the Advanced LIGO observatories, detecting gravitational waves with wavelengths of a few thousand kilometers from a merging binary of stellar black holes. In June 2023, four pulsar timing array collaborations presented the first strong evidence for a gravitational wave background of wavelengths spanning light years, most likely from many binaries of supermassive black holes. Challenge The direct detection of gravitational waves is complicated by the extraordinarily small effect the waves produce on a detector. The amplitude of a spherical wave falls off as the inverse of the distance from the source. Thus, even waves from extreme systems such as merging binary black holes die out to a very small amplitude by the time they reach the Earth. Astrophysicists predicted that some gravitational waves passing the Earth might produce differential motion on the order 10−18 m in a LIGO-size instrument. Resonant mass antennas A simple device to detect the expected wave motion is called a resonant mass antenna – a large, solid body of metal isolated from outside vibrations. This type of instrument was the first type of gravitational-wave detector. Strains in space due to an incident gravitational wave excite the body's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. However, up to 2018, no gravitational wave observation that would have been widely accepted by the research community has been made on any type of resonant mass antenna, despite certain claims of observation by researchers operating the antennas. There are three types of resonant mass antenna that have been built: room-temperature bar antennas, cryogenically cooled bar antennas and cryogenically cooled spherical antennas. The earliest type was the room-temperature bar-shaped antenna called a Weber bar; these were dominant in 1960s and 1970s and many were built around the world. It was claimed by Weber and some others in the late 1960s and early 1970s that these devices detected gravitational waves; however, other experimenters failed to detect gravitational waves using them, and a consensus developed that Weber bars would not be a practical means to detect gravitational waves. The second generation of resonant mass antennas, developed in the 1980s and 1990s, were the cryogenic bar antennas which are also sometimes called Weber bars. In the 1990s there were five major cryogenic bar antennas: AURIGA (Padua, Italy), NAUTILUS (Rome, Italy), EXPLORER (CERN, Switzerland), ALLEGRO (Louisiana, US), and NIOBE (Perth, Australia). In 1997, these five antennas run by four research groups formed the International Gravitational Event Collaboration (IGEC) for collaboration. While there were several cases of unexplained deviations from the background signal, there were no confirmed instances of the observation of gravitational waves with these detectors. 
In the 1980s, there was also a cryogenic bar antenna called ALTAIR, which, along with a room-temperature bar antenna called GEOGRAV, was built in Italy as a prototype for later bar antennas. Operators of the GEOGRAV-detector claimed to have observed gravitational waves coming from the supernova SN1987A (along with another room-temperature bar antenna), but these claims were not adopted by the wider community. These modern cryogenic forms of the Weber bar operated with superconducting quantum interference devices to detect vibration (ALLEGRO, for example). Some of them continued in operation after the interferometric antennas started to reach astrophysical sensitivity, such as AURIGA, an ultracryogenic resonant cylindrical bar gravitational wave detector based at INFN in Italy. The AURIGA and LIGO teams collaborated in joint observations. In the 2000s, the third generation of resonant mass antennas, the spherical cryogenic antennas, emerged. Four spherical antennas were proposed around year 2000 and two of them were built as downsized versions, the others were cancelled. The proposed antennas were GRAIL (Netherlands, downsized to MiniGRAIL), TIGA (US, small prototypes made), SFERA (Italy), and Graviton (Brasil, downsized to Mario Schenberg). The two downsized antennas, MiniGRAIL and the Mario Schenberg, are similar in design and are operated as a collaborative effort. MiniGRAIL is based at Leiden University, and consists of an exactingly machined sphere cryogenically cooled to . The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. It is the current consensus that current cryogenic resonant mass detectors are not sensitive enough to detect anything but extremely powerful (and thus very rare) gravitational waves. As of 2020, no detection of gravitational waves by cryogenic resonant antennas has occurred. Laser interferometers A more sensitive detector uses laser interferometry to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive ground-based laser interferometer is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO is famous as the site of the first confirmed detections of gravitational waves in 2015. LIGO has two detectors: one in Livingston, Louisiana; the other at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 4 km in length. These are at 90 degree angles to each other, with the light passing through diameter vacuum tubes running the entire . A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which a Michelson interferometer is most sensitive. Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10−18 meters. LIGO should be able to detect gravitational waves as small as . 
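As a back-of-the-envelope check on the figures above, the dimensionless strain a detector must resolve is roughly the arm-length change divided by the arm length, h ≈ ΔL/L. The sketch below plugs in the illustrative values from the text (4 km arms and an end-of-arm displacement of about 10⁻¹⁸ m); it is an order-of-magnitude estimate, not a description of LIGO's actual calibration procedure.

```python
def strain(delta_l_m, arm_length_m):
    """Dimensionless gravitational-wave strain, h ~ delta_L / L."""
    return delta_l_m / arm_length_m

if __name__ == "__main__":
    arm_length = 4.0e3   # LIGO arm length in metres, from the text
    delta_l = 1.0e-18    # illustrative end-of-arm displacement in metres, from the text
    print(f"h ~ {strain(delta_l, arm_length):.1e}")   # roughly 2.5e-22
```

The result, a strain of a few parts in 10²², illustrates why the seismic and thermal isolation of the test masses discussed below is so demanding.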
Upgrades to LIGO and other detectors such as Virgo, GEO600, and TAMA 300 should increase the sensitivity further, and the next generation of instruments (Advanced LIGO Plus and Advanced Virgo Plus) will be more sensitive still. Another highly sensitive interferometer (KAGRA) began operations in 2020. A key point is that a ten-times increase in sensitivity (radius of "reach") increases the volume of space accessible to the instrument by one thousand. This increases the rate at which detectable signals should be seen from one per tens of years of observation, to tens per year. Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly. One analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these "stationary" (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other "non-stationary" noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All these must be taken into account and excluded by analysis before a detection may be considered a true gravitational-wave event. Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to shot noise, as well as artifacts caused by cosmic rays and solar wind. Einstein@Home In some sense, the easiest signals to detect should be constant sources. Supernovae and neutron star or black hole mergers should have larger amplitudes and be more interesting, but the waves generated will be more complicated. The waves given off by a spinning, bumpy neutron star would be "monochromatic" – like a pure tone in acoustics. It would not change very much in amplitude or frequency. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise. Pulsar timing arrays A different approach to detecting gravitational waves is used by pulsar timing arrays, such as the European Pulsar Timing Array, the North American Nanohertz Observatory for Gravitational Waves, and the Parkes Pulsar Timing Array. These projects propose to detect gravitational waves by looking at the effect these waves have on the incoming signals from an array of 20–50 well-known millisecond pulsars. 
As a gravitational wave passing through the Earth contracts space in one direction and expands space in another, the times of arrival of pulsar signals from those directions are shifted correspondingly. By studying a fixed set of pulsars across the sky, these arrays should be able to detect gravitational waves in the nanohertz range. Such signals are expected to be emitted by pairs of merging supermassive black holes. In June 2023, four pulsar timing array collaborations, the three mentioned above and the Chinese Pulsar Timing Array, presented independent but similar evidence for a stochastic background of nanohertz gravitational waves. The source of this background could not yet be identified. Cosmic microwave background The cosmic microwave background, radiation left over from when the Universe cooled sufficiently for the first atoms to form, can contain the imprint of gravitational waves from the very early Universe. The microwave radiation is polarized. The pattern of polarization can be split into two classes called E-modes and B-modes. This is in analogy to electrostatics where the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes can be created by a variety of processes, but the B-modes can only be produced by gravitational lensing, gravitational waves, or scattering from dust. On 17 March 2014, astronomers at the Harvard-Smithsonian Center for Astrophysics announced the apparent detection of the imprint of gravitational waves in the cosmic microwave background, which, if confirmed, would provide strong evidence for inflation and the Big Bang. However, on 19 June 2014, lowered confidence in confirming the findings was reported, and on 19 September 2014 confidence was lowered even further. Finally, on 30 January 2015, the European Space Agency announced that the signal could be entirely attributed to dust in the Milky Way. Novel detector designs There are currently two detectors focusing on detections at the higher end of the gravitational-wave spectrum (10⁻⁷ to 10⁵ Hz): one at the University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Two have been fabricated and they are currently expected to be sensitive to periodic spacetime strains of , given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of , with an expectation to reach a sensitivity of . The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters of ~10¹⁰ Hz (10 GHz) and h ~ 10⁻³⁰ to 10⁻³¹. Levitated Sensor Detector is a proposed detector for gravitational waves with a frequency between 10 kHz and 300 kHz, potentially coming from primordial black holes. It will use optically-levitated dielectric particles in an optical cavity. A torsion-bar antenna (TOBA) is a proposed design composed of two, long, thin bars, suspended as torsion pendula in a cross-like fashion, in which the differential angle is sensitive to tidal gravitational wave forces. 
Detectors based on matter waves (atom interferometers) have also been proposed and are being developed. There have been proposals since the beginning of the 2000s. Atom interferometry is proposed to extend the detection bandwidth in the infrasound band (10 mHz – 10 Hz), where current ground based detectors are limited by low frequency gravity noise. A demonstrator project called Matter wave laser based Interferometer Gravitation Antenna (MIGA) started construction in 2018 in the underground environment of LSBB (Rustrel, France). List of gravitational wave detectors Resonant mass detectors First generation Weber bar (1960s–80s) Second generation EXPLORER (CERN, 1985–) GEOGRAV (Rome, 1980s–) ALTAIR (Frascati, 1990–) ALLEGRO (Baton Rouge, 1991–2008) NIOBE (Perth, 1993–) NAUTILUS (Rome, 1995–) AURIGA (Padova, 1997–) Third generation Mario Schenberg (São Paulo, 2003–) MiniGrail (Leiden, 2003–) Interferometers Interferometric gravitational-wave detectors are often grouped into generations based on the technology used. The interferometric detectors deployed in the 1990s and 2000s were proving grounds for many of the foundational technologies necessary for initial detection and are commonly referred to as the first generation. The second generation of detectors operating in the 2010s, mostly at the same facilities like LIGO and Virgo, improved on these designs with sophisticated techniques such as cryogenic mirrors and the injection of squeezed vacuum. This led to the first unambiguous detection of a gravitational wave by Advanced LIGO in 2015. The third generation of detectors are currently in the planning phase, and seek to improve over the second generation by achieving greater detection sensitivity and a larger range of accessible frequencies. All these experiments involve many technologies under continuous development over multiple decades, so the categorization by generation is necessarily only rough. First generation (1995) TAMA 300 (1995) GEO600 (2002) LIGO (2006) CLIO (2007) Virgo interferometer Second generation (2010) GEO High Frequency (2015) Advanced LIGO (2016) Advanced Virgo (2019) KAGRA (LCGT) (2023) IndIGO (LIGO-India) Third generation (2030s) Einstein Telescope (2030s) Cosmic Explorer Space based (2034) Laser Interferometer Space Antenna (LISA, its technology demonstrator LISA Pathfinder was launched December 2015) (2030s?) Taiji (gravitational wave observatory) (2035) TianQin (2027) Deci-hertz Interferometer Gravitational wave Observatory (DECIGO) Pulsar timing (2005) Parkes Pulsar Timing Array (2009) European Pulsar Timing Array (2010) North American Nanohertz Observatory for Gravitational Waves (NANOGrav) (2016) International Pulsar Timing Array, a joint project combining the Parkes, European and NANOGrav arrays above (2016) Indian Pulsar Timing Array Experiment (InPTA) (?) Chinese Pulsar Timing Array (CPTA) (?) MeerKAT Pulsar Timing Array (MeerTime)
Technology
Telescope
null
11084989
https://en.wikipedia.org/wiki/Gravitational-wave%20astronomy
Gravitational-wave astronomy
Gravitational-wave astronomy is a subfield of astronomy concerned with the detection and study of gravitational waves emitted by astrophysical sources. Gravitational waves are minute distortions or ripples in spacetime caused by the acceleration of massive objects. They are produced by cataclysmic events such as the merger of binary black holes, the coalescence of binary neutron stars, supernova explosions and processes including those of the early universe shortly after the Big Bang. Studying them offers a new way to observe the universe, providing valuable insights into the behavior of matter under extreme conditions. Similar to electromagnetic radiation (such as light waves, radio waves, infrared radiation and X-rays), which involves transport of energy via propagation of electromagnetic field fluctuations, gravitational radiation involves fluctuations of the relatively weaker gravitational field. The existence of gravitational waves was first suggested by Oliver Heaviside in 1893 and then later conjectured by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves before they were predicted by Albert Einstein in 1916 as a corollary to his theory of general relativity. In 1978, Russell Alan Hulse and Joseph Hooton Taylor Jr. provided the first experimental evidence for the existence of gravitational waves by observing two neutron stars orbiting each other and won the 1993 Nobel Prize in physics for their work. In 2015, nearly a century after Einstein's forecast, the first direct observation of gravitational waves as a signal from the merger of two black holes confirmed the existence of these elusive phenomena and opened a new era in astronomy. Subsequent detections have included binary black hole mergers, neutron star collisions, and other violent cosmic events. Gravitational waves are now detected using laser interferometry, which measures tiny changes in the length of two perpendicular arms caused by passing waves. Observatories like LIGO (Laser Interferometer Gravitational-wave Observatory), Virgo and KAGRA (Kamioka Gravitational Wave Detector) use this technology to capture the faint signals from distant cosmic events. LIGO co-founders Barry C. Barish, Kip S. Thorne, and Rainer Weiss were awarded the 2017 Nobel Prize in Physics for their ground-breaking contributions in gravitational wave astronomy. When distant astronomical objects are observed using electromagnetic waves, different phenomena like scattering, absorption, reflection, refraction, etc. cause information loss. There remain various regions in space only partially penetrable by photons, such as the insides of nebulae, the dense dust clouds at the galactic core, the regions near black holes, etc. Gravitational-wave astronomy has the potential to be used in parallel with electromagnetic astronomy to study the universe at a better resolution. In an approach known as multi-messenger astronomy, gravitational wave data is combined with data from other wavelengths to get a more complete picture of astrophysical phenomena. Gravitational wave astronomy helps understand the early universe, test theories of gravity, and reveal the distribution of dark matter and dark energy. Particularly, it can help determine the Hubble constant, which characterizes the rate of expansion of the universe. All of these open doors to physics beyond the Standard Model (BSM). Challenges that remain in the field include noise interference, the lack of ultra-sensitive instruments, and the detection of low-frequency waves. 
Ground-based detectors face problems with seismic vibrations produced by environmental disturbances and the limitation of the arm length of detectors due to the curvature of the Earth’s surface. In the future, the field of gravitational wave astronomy will try to develop upgraded detectors and next-generation observatories, along with possible space-based detectors such as LISA (Laser Interferometer Space Antenna). LISA will be able to listen to distant sources such as supermassive black holes in galactic cores and primordial black holes, as well as low-frequency sources such as binary white dwarf mergers and signals from the early universe. Introduction Gravitational waves are waves of the intensity of gravity generated by the accelerated masses of an orbital binary system that propagate as waves outward from their source at the speed of light. They were first proposed by Oliver Heaviside in 1893 and then later by Henri Poincaré in 1905 as the gravitational equivalent of electromagnetic waves. Gravitational waves were later predicted in 1916 by Albert Einstein on the basis of his general theory of relativity as ripples in spacetime. For a time, Einstein himself later doubted their physical reality. Gravitational waves transport energy as gravitational radiation, a form of radiant energy similar to electromagnetic radiation. Newton's law of universal gravitation, part of classical mechanics, does not provide for their existence, since that law is predicated on the assumption that physical interactions propagate instantaneously (at infinite speed) – showing one of the ways the methods of Newtonian physics are unable to explain phenomena associated with relativity. The first indirect evidence for the existence of gravitational waves came in 1974 from the observed orbital decay of the Hulse–Taylor binary pulsar, which matched the decay predicted by general relativity as energy is lost to gravitational radiation. In 1993, Russell A. Hulse and Joseph Hooton Taylor Jr. received the Nobel Prize in Physics for this discovery. Direct observation of gravitational waves was not made until 2015, when a signal generated by the merger of two black holes was received by the LIGO gravitational wave detectors in Livingston, Louisiana, and in Hanford, Washington. The 2017 Nobel Prize in Physics was subsequently awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the direct detection of gravitational waves. In gravitational-wave astronomy, observations of gravitational waves are used to infer data about the sources of gravitational waves. Sources that can be studied this way include binary star systems composed of white dwarfs, neutron stars, and black holes; events such as supernovae; and the formation of the early universe shortly after the Big Bang. Instruments and challenges Collaboration between detectors aids in collecting unique and valuable information, owing to the different specifications and sensitivities of each. There are several ground-based laser interferometers which span several kilometers, including: the two Laser Interferometer Gravitational-Wave Observatory (LIGO) detectors in Hanford, Washington, and Livingston, Louisiana, USA; Virgo, at the European Gravitational Observatory in Italy; GEO600 in Germany; and the Kamioka Gravitational Wave Detector (KAGRA) in Japan.
While LIGO, Virgo, and KAGRA have made joint observations to date, GEO600 is currently utilized for trial and test runs, due to the lower sensitivity of its instruments, and has not participated in joint runs with the others recently. High frequency In 2015, the LIGO project was the first to directly observe gravitational waves using laser interferometers. The LIGO detectors observed gravitational waves from the merger of two stellar-mass black holes, matching predictions of general relativity. These observations demonstrated the existence of binary stellar-mass black hole systems, and were the first direct detection of gravitational waves and the first observation of a binary black hole merger. This finding has been characterized as revolutionary to science, because it verified our ability to use gravitational-wave astronomy to advance the search for and exploration of dark matter and the Big Bang. Low frequency An alternative means of observation is using pulsar timing arrays (PTAs). There are three consortia, the European Pulsar Timing Array (EPTA), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav), and the Parkes Pulsar Timing Array (PPTA), which co-operate as the International Pulsar Timing Array. These use existing radio telescopes, but since they are sensitive to frequencies in the nanohertz range, many years of observation are needed to detect a signal, and detector sensitivity improves only gradually. Current bounds are approaching those expected for astrophysical sources. In June 2023, four PTA collaborations, the three mentioned above and the Chinese Pulsar Timing Array, delivered independent but similar evidence for a stochastic background of nanohertz gravitational waves. Each provided an independent first measurement of the theoretical Hellings–Downs curve, i.e., the quadrupolar correlation between two pulsars as a function of their angular separation in the sky, which is a telltale sign of the gravitational wave origin of the observed background. The sources of this background remain to be identified, although binaries of supermassive black holes are the most likely candidates. Intermediate frequencies Further in the future, there is the possibility of space-borne detectors. The European Space Agency has selected a gravitational-wave mission for its L3 mission, due to launch in 2034; the current concept is the evolved Laser Interferometer Space Antenna (eLISA). Also in development is the Japanese Deci-hertz Interferometer Gravitational wave Observatory (DECIGO). Scientific value Astronomy has traditionally relied on electromagnetic radiation. Originating with the visible band, as technology advanced, it became possible to observe other parts of the electromagnetic spectrum, from radio to gamma rays. Each new frequency band gave a new perspective on the Universe and heralded new discoveries. During the 20th century, indirect and later direct measurements of high-energy, massive particles provided an additional window into the cosmos. Late in the 20th century, the detection of solar neutrinos founded the field of neutrino astronomy, giving an insight into previously inaccessible phenomena, such as the inner workings of the Sun. The observation of gravitational waves provides a further means of making astrophysical observations. Russell Hulse and Joseph Taylor were awarded the 1993 Nobel Prize in Physics for showing that the orbital decay of a pair of neutron stars, one of them a pulsar, fits general relativity's predictions of gravitational radiation.
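The Hellings–Downs correlation mentioned in the pulsar-timing discussion above has a simple closed form. Below is a minimal sketch (Python with NumPy; the function name is illustrative) of the expected correlation between timing residuals of two distinct pulsars as a function of their angular separation, assuming an isotropic gravitational-wave background.

```python
import numpy as np

def hellings_downs(zeta_rad):
    """Expected correlation of timing residuals for two distinct pulsars
    separated by angle zeta (radians), for an isotropic GW background."""
    x = (1.0 - np.cos(zeta_rad)) / 2.0
    if x <= 0.0:
        return 0.5          # limiting value as the separation goes to zero
    return 0.5 - x / 4.0 + 1.5 * x * np.log(x)

for zeta in np.linspace(0.0, np.pi, 7):
    print(f"{np.degrees(zeta):6.1f} deg -> {hellings_downs(zeta):+.3f}")
# The curve starts near +0.5 at small separations, dips slightly negative
# around 80-90 degrees, and rises back to +0.25 at 180 degrees.
```

Seeing this characteristic angular pattern in the cross-correlations, rather than a uniform correlation, is what distinguishes a gravitational-wave background from detector or clock noise.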
Subsequently, many other binary pulsars (including one double pulsar system) have been observed, all fitting gravitational-wave predictions. In 2017, the Nobel Prize in Physics was awarded to Rainer Weiss, Kip Thorne and Barry Barish for their role in the first detection of gravitational waves. Gravitational waves provide complementary information to that provided by other means. By combining observations of a single event made using different means, it is possible to gain a more complete understanding of the source's properties. This is known as multi-messenger astronomy. Gravitational waves can also be used to observe systems that are invisible (or almost impossible to detect) by any other means. For example, they provide a unique method of measuring the properties of black holes. Gravitational waves can be emitted by many systems, but, to produce detectable signals, the source must consist of extremely massive objects moving at a significant fraction of the speed of light. The main source is a binary of two compact objects. Example systems include: Compact binaries made up of two closely orbiting stellar-mass objects, such as white dwarfs, neutron stars or black holes. Wider binaries, which have lower orbital frequencies, are a source for detectors like LISA. Closer binaries produce a signal for ground-based detectors like LIGO. Ground-based detectors could potentially detect binaries containing an intermediate-mass black hole of several hundred solar masses. Supermassive black hole binaries, consisting of two black holes with masses of 10^5–10^9 solar masses. Supermassive black holes are found at the centre of galaxies. When galaxies merge, it is expected that their central supermassive black holes merge too. These are potentially the loudest gravitational-wave signals. The most massive binaries are a source for PTAs. Less massive binaries (about a million solar masses) are a source for space-borne detectors like LISA. Extreme-mass-ratio systems of a stellar-mass compact object orbiting a supermassive black hole. These are sources for detectors like LISA. Systems with highly eccentric orbits produce a burst of gravitational radiation as they pass through the point of closest approach; systems with near-circular orbits, which are expected towards the end of the inspiral, emit continuously within LISA's frequency band. Extreme-mass-ratio inspirals can be observed over many orbits. This makes them excellent probes of the background spacetime geometry, allowing for precision tests of general relativity. In addition to binaries, there are other potential sources: Supernovae generate high-frequency bursts of gravitational waves that could be detected with LIGO or Virgo. Rotating neutron stars are a source of continuous high-frequency waves if they possess axial asymmetry. Early universe processes, such as inflation or a phase transition. Cosmic strings could also emit gravitational radiation if they do exist. Discovery of these gravitational waves would confirm the existence of cosmic strings. Gravitational waves interact only weakly with matter. This is what makes them difficult to detect. It also means that they can travel freely through the Universe, and are not absorbed or scattered like electromagnetic radiation. It is therefore possible to see to the center of dense systems, like the cores of supernovae or the Galactic Center.
It is also possible to see further back in time than with electromagnetic radiation, as the early universe was opaque to light prior to recombination, but transparent to gravitational waves. The ability of gravitational waves to move freely through matter also means that gravitational-wave detectors, unlike telescopes, are not pointed to observe a single field of view but observe the entire sky. Detectors are more sensitive in some directions than others, which is one reason why it is beneficial to have a network of detectors. Directional resolution is also poor, due to the small number of detectors. In cosmic inflation Cosmic inflation, a hypothesized period when the universe rapidly expanded during the first 10^−36 seconds after the Big Bang, would have given rise to gravitational waves, which would have left a characteristic imprint in the polarization of the CMB radiation. It is possible to calculate the properties of the primordial gravitational waves from measurements of the patterns in the microwave radiation, and use those calculations to learn about the early universe. Development As a young area of research, gravitational-wave astronomy is still in development; however, there is consensus within the astrophysics community that this field will evolve to become an established component of 21st-century multi-messenger astronomy. Gravitational-wave observations complement observations in the electromagnetic spectrum. These waves also promise to yield information in ways not possible via detection and analysis of electromagnetic waves. Electromagnetic waves can be absorbed and re-radiated in ways that make extracting information about the source difficult. Gravitational waves, however, only interact weakly with matter, meaning that they are not scattered or absorbed. This should allow astronomers to view the center of a supernova, stellar nebulae, and even colliding galactic cores in new ways. Ground-based detectors have yielded new information about the inspiral phase and mergers of binary systems of two stellar-mass black holes, and the merger of two neutron stars. They could also detect signals from core-collapse supernovae, and from periodic sources such as pulsars with small deformations. If there is truth to speculation about certain kinds of phase transitions or kink bursts from long cosmic strings in the very early universe (at cosmic times around 10^−25 seconds), these could also be detectable. Space-based detectors like LISA should detect objects such as binaries consisting of two white dwarfs, and AM CVn stars (a white dwarf accreting matter from its binary partner, a low-mass helium star), and also observe the mergers of supermassive black holes and the inspiral of smaller objects (between one and a thousand solar masses) into such black holes. LISA should also be able to listen to the same kind of sources from the early universe as ground-based detectors, but at even lower frequencies and with greatly increased sensitivity. Detecting emitted gravitational waves is a difficult endeavor. It involves ultra-stable high-quality lasers and detectors calibrated with a sensitivity of at least 2·10^−22 Hz^−1/2, as demonstrated at the ground-based detector GEO600. It has also been proposed that, even from large astronomical events such as supernova explosions, these waves are likely to have degraded to vibrations as small as an atomic diameter by the time they reach Earth. Pinpointing where the gravitational waves come from is also a challenge.
However, waves deflected by gravitational lensing, combined with machine learning, could make localization easier and more accurate. Just as the light from the supernova SN Refsdal was detected a second time almost a year after it was first discovered, because gravitational lensing sent some of the light on a different path through the universe, the same approach could be used for gravitational waves. While still at an early stage, a technique similar to the triangulation used by cell phones to determine their location relative to GPS satellites could help astronomers track down the origin of the waves.
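As a rough illustration of how arrival-time differences constrain a source's position, the sketch below (Python; the ~3000 km Hanford–Livingston baseline and a 7 ms delay comparable to GW150914 are assumed, representative values rather than figures from this article) converts a measured delay into the angle between the source direction and the detector baseline. A single detector pair only constrains the source to a ring on the sky, which is one reason adding more detectors sharpens the localization.

```python
import math

C = 299_792_458.0    # speed of light in m/s
BASELINE_M = 3.0e6   # Hanford-Livingston separation, roughly 3000 km (assumed)

def angle_from_delay(delay_s):
    """Angle between the source direction and the detector baseline, inferred
    from the difference in arrival times of the same signal at two detectors.
    Each detector pair only constrains the source to a ring on the sky."""
    cos_theta = C * delay_s / BASELINE_M
    if abs(cos_theta) > 1.0:
        raise ValueError("delay exceeds the light travel time between detectors")
    return math.degrees(math.acos(cos_theta))

# A ~7 ms delay, comparable to GW150914 (seen at Livingston before Hanford)
print(f"{angle_from_delay(7e-3):.1f} degrees from the baseline")  # ~45.6 deg
```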
Physical sciences
Basics
Astronomy
9615240
https://en.wikipedia.org/wiki/Nitazoxanide
Nitazoxanide
Nitazoxanide, sold under the brand name Alinia among others, is a broad-spectrum antiparasitic and broad-spectrum antiviral medication that is used in medicine for the treatment of various helminthic, protozoal, and viral infections. It is indicated for the treatment of infection by Cryptosporidium parvum and Giardia lamblia in immunocompetent individuals and has been repurposed for the treatment of influenza. Nitazoxanide has also been shown to have in vitro antiparasitic activity and clinical treatment efficacy for infections caused by other protozoa and helminths; evidence suggested that it possesses efficacy in treating a number of viral infections as well. Chemically, nitazoxanide is the prototype member of the thiazolides, a class of drugs which are synthetic nitrothiazolyl-salicylamide derivatives with antiparasitic and antiviral activity. Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class. Nitazoxanide tablets were approved as a generic medication in the United States in 2020. Uses Nitazoxanide is an effective first-line treatment for infection by Blastocystis species and is indicated for the treatment of infection by Cryptosporidium parvum or Giardia lamblia in immunocompetent adults and children. It is also an effective treatment option for infections caused by other protozoa and helminths (e.g., Entamoeba histolytica, Hymenolepis nana, Ascaris lumbricoides, and Cyclospora cayetanensis). Chronic hepatitis B Nitazoxanide alone has shown preliminary evidence of efficacy in the treatment of chronic hepatitis B over a one-year course of therapy. Nitazoxanide 500 mg twice daily resulted in a decrease in serum HBV DNA in all of 4 HBeAg-positive patients, with undetectable HBV DNA in 2 of 4 patients, loss of HBeAg in 3 patients, and loss of HBsAg in one patient. Seven of 8 HBeAg-negative patients treated with nitazoxanide 500 mg twice daily had undetectable HBV DNA and 2 had loss of HBsAg. Additionally, nitazoxanide monotherapy in one case and nitazoxanide plus adefovir in another case resulted in undetectable HBV DNA, loss of HBeAg and loss of HBsAg. These preliminary studies showed a higher rate of HBsAg loss than any currently licensed therapy for chronic hepatitis B. The similar mechanism of action of interferon and nitazoxanide suggests that stand-alone nitazoxanide therapy or nitazoxanide in concert with nucleos(t)ide analogs has the potential to increase loss of HBsAg, which is the ultimate end-point of therapy. A formal phase 2 study is being planned for 2009. Chronic hepatitis C Romark initially decided to focus on the possibility of treating chronic hepatitis C with nitazoxanide. The drug garnered interest from the hepatology community after three phase II clinical trials involving the treatment of hepatitis C with nitazoxanide produced positive results for treatment efficacy and similar tolerability to placebo without any signs of toxicity. A 2014 meta-analysis concluded that the previously conducted trials were of low quality and carried a risk of bias. The authors concluded that more randomized trials with low risk of bias are needed to determine whether nitazoxanide can be used as an effective treatment for chronic hepatitis C patients. Contraindications Nitazoxanide is contraindicated only in individuals who have experienced a hypersensitivity reaction to nitazoxanide or the inactive ingredients of a nitazoxanide formulation.
Adverse effects The side effects of nitazoxanide do not significantly differ from a placebo treatment for giardiasis; these symptoms include stomach pain, headache, upset stomach, vomiting, discolored urine, excessive urination, skin rash, itching, fever, flu syndrome, and others. Nitazoxanide does not appear to cause any significant adverse effects when taken by healthy adults. Overdose Information on nitazoxanide overdose is limited. Oral doses of 4 grams in healthy adults do not appear to cause any significant adverse effects. In various animals, the oral LD50 is higher than 10 . Interactions Due to the exceptionally high plasma protein binding (>99.9%) of nitazoxanide's metabolite, tizoxanide, the concurrent use of nitazoxanide with other highly plasma protein-bound drugs with narrow therapeutic indices (e.g., warfarin) increases the risk of drug toxicity. In vitro evidence suggests that nitazoxanide does not affect the CYP450 system. Pharmacology Pharmacodynamics The anti-protozoal activity of nitazoxanide is believed to be due to interference with the pyruvate:ferredoxin oxidoreductase (PFOR) enzyme-dependent electron-transfer reaction that is essential to anaerobic energy metabolism. PFOR inhibition may also contribute to its activity against anaerobic bacteria. It has also been shown to have activity against influenza A virus in vitro. The mechanism appears to be by selectively blocking the maturation of the viral hemagglutinin at a stage preceding resistance to endoglycosidase H digestion. This impairs hemagglutinin intracellular trafficking and insertion of the protein into the host plasma membrane. Nitazoxanide modulates a variety of other pathways in vitro, including glutathione-S-transferase and glutamate-gated chloride ion channels in nematodes, respiration and other pathways in bacteria and cancer cells, and viral and host transcriptional factors. Pharmacokinetics Following oral administration, nitazoxanide is rapidly hydrolyzed to the pharmacologically active metabolite, tizoxanide, which is 99% protein bound. Tizoxanide is then glucuronide conjugated into the metabolite tizoxanide glucuronide. Peak plasma concentrations of the metabolites tizoxanide and tizoxanide glucuronide are observed 1–4 hours after oral administration of nitazoxanide, whereas nitazoxanide itself is not detected in blood plasma. Roughly of an oral dose of nitazoxanide is excreted as its metabolites in feces, while the remainder of the dose is excreted in urine. Tizoxanide is excreted in the urine, bile and feces. Tizoxanide glucuronide is excreted in urine and bile. Chemistry Acetic acid [2-[(5-nitro-2-thiazolyl)amino]-oxomethyl]phenyl ester is a carboxylic ester and a member of the benzamides. It is functionally related to a salicylamide. Nitazoxanide is the prototype member of the thiazolides, a drug class of structurally related broad-spectrum antiparasitic compounds. It is a broad-spectrum anti-infective drug that significantly modulates the survival, growth, and proliferation of a range of extracellular and intracellular protozoa, helminths, anaerobic and microaerophilic bacteria, in addition to viruses. Nitazoxanide is a light yellow crystalline powder. It is poorly soluble in ethanol and practically insoluble in water. The molecular formula of nitazoxanide is C12H9N3O5S and its molecular weight is 307.28 g/mol.
Tizoxanide, an active metabolite of nitazoxanide in humans, is also an antiparasitic drug of the thiazolide class. IUPAC Name: [2-[(5-nitro-1,3-thiazol-2-yl)carbamoyl]phenyl] acetate Canonical SMILES: CC(=O)OC1=CC=CC=C1C(=O)NC2=NC=C(S2)[N+](=O)[O-] MeSH Synonyms: 1) 2-(acetyloxy)-N-(5-nitro-2-thiazolyl)benzamide 2) Alinia 3) Colufase 4) Cryptaz 5) Daxon 6) Heliton 7) Ntz 8) Taenitaz History Nitazoxanide was originally discovered in the 1980s by Jean-François Rossignol at the Pasteur Institute. Initial studies demonstrated activity versus tapeworms. In vitro studies demonstrated much broader activity. Dr. Rossignol co-founded Romark Laboratories, with the goal of bringing nitazoxanide to market as an anti-parasitic drug. Initial studies in the USA were conducted in collaboration with Unimed Pharmaceuticals, Inc. (Marietta, GA) and focused on development of the drug for treatment of cryptosporidiosis in AIDS. Controlled trials began shortly after the advent of effective anti-retroviral therapies. The trials were abandoned due to poor enrollment and the FDA rejected an application based on uncontrolled studies. Subsequently, Romark launched a series of controlled trials. A placebo-controlled study of nitazoxanide in cryptosporidiosis demonstrated significant clinical improvement in adults and children with mild illness. Among malnourished children in Zambia with chronic cryptosporidiosis, a three-day course of therapy led to clinical and parasitologic improvement and improved survival. In Zambia and in a study conducted in Mexico, nitazoxanide was not successful in the treatment of cryptosporidiosis in advanced infection with human immunodeficiency virus at the doses used. However, it was effective in patients with higher CD4 counts. In treatment of giardiasis, nitazoxanide was superior to placebo and comparable to metronidazole. Nitazoxanide was successful in the treatment of metronidazole-resistant giardiasis. Studies have suggested efficacy in the treatment of cyclosporiasis, isosporiasis, and amebiasis. Recent studies have also found it to be effective against beef tapeworm (Taenia saginata). Pharmaceutical products Dosage forms Nitazoxanide is currently available in two oral dosage forms: a tablet (500 mg) and an oral suspension (100 mg per 5 ml when reconstituted). An extended-release tablet (675 mg) has been used in clinical trials for chronic hepatitis C; however, this form is not currently marketed or available for prescription. Brand names Nitazoxanide is sold under the brand names Adonid, Alinia, Allpar, Annita, Celectan, Colufase, Daxon, Dexidex, Diatazox, Kidonax, Mitafar, Nanazoxid, Parazoxanide, Netazox, Niazid, Nitamax, Nitax, Nitaxide, Nitaz, Nizonide, Pacovanton, Paramix, Toza, and Zox. Research Nitazoxanide was in phase 3 clinical trials for the treatment of influenza, due to its inhibitory effect on a broad range of influenza virus subtypes and efficacy against influenza viruses that are resistant to neuraminidase inhibitors like oseltamivir. Nitazoxanide is also being researched as a potential treatment for COVID-19, chronic hepatitis B, chronic hepatitis C, rotavirus and norovirus gastroenteritis.
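As an illustrative cross-check of the identifiers listed above, the following minimal sketch (assuming the open-source RDKit cheminformatics library is installed; not part of the original article) re-derives the molecular formula and molecular weight from the canonical SMILES:

```python
# Requires the open-source RDKit package (e.g., pip install rdkit)
from rdkit import Chem
from rdkit.Chem import Descriptors, rdMolDescriptors

smiles = "CC(=O)OC1=CC=CC=C1C(=O)NC2=NC=C(S2)[N+](=O)[O-]"  # nitazoxanide
mol = Chem.MolFromSmiles(smiles)

print(rdMolDescriptors.CalcMolFormula(mol))   # expected: C12H9N3O5S
print(round(Descriptors.MolWt(mol), 2))       # expected: ~307.28 g/mol
```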
Biology and health sciences
Antiparasitic
Health
12935
https://en.wikipedia.org/wiki/Gram%20stain
Gram stain
Gram stain (Gram staining or Gram's method) is a method of staining used to classify bacterial species into two large groups: gram-positive bacteria and gram-negative bacteria. It may also be used to diagnose a fungal infection. The name comes from the Danish bacteriologist Hans Christian Gram, who developed the technique in 1884. Gram staining differentiates bacteria by the chemical and physical properties of their cell walls. Gram-positive cells have a thick layer of peptidoglycan in the cell wall that retains the primary stain, crystal violet. Gram-negative cells have a thinner peptidoglycan layer that allows the crystal violet to wash out on addition of ethanol. They are stained pink or red by the counterstain, commonly safranin or fuchsine. Lugol's iodine solution is always added after addition of crystal violet to form a stable complex with crystal violet that strengthens the bonds of the stain with the cell wall. Gram staining is almost always the first step in the identification of a bacterial group. While Gram staining is a valuable diagnostic tool in both clinical and research settings, not all bacteria can be definitively classified by this technique. This gives rise to gram-variable and gram-indeterminate groups. History The method is named after its inventor, the Danish scientist Hans Christian Gram (1853–1938), who developed the technique while working with Carl Friedländer in the morgue of the city hospital in Berlin in 1884. Gram devised his technique not for the purpose of distinguishing one type of bacterium from another but to make bacteria more visible in stained sections of lung tissue. Gram noticed that some bacterial cells possessed noticeable resistance to decolorization. Based on these observations, Gram developed the initial gram staining procedure, making use of Ehrlich's aniline-gentian violet, Lugol's iodine, absolute alcohol for decolorization, and Bismarck brown for counterstain. He published his method in 1884, and included in his short report the observation that the typhus bacillus did not retain the stain. Gram did not initially make the distinction between Gram-negative and Gram-positive bacteria using his procedure. Uses Gram staining is a bacteriological laboratory technique used to differentiate bacterial species into two large groups (gram-positive and gram-negative) based on the physical properties of their cell walls. Gram staining can also be used to diagnose a fungal infection. Gram staining is not used to classify archaea, since these microorganisms yield widely varying responses that do not follow their phylogenetic groups. Some organisms are gram-variable (meaning they may stain either negative or positive); some are not stained with either dye used in the Gram technique and are not seen. Medical Gram stains are performed on body fluid or biopsy when infection is suspected. Gram stains yield results much more quickly than culturing, and are especially important when infection would make an important difference in the patient's treatment and prognosis; examples are cerebrospinal fluid for meningitis and synovial fluid for septic arthritis. Staining mechanism Gram-positive bacteria have a thick mesh-like cell wall made of peptidoglycan (50–90% of cell envelope), and as a result are stained purple by crystal violet, whereas gram-negative bacteria have a thinner layer (10% of cell envelope), so do not retain the purple stain and are counter-stained pink by safranin.
There are four basic steps of the Gram stain: (1) applying a primary stain (crystal violet) to a heat-fixed smear of a bacterial culture (heat fixation kills some bacteria but is mostly used to affix the bacteria to the slide so that they do not rinse out during the staining procedure); (2) the addition of iodine, which binds to crystal violet and traps it in the cell; (3) rapid decolorization with ethanol or acetone; and (4) counterstaining with safranin. Carbol fuchsin is sometimes substituted for safranin since it more intensely stains anaerobic bacteria, but it is less commonly used as a counterstain. Crystal violet (CV) dissociates in aqueous solutions into CV+ and chloride (Cl−) ions. These ions penetrate the cell wall of both gram-positive and gram-negative cells. The CV+ ion interacts with negatively charged components of bacterial cells and stains the cells purple. Iodide (I− or I3−) interacts with CV+ and forms large complexes of crystal violet and iodine (CV–I) within the inner and outer layers of the cell. Iodine is often referred to as a mordant, but is a trapping agent that prevents the removal of the CV–I complex and, therefore, colors the cell. When a decolorizer such as alcohol or acetone is added, it interacts with the lipids of the cell membrane. A gram-negative cell loses its outer lipopolysaccharide membrane, and the inner peptidoglycan layer is left exposed. The CV–I complexes are washed from the gram-negative cell along with the outer membrane. In contrast, a gram-positive cell becomes dehydrated from an ethanol treatment. The large CV–I complexes become trapped within the gram-positive cell due to the multilayered nature of its peptidoglycan. The decolorization step is critical and must be timed correctly; the crystal violet stain is removed from both gram-positive and negative cells if the decolorizing agent is left on too long (a matter of seconds). After decolorization, the gram-positive cell remains purple and the gram-negative cell loses its purple color. Counterstain, which is usually positively charged safranin or basic fuchsine, is applied last to give decolorized gram-negative bacteria a pink or red color. Both gram-positive bacteria and gram-negative bacteria pick up the counterstain. The counterstain, however, is unseen on gram-positive bacteria because of the darker crystal violet stain. Examples Gram-positive bacteria Gram-positive bacteria generally have a single membrane (monoderm) surrounded by a thick peptidoglycan. This rule is followed by two phyla: Bacillota (except for the classes Mollicutes and Negativicutes) and the Actinomycetota. In contrast, members of the Chloroflexota (green non-sulfur bacteria) are monoderms but possess a thin or absent (class Dehalococcoidetes) peptidoglycan and can stain negative, positive or indeterminate; members of the Deinococcota stain positive but are diderms with a thick peptidoglycan. The cell wall's strength is enhanced by teichoic acids, glycopolymeric substances embedded within the peptidoglycan. Teichoic acids play multiple roles, such as generating the cell's net negative charge, contributing to cell wall rigidity and shape maintenance, and aiding in cell division and resistance to various stressors, including heat and salt. Despite the density of the peptidoglycan layer, it remains relatively porous, allowing most substances to permeate. For larger nutrients, Gram-positive bacteria utilize exoenzymes, secreted extracellularly to break down macromolecules outside the cell.
Historically, the gram-positive forms made up the phylum Firmicutes, a name now used for the largest group. It includes many well-known genera such as Lactobacillus, Bacillus, Listeria, Staphylococcus, Streptococcus, Enterococcus, and Clostridium. It has also been expanded to include the Mollicutes, bacteria such as Mycoplasma and Thermoplasma that lack cell walls and so cannot be Gram-stained, but are derived from such forms. Some bacteria have cell walls which are particularly adept at retaining stains. These will appear positive by Gram stain even though they are not closely related to other gram-positive bacteria. These are called acid-fast bacteria, and can only be differentiated from other gram-positive bacteria by special staining procedures. Gram-negative bacteria Gram-negative bacteria generally possess a thin layer of peptidoglycan between two membranes (diderm). Lipopolysaccharide (LPS) is the most abundant antigen on the cell surface of most gram-negative bacteria, contributing up to 80% of the outer membrane of E. coli and Salmonella. These LPS molecules, consisting of the O-antigen or O-polysaccharide, core polysaccharide, and lipid A, serve multiple functions including contributing to the cell's negative charge and protecting against certain chemicals. LPS's role is critical in host-pathogen interactions, with the O-antigen eliciting an immune response and lipid A acting as an endotoxin. Additionally, the outer membrane acts as a selective barrier, regulated by porins, transmembrane proteins forming pores that allow specific molecules to pass. The space between the cell membrane and the outer membrane, known as the periplasm, contains periplasmic enzymes for nutrient processing. A significant structural component linking the peptidoglycan layer and the outer membrane is Braun's lipoprotein, which provides additional stability and strength to the bacterial cell wall. Most bacterial phyla are gram-negative, including the cyanobacteria, green sulfur bacteria, and most Pseudomonadota (exceptions being some members of the Rickettsiales and the insect-endosymbionts of the Enterobacteriales). Gram-variable and gram-indeterminate bacteria Some bacteria, after staining with the Gram stain, yield a gram-variable pattern: a mix of pink and purple cells are seen. In cultures of Bacillus, Butyrivibrio, and Clostridium, a decrease in peptidoglycan thickness during growth coincides with an increase in the number of cells that stain gram-negative. In addition, in all bacteria stained using the Gram stain, the age of the culture may influence the results of the stain. Gram-indeterminate bacteria do not respond predictably to Gram staining and, therefore, cannot be determined as either gram-positive or gram-negative. Examples include many species of Mycobacterium, including Mycobacterium bovis, Mycobacterium leprae and Mycobacterium tuberculosis, the latter two of which are the causative agents of leprosy and tuberculosis, respectively. Bacteria of the genus Mycoplasma lack a cell wall around their cell membranes, which means they do not stain by Gram's method and are resistant to the antibiotics that target cell wall synthesis. Orthographic note The term Gram staining is derived from the surname of Hans Christian Gram; the eponym (Gram) is therefore capitalized but not the common noun (stain) as is usual for scientific terms. 
The initial letters of gram-positive and gram-negative, which are eponymous adjectives, can be either capital G or lowercase g, depending on what style guide (if any) governs the document being written. Lowercase style is used by the US Centers for Disease Control and Prevention and other style regimens such as the AMA style. Dictionaries may use lowercase, uppercase, or both. Uppercase Gram-positive or Gram-negative usage is also common in many scientific journal articles and publications. When articles are submitted to journals, each journal may or may not apply house style to the postprint version. Preprint versions contain whichever style the author happened to use. Even style regimens that use lowercase for the adjectives gram-positive and gram-negative still typically use capital for Gram stain.
Biology and health sciences
Basics_3
Biology
12936
https://en.wikipedia.org/wiki/Gram-positive%20bacteria
Gram-positive bacteria
In bacteriology, gram-positive bacteria are bacteria that give a positive result in the Gram stain test, which is traditionally used to quickly classify bacteria into two broad categories according to their type of cell wall. The Gram stain is used by microbiologists to place bacteria into two main categories, Gram-positive (+) and Gram-negative (-). Gram-positive bacteria have a thick layer of peptidoglycan within the cell wall, and Gram-negative bacteria have a thin layer of peptidoglycan. Gram-positive bacteria take up the crystal violet stain used in the test, and then appear to be purple-coloured when seen through an optical microscope. This is because the thick layer of peptidoglycan in the bacterial cell wall retains the stain after it is washed away from the rest of the sample, in the decolorization stage of the test. Conversely, gram-negative bacteria cannot retain the violet stain after the decolorization step; alcohol used in this stage degrades the outer membrane of gram-negative cells, making the cell wall more porous and incapable of retaining the crystal violet stain. Their peptidoglycan layer is much thinner and sandwiched between an inner cell membrane and a bacterial outer membrane, causing them to take up the counterstain (safranin or fuchsine) and appear red or pink. Despite their thicker peptidoglycan layer, gram-positive bacteria are more receptive to certain cell wall–targeting antibiotics than gram-negative bacteria, due to the absence of the outer membrane. Characteristics In general, the following characteristics are present in gram-positive bacteria: a cytoplasmic lipid membrane; a thick peptidoglycan layer; teichoic acids and lipoids, forming lipoteichoic acids, which serve as chelating agents and also play a role in certain types of adherence; peptidoglycan chains cross-linked to form rigid cell walls by the bacterial enzyme DD-transpeptidase; and a much smaller volume of periplasm than that in gram-negative bacteria. Only some species have a capsule, usually consisting of polysaccharides. Also, only some species are flagellates, and when they do have flagella, they have only two basal body rings to support them, whereas gram-negative bacteria have four. Both gram-positive and gram-negative bacteria commonly have a surface layer called an S-layer. In gram-positive bacteria, the S-layer is attached to the peptidoglycan layer. Gram-negative bacteria's S-layer is attached directly to the outer membrane. Specific to gram-positive bacteria is the presence of teichoic acids in the cell wall. Some of these are lipoteichoic acids, which have a lipid component in the cell membrane that can assist in anchoring the peptidoglycan. Classification Along with cell shape, Gram staining is a rapid method used to differentiate bacterial species. Such staining, together with growth requirement and antibiotic susceptibility testing, and other macroscopic and physiologic tests, forms a basis for practical classification and subdivision of the bacteria (e.g., see pre-1990 versions of Bergey's Manual of Systematic Bacteriology). Historically, the kingdom Monera was divided into four divisions based primarily on Gram staining: Bacillota (positive in staining), Gracilicutes (negative in staining), Mollicutes (neutral in staining) and Mendocutes (variable in staining).
Based on 16S ribosomal RNA phylogenetic studies by the late microbiologist Carl Woese and collaborators at the University of Illinois, the monophyly of the gram-positive bacteria was challenged, with major implications for the therapeutic and general study of these organisms. Based on molecular studies of the 16S sequences, Woese recognised twelve bacterial phyla. Two of these were gram-positive and were divided based on the proportion of guanine and cytosine in their DNA. The high G + C phylum was made up of the Actinobacteria, and the low G + C phylum contained the Firmicutes. The Actinomycetota include the Corynebacterium, Mycobacterium, Nocardia and Streptomyces genera. The (low G + C) Bacillota have a 45–60% GC content, which is lower than that of the Actinomycetota. Importance of the outer cell membrane in bacterial classification Although bacteria are traditionally divided into two main groups, gram-positive and gram-negative, based on their Gram stain retention property, this classification system is ambiguous as it refers to three distinct aspects (staining result, envelope organization, taxonomic group), which do not necessarily coalesce for some bacterial species. The gram-positive and gram-negative staining response is also not a reliable characteristic as these two kinds of bacteria do not form phylogenetically coherent groups. However, although Gram staining response is an empirical criterion, its basis lies in the marked differences in the ultrastructure and chemical composition of the bacterial cell wall, marked by the absence or presence of an outer lipid membrane. All gram-positive bacteria are bounded by a single-unit lipid membrane, and, in general, they contain a thick layer (20–80 nm) of peptidoglycan responsible for retaining the Gram stain. A number of other bacteria—that are bounded by a single membrane, but stain gram-negative due to either lack of the peptidoglycan layer, as in the mycoplasmas, or their inability to retain the Gram stain because of their cell wall composition—also show close relationship to the gram-positive bacteria. For the bacterial cells bounded by a single cell membrane, the term monoderm bacteria has been proposed. In contrast to gram-positive bacteria, all typical gram-negative bacteria are bounded by a cytoplasmic membrane and an outer cell membrane; they contain only a thin layer of peptidoglycan (2–3 nm) between these membranes. The presence of inner and outer cell membranes defines a new compartment in these cells: the periplasmic space or the periplasmic compartment. These bacteria have been designated as diderm bacteria. The distinction between the monoderm and diderm bacteria is supported by conserved signature indels in a number of important proteins (viz. DnaK, GroEL). Of these two structurally distinct groups of bacteria, monoderms are indicated to be ancestral. Based upon a number of observations including that the gram-positive bacteria are the major producers of antibiotics and that, in general, gram-negative bacteria are resistant to them, it has been proposed that the outer cell membrane in gram-negative bacteria (diderms) has evolved as a protective mechanism against antibiotic selection pressure. Some bacteria, such as Deinococcus, which stain gram-positive due to the presence of a thick peptidoglycan layer and also possess an outer cell membrane, are suggested as intermediates in the transition between monoderm (gram-positive) and diderm (gram-negative) bacteria.
The diderm bacteria can also be further differentiated between simple diderms lacking lipopolysaccharide, the archetypical diderm bacteria where the outer cell membrane contains lipopolysaccharide, and the diderm bacteria whose outer cell membrane is made up of mycolic acid. Exceptions In general, gram-positive bacteria are monoderms and have a single lipid bilayer whereas gram-negative bacteria are diderms and have two bilayers. Exceptions include: Some taxa lack peptidoglycan (such as the class Mollicutes, some members of the Rickettsiales, and the insect-endosymbionts of the Enterobacteriales) and are gram-indeterminate. The Deinococcota stain gram-positive, although they are structurally similar to gram-negative bacteria with two layers. The Chloroflexota have a single layer, yet (with some exceptions) stain negative. Two phyla related to the Chloroflexota, the TM7 clade and the Ktedonobacteria, are also monoderms. Some Bacillota species are not gram-positive. The class Negativicutes, which includes Selenomonas, is diderm and stains gram-negative. Additionally, a number of bacterial taxa (viz. Negativicutes, Fusobacteriota, Synergistota, and Elusimicrobiota) that are either part of the phylum Bacillota or branch in its proximity are found to possess a diderm cell structure. However, a conserved signature indel (CSI) in the HSP60 (GroEL) protein distinguishes all traditional phyla of gram-negative bacteria (e.g., Pseudomonadota, Aquificota, Chlamydiota, Bacteroidota, Chlorobiota, "Cyanobacteria", Fibrobacterota, Verrucomicrobiota, Planctomycetota, Spirochaetota, Acidobacteriota, etc.) from these other atypical diderm bacteria, as well as other phyla of monoderm bacteria (e.g., Actinomycetota, Bacillota, Thermotogota, Chloroflexota, etc.). The presence of this CSI in all sequenced species of conventional LPS (lipopolysaccharide)-containing gram-negative bacterial phyla provides evidence that these phyla of bacteria form a monophyletic clade and that no loss of the outer membrane from any species from this group has occurred. Pathogenicity In the classical sense, six gram-positive genera are typically pathogenic in humans. Two of these, Streptococcus and Staphylococcus, are cocci (sphere-shaped). The remaining organisms are bacilli (rod-shaped) and can be subdivided based on their ability to form spores. The non-spore formers are Corynebacterium and Listeria (a coccobacillus), whereas Bacillus and Clostridium produce spores. The spore-forming bacteria can again be divided based on their respiration: Bacillus is a facultative anaerobe, while Clostridium is an obligate anaerobe. Also, Rathayibacter, Leifsonia, and Clavibacter are three gram-positive genera that cause plant disease. Gram-positive bacteria are capable of causing serious and sometimes fatal infections in newborn infants. Novel species of clinically relevant gram-positive bacteria also include Catabacter hongkongensis, which is an emerging pathogen belonging to Bacillota. Bacterial transformation Transformation is one of three processes for horizontal gene transfer, in which exogenous genetic material passes from a donor bacterium to a recipient bacterium, the other two processes being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of donor bacterial DNA by a bacteriophage virus into a recipient host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium.
As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between gram-positive and gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. Transformation among gram-positive bacteria has been studied in medically important species such as Streptococcus pneumoniae, Streptococcus mutans, Staphylococcus aureus and Streptococcus sanguinis and in gram-positive soil bacteria Bacillus subtilis and Bacillus cereus. Orthographic note The adjectives gram-positive and gram-negative derive from the surname of Hans Christian Gram; as eponymous adjectives, their initial letter can be either capital G or lower-case g, depending on which style guide (e.g., that of the CDC), if any, governs the document being written.
Biology and health sciences
Gram-positive bacteria
Plants
12937
https://en.wikipedia.org/wiki/Gram-negative%20bacteria
Gram-negative bacteria
Gram-negative bacteria are bacteria that, unlike gram-positive bacteria, do not retain the crystal violet stain used in the Gram staining method of bacterial differentiation. Their defining characteristic is that their cell envelope consists of a thin peptidoglycan cell wall sandwiched between an inner (cytoplasmic) membrane and an outer membrane. These bacteria are found in all environments that support life on Earth. Within this category, notable species include the model organism Escherichia coli, along with various pathogenic bacteria, such as Pseudomonas aeruginosa, Chlamydia trachomatis, and Yersinia pestis. They pose significant challenges in the medical field due to their outer membrane, which acts as a protective barrier against numerous antibiotics (including penicillin), detergents that would normally damage the inner cell membrane, and the antimicrobial enzyme lysozyme produced by animals as part of their innate immune system. Furthermore, the outer leaflet of this membrane contains a complex lipopolysaccharide (LPS) whose lipid A component can trigger a toxic reaction when the bacteria are lysed by immune cells. This reaction may lead to septic shock, resulting in low blood pressure, respiratory failure, reduced oxygen delivery, and lactic acidosis. Several classes of antibiotics have been developed to target gram-negative bacteria, including aminopenicillins, ureidopenicillins, cephalosporins, beta-lactam–beta-lactamase inhibitor combinations (such as piperacillin-tazobactam), folate antagonists, quinolones, and carbapenems. Many of these antibiotics also cover gram-positive bacteria. The antibiotics that specifically target gram-negative organisms include aminoglycosides, monobactams (such as aztreonam), and ciprofloxacin. Characteristics Conventional gram-negative (LPS-diderm) bacteria display the following characteristics: an inner (cytoplasmic) cell membrane; a thin peptidoglycan layer (much thicker in gram-positive bacteria); an outer membrane containing lipopolysaccharides (LPS, which consists of lipid A, core polysaccharide, and O antigen) in its outer leaflet and phospholipids in the inner leaflet; porins in the outer membrane, which act like pores for particular molecules; a space between the outer membrane and the cytoplasmic membrane filled with a concentrated gel-like substance called periplasm; an S-layer directly attached to the outer membrane rather than to the peptidoglycan; flagella, if present, with four supporting rings instead of two; no teichoic acids or lipoteichoic acids; lipoproteins attached to the polysaccharide backbone; in some, Braun's lipoprotein, which serves as a covalent link between the outer membrane and the peptidoglycan chain; and, with few exceptions, no spore formation. Classification Along with cell shape, Gram staining is a rapid diagnostic tool and was once used to group species at the subdivision level of Bacteria. Historically, the kingdom Monera was divided into four divisions based on Gram staining: Firmicutes (+), Gracilicutes (−), Mollicutes (0) and Mendocutes (var.). Since 1987, the monophyly of the gram-negative bacteria has been disproven with molecular studies. However, some authors, such as Cavalier-Smith, still treat them as a monophyletic taxon (though not a clade; his definition of monophyly requires a single common ancestor but does not require holophyly, the property that all descendants be encompassed by the taxon) and refer to the group as a subkingdom "Negibacteria".
Taxonomy Bacteria are traditionally classified based on their Gram-staining response into the gram-positive and gram-negative bacteria. Having just one membrane, the gram-positive bacteria are also known as monoderm bacteria, while gram-negative bacteria, having two membranes, are also known as diderm bacteria. It was traditionally thought that the groups represent lineages, i.e., the extra membrane only evolved once, such that gram-negative bacteria are more closely related to one another than to any gram-positive bacteria. While this is often true, the classification system breaks down in some cases, with lineage groupings not matching the staining result. Thus, Gram staining cannot be reliably used to assess familial relationships of bacteria. Nevertheless, staining often gives reliable information about the composition of the cell membrane, distinguishing between the presence or absence of an outer lipid membrane. Of these two structurally distinct groups of prokaryotic organisms, monoderm prokaryotes are thought to be ancestral. Based upon a number of different observations, including that the gram-positive bacteria are the most sensitive to antibiotics and that the gram-negative bacteria are, in general, resistant to antibiotics, it has been proposed that the outer cell membrane in gram-negative bacteria (diderms) evolved as a protective mechanism against antibiotic selection pressure. Some bacteria, such as Deinococcus, which stain gram-positive due to the presence of a thick peptidoglycan layer but also possess an outer cell membrane, are suggested as intermediates in the transition between monoderm (gram-positive) and diderm (gram-negative) bacteria. The diderm bacteria can also be further differentiated between simple diderms lacking lipopolysaccharide (LPS); the archetypical diderm bacteria, in which the outer cell membrane contains lipopolysaccharide; and the diderm bacteria in which the outer cell membrane is made up of mycolic acid (e.g., Mycobacterium). The conventional LPS-diderm group of gram-negative bacteria (e.g., Pseudomonadota, Aquificota, Chlamydiota, Bacteroidota, Chlorobiota, "Cyanobacteria", Fibrobacterota, Verrucomicrobiota, Planctomycetota, Spirochaetota, Acidobacteriota; "Hydrobacteria") are uniquely identified by a few conserved signature indels (CSIs) in the HSP60 (GroEL) protein. In addition, a number of bacterial taxa (including Negativicutes, Fusobacteriota, Synergistota, and Elusimicrobiota) that are either part of the phylum Bacillota (a monoderm group) or branch in its proximity are also found to possess a diderm cell structure. They lack the GroEL signature. The presence of this CSI in all sequenced species of conventional lipopolysaccharide-containing gram-negative bacterial phyla provides evidence that these phyla of bacteria form a monophyletic clade and that no loss of the outer membrane from any species from this group has occurred. Example species The proteobacteria are a major superphylum of gram-negative bacteria, including E. coli, Salmonella, Shigella, and other Enterobacteriaceae, Pseudomonas, Moraxella, Helicobacter, Stenotrophomonas, Bdellovibrio, acetic acid bacteria, Legionella, etc. Other notable groups of gram-negative bacteria include the cyanobacteria, spirochaetes, green sulfur, and green non-sulfur bacteria.
Medically relevant gram-negative diplococci include the four types that cause a sexually transmitted disease (Neisseria gonorrhoeae), a meningitis (Neisseria meningitidis), and respiratory symptoms (Moraxella catarrhalis). The coccobacillus Haemophilus influenzae is another medically relevant coccal type. Medically relevant gram-negative bacilli include a multitude of species. Some of them cause primarily respiratory problems (Klebsiella pneumoniae, Legionella pneumophila, Pseudomonas aeruginosa), primarily urinary problems (Escherichia coli, Proteus mirabilis, Enterobacter cloacae, Serratia marcescens), and primarily gastrointestinal problems (Helicobacter pylori, Salmonella enteritidis, Salmonella typhi). Gram-negative bacteria associated with hospital-acquired infections include Acinetobacter baumannii, which causes bacteremia, secondary meningitis, and ventilator-associated pneumonia in hospital intensive-care units. Bacterial transformation Transformation is one of three processes for horizontal gene transfer, in which exogenous genetic material passes from one bacterium to another, the other two being conjugation (transfer of genetic material between two bacterial cells in direct contact) and transduction (injection of foreign DNA by a bacteriophage virus into the host bacterium). In transformation, the genetic material passes through the intervening medium, and uptake is completely dependent on the recipient bacterium. As of 2014 about 80 species of bacteria were known to be capable of transformation, about evenly divided between gram-positive and gram-negative bacteria; the number might be an overestimate since several of the reports are supported by single papers. Transformation has been studied in medically important gram-negative bacteria species such as Helicobacter pylori, Legionella pneumophila, Neisseria meningitidis, Neisseria gonorrhoeae, Haemophilus influenzae and Vibrio cholerae. It has also been studied in gram-negative species found in soil such as Pseudomonas stutzeri, Acinetobacter baylyi, and gram-negative plant pathogens such as Ralstonia solanacearum and Xylella fastidiosa. Role in disease One of the several unique characteristics of gram-negative bacteria is the structure of the bacterial outer membrane. The outer leaflet of this membrane contains lipopolysaccharide (LPS), whose lipid A portion acts as an endotoxin. If gram-negative bacteria enter the circulatory system, LPS can trigger an innate immune response, activating the immune system and producing cytokines (hormonal regulators). This leads to inflammation and can cause a toxic reaction, resulting in fever, an increased respiratory rate, and low blood pressure. That is why some infections with gram-negative bacteria can lead to life-threatening septic shock. The outer membrane protects the bacteria from several antibiotics, dyes, and detergents that would normally damage either the inner membrane or the cell wall (made of peptidoglycan). The outer membrane provides these bacteria with resistance to lysozyme and penicillin. The periplasmic space (space between the two cell membranes) also contains enzymes which break down or modify antibiotics. Drugs commonly used to treat gram-negative infections include amino, carboxy and ureido penicillins (ampicillin, amoxicillin, piperacillin, ticarcillin). These drugs may be combined with beta-lactamase inhibitors to combat the presence of enzymes that can digest these drugs (known as beta-lactamases) in the periplasmic space.
Other classes of drugs with a gram-negative spectrum include cephalosporins, monobactams (aztreonam), aminoglycosides, quinolones, macrolides, chloramphenicol, folate antagonists, and carbapenems. Orthographic note The adjectives gram-positive and gram-negative derive from the surname of Hans Christian Gram, a Danish bacteriologist; as eponymous adjectives, their initial letter can be either capital G or lower-case g, depending on which style guide (e.g., that of the CDC), if any, governs the document being written. This is further explained at Gram staining § Orthographic note.
Biology and health sciences
Gram-negative bacteria
Plants
12938
https://en.wikipedia.org/wiki/Greyhound
Greyhound
The English Greyhound, or simply the Greyhound, is a breed of dog, a sighthound which has been bred for coursing, greyhound racing and hunting. Since the rise in large-scale adoption of retired racing Greyhounds, the breed has seen a resurgence in popularity as a family pet. Greyhounds are defined as a tall, muscular, smooth-coated, "S-shaped" type of sighthound with a long tail and tough feet. Greyhounds are a separate breed from other related sighthounds, such as the Italian greyhound. The Greyhound's combination of long, powerful legs, deep chest, flexible spine, and slim build allows it to reach average race speeds exceeding . A racing greyhound can reach a full speed of at least 47 mph. However, the speeds at which they most commonly win races are 58–61 km/h (16–17 m/s). Its maximal speed is attained whether running on a straight or a bending track. Appearance Males are usually tall at the withers, and weigh on average . Females tend to be smaller, with shoulder heights ranging from and weights from , although weights can be above or below these averages. Greyhounds have very short fur, which is easy to maintain. There are approximately 30 recognized color forms, of which variations of white, brindle, fawn (pale tan to dark deer-red), black, red, and blue (gray) can appear uniquely or in combination. Greyhounds are dolichocephalic, with a skull which is relatively long in comparison to its breadth, and an elongated muzzle. Temperament Greyhounds live most happily as pets in quiet environments. They do well in families with children, as long as the children are taught to treat the dog properly with politeness and appropriate respect. Greyhounds have a sensitive nature, and gentle commands work best as training methods. Occasionally, a Greyhound may bark; however, they are generally not barkers, which is beneficial in suburban environments, and they are usually as friendly to strangers as they are with their own families. A 2008 University of Pennsylvania study found that Greyhounds are one of the least aggressive dog breeds towards strangers, owners, and other dogs. However, Greyhounds can be prone to sleep startle/sleep aggression if suddenly disturbed while napping. Owners can encounter this problem, as many greyhounds sleep with their eyes at least partially open, appearing awake. A survey of people adopting rescue Greyhounds found that Greyhound adoptions have higher short-term success than shelter adoptions. The survey also found reported hyperactivity levels to be below those of shelter dogs. Greyhounds tend to be outgoing, happy and sociable with people and seem to relish human contact, even following owners from room to room at home (known colloquially as being a "Velcro dog"). Small animals including cats may be the subject of prey-driven behaviour by Greyhounds. Sport Coursing The original primary use of Greyhounds, both in the British Isles and on the Continent of Europe, was in the coursing of deer for meat and sport; later, specifically in Britain, they specialized in competition hare coursing. Some Greyhounds are still used for coursing, although artificial lure sports like lure coursing and racing are far more common and popular. Many leading 300- to 550-yard sprinters have bloodlines traceable back through Irish sires, within a few generations of racers that won events such as the Irish Coursing Derby or the Irish Cup. Racing Until the early 20th century, Greyhounds were principally bred and trained for hunting and coursing. 
During the 1920s, modern greyhound racing was introduced into the United States, England (1926), Northern Ireland (1927), Scotland (1927), and the Republic of Ireland (1927). Australia also has a significant racing culture. In the United States, aside from professional racing, many Greyhounds enjoy success on the amateur race track. Organizations like the Large Gazehound Racing Association (LGRA) and the National Oval Track Racing Association (NOTRA) provide opportunities for Greyhounds to compete. Companion Historically, the Greyhound has, since its first appearance as a hunting type and breed, enjoyed a specific degree of fame and definition in Western literature, heraldry and art as the most elegant or noble companion and hunter of the canine world. In modern times, the professional racing industry, with its large numbers of track-bred greyhounds, as well as international adoption programs aimed at re-homing dogs has redefined the breed as a sporting dog that will supply friendly companionship in its retirement. This has been prevalent in recent years due to track closures in the United States. Outside the racing industry and coursing community, the Kennel Clubs' registered breed still enjoys a modest following as a show dog and pet. Health and physiology A 2024 UK study found a life expectancy of 11.5 years for the breed compared to an average of 12.7 for purebreeds and 12 for crossbreeds. A 2005 Swedish study of insurance data found 60% of Greyhounds died by the age of 10, higher than the overall rate of 35% of dogs dying by the age of 10. The speed of a Greyhound is due to its light but muscular build, large heart, highest percentage of oxidative–glycolytic fast twitch muscle fibers (Type IIa) of any breed, double suspension gallop, and extreme flexibility of its spine. "Double suspension rotary gallop" describes the fastest running gait of the Greyhound in which all four feet are free from the ground in two phases, contracted and extended, during each full stride. The musculature of both hindlimbs constitutes more than 18 % of their body mass. The proportion of both forelimbs muscle mass is very similar. The proportion of back musculature is 12 % of their body mass. Due to the Greyhound's unique physiology and anatomy, a veterinarian who understands the issues relevant to the breed is generally needed when the dogs need treatment, particularly when anesthesia is required. Greyhounds cannot metabolize barbiturate-based anesthesia in the same way that other breeds can because their livers have lower amounts of oxidative enzymes. Greyhounds demonstrate unusual blood chemistry, which can be misread by veterinarians not familiar with the breed and can result in an incorrect diagnosis. Greyhounds are very sensitive to insecticides. Many vets do not recommend the use of flea collars or flea spray on Greyhounds if the product is pyrethrin-based. Products like Advantage, Frontline, Lufenuron, and Amitraz are safe for use on Greyhounds, however, and are very effective in controlling fleas and ticks. Greyhounds have higher levels of red blood cells than other breeds. Since red blood cells carry oxygen to the muscles, this higher level allows the hound to move larger quantities of oxygen faster from the lungs to the muscles. Conversely, Greyhounds have lower levels of platelets than other breeds. 
Delayed haemorrhage following trauma or routine surgery is more common in Greyhounds, with one study reporting significant haemorrhage in 26% of Greyhounds following routine gonadectomy, compared to 0–2% in other dog breeds. This is often termed greyhound fibrinolytic syndrome or breed-associated hyperfibrinolysis, wherein there is a disorder of the fibrinolysis system without derangement of the primary or secondary coagulation systems; it is also not related to platelet count. In this syndrome there is initially adequate hemostasis following trauma or routine surgical procedures; however, 36–48 hours later the site undergoes inappropriate hyperfibrinolysis. This results in delayed bleeding, which can cause significant morbidity and mortality. Standard pre-operative blood work does not identify those at risk. It is distinct from common bleeding disorders in other breeds such as von Willebrand's disease, which is uncommon in Greyhounds. Although high-quality research data are lacking, it is thought that this condition can be prevented and treated by administering antifibrinolytic medication such as tranexamic acid via the oral or parenteral route. Intensive care and blood product administration may also be required in severe cases. Greyhounds do not have undercoats and thus are less likely to trigger dog allergies in humans (they are sometimes incorrectly referred to as "hypoallergenic"). The lack of an undercoat, coupled with a general lack of body fat, also makes Greyhounds more susceptible to extreme temperatures (both hot and cold); because of this, they must be housed inside. Some Greyhounds are susceptible to corns on their paw pads; a variety of methods are used to treat them. Thyroxine levels in the Greyhound are below the normal reference range for dogs; thyroxine response to thyroid-stimulating hormone is also lowered. This can impact testing for thyroid disease, but it is not a concern for health. History Origins "The true origin of the greyhound is unsure, but drawings of findings from the Çatalhöyük site in Turkey (6000 BC), the finding of a greyhound-like dog in a funeral vase in the town of Fusa in Iran (4200 BC) or in rock art in Tassili (dated at 5000–2000 BC) indicate that the greyhound is indeed one of the oldest breeds of dog." The ancient skeletal remains of a dog identified as being of the greyhound/saluki form were excavated at Tell Brak in modern Syria, and dated as being approximately 4,000 years old. Dogs that look similar to Salukis and Greyhounds were increasingly depicted on Egyptian tombs from the Middle Kingdom (2134 BC–1785 BC) onward. Historical literature by Arrian on the vertragus (from the Latin , a word of Celtic origin), the first recorded sighthound in Europe and possible antecedent of the Greyhound, suggested that its origin lies with the Celts from Eastern Europe or Eurasia. Systematic archaeozoology of Britain conducted in 1974 ruled out the existence of a true greyhound-type in Britain prior to the Roman occupation, a finding further confirmed in 2000. Written evidence from the early period of Roman occupation, the Vindolanda tablets (No. 594), demonstrates that the occupying troops from Continental Europe either had with them in the North of England, or certainly knew of, the vertragus and its hunting use. During the Middle Ages, greyhounds could only be owned by rulers and nobles, having long been associated with heraldic symbols of the ruling class in England, France, and the Czech lands. 
The earliest archaeological discovery found conclusively to be a greyhound specifically was at the Chotěbuz fort in the Czech Republic. This comprised sighthound type "gracile" bones dating from the 8th to 9th century AD. These bones matched those of a high "greyhound", and were also genetically compared with the modern Greyhound and other sighthounds, and found to be almost completely identical with the modern Greyhound breed, with the exception of only four deletions and one substitution in the DNA sequences, which were interpreted as differences probably arising from 11 centuries of breeding of this type of dog. All modern pedigree Greyhounds derive from the Greyhound stock recorded and registered first in private studbooks in the 18th century, then in public studbooks in the 19th century, which ultimately were registered with coursing, racing, and kennel club authorities of the United Kingdom. Historically, these sighthounds were used primarily for hunting in the open where their pursuit speed and keen eyesight were essential. Etymology The name "Greyhound" is generally believed to come from the Old English . is the antecedent of the modern "hound", but the meaning of is undetermined, other than in reference to dogs in Old English and Old Norse. The word "hund" is still used for dogs in general in Scandinavian languages today. Its origin does not appear to have any common root with the modern word "grey" for color, and indeed the Greyhound is seen with a wide variety of coat colors. The lighter colors, patch-like markings and white appeared in the breed that was once ordinarily grey in color. The Greyhound is the only dog mentioned by name in the Bible (, zarir mosna'im) in . Many versions, including the Jewish Publication Society and King James Version, name the Greyhound as one of the "three that are stately of stride". However, some newer biblical translations, including the New International Version, have changed this to 'strutting rooster', which appears to be an alternative translation. According to Pokorny, the English term 'Greyhound' does not mean "grey dog/hound", but simply "fair dog". Subsequent words have been derived from the Proto-Indo-European root *g'her- "shine, twinkle": English 'grey', Old High German "grey, old", Old Icelandic "piglet, pig", Old Icelandic "to dawn", "morning twilight", Old Irish "sun", Old Church Slavonic "morning twilight, brightness". The common sense of these words is "to shine; bright". In 1928, the first winner of Best in Show at Crufts was breeder/owner Mr. H. Whitley's Greyhound Primley Sceptre. Greyhounds have won the award three times in total, the most recent being in 1956. Historically, English Greyhounds were grouped: two for coursing, as a "Brace", three for hunting, as a "Leash", otherwise known as a "couple and a half".
Biology and health sciences
Dogs
Animals
12950
https://en.wikipedia.org/wiki/Glucose
Glucose
Glucose is a sugar with the molecular formula . It is overall the most abundant monosaccharide, a subcategory of carbohydrates. It is mainly made by plants and most algae during photosynthesis from water and carbon dioxide, using energy from sunlight. It is used by plants to make cellulose, the most abundant carbohydrate in the world, for use in cell walls, and by all living organisms to make adenosine triphosphate (ATP), which is used by the cell as energy. In energy metabolism, glucose is the most important source of energy in all organisms. Glucose for metabolism is stored as a polymer, in plants mainly as amylose and amylopectin, and in animals as glycogen. Glucose circulates in the blood of animals as blood sugar. The naturally occurring form is -glucose, while its stereoisomer -glucose is produced synthetically in comparatively small amounts and is less biologically active. Glucose is a monosaccharide containing six carbon atoms and an aldehyde group, and is therefore an aldohexose. The glucose molecule can exist in an open-chain (acyclic) as well as ring (cyclic) form. Glucose is naturally occurring and is found in its free state in fruits and other parts of plants. In animals, it is released from the breakdown of glycogen in a process known as glycogenolysis. Glucose, as intravenous sugar solution, is on the World Health Organization's List of Essential Medicines. It is also on the list in combination with sodium chloride (table salt). The name glucose is derived from Ancient Greek () 'wine, must', from () 'sweet'. The suffix -ose is a chemical classifier denoting a sugar. History Glucose was first isolated from raisins in 1747 by the German chemist Andreas Marggraf. Glucose was discovered in grapes by another German chemist, Johann Tobias Lowitz, in 1792, and distinguished as being different from cane sugar (sucrose). Glucose is the term coined by Jean Baptiste Dumas in 1838, which has prevailed in the chemical literature. Friedrich August Kekulé proposed the term dextrose (from the Latin , meaning "right"), because in an aqueous solution of glucose, the plane of linearly polarized light is turned to the right. In contrast, l-fructose (usually referred to as -fructose) (a ketohexose) and l-glucose (-glucose) turn linearly polarized light to the left. The earlier notation according to the rotation of the plane of linearly polarized light (d and l-nomenclature) was later abandoned in favor of the - and -notation, which refers to the absolute configuration of the asymmetric center farthest from the carbonyl group, and in concordance with the configuration of - or -glyceraldehyde. Since glucose is a basic necessity of many organisms, a correct understanding of its chemical makeup and structure contributed greatly to a general advancement in organic chemistry. This understanding occurred largely as a result of the investigations of Emil Fischer, a German chemist who received the 1902 Nobel Prize in Chemistry for his findings. The synthesis of glucose established the structure of organic material and consequently formed the first definitive validation of Jacobus Henricus van 't Hoff's theories of chemical kinetics and the arrangements of chemical bonds in carbon-bearing molecules. Between 1891 and 1894, Fischer established the stereochemical configuration of all the known sugars and correctly predicted the possible isomers, applying Van 't Hoff's theory of asymmetrical carbon atoms. The names initially referred to the natural substances. 
Their enantiomers were given the same name with the introduction of systematic nomenclatures, taking into account absolute stereochemistry (e.g. Fischer nomenclature, / nomenclature). For the discovery of the metabolism of glucose Otto Meyerhof received the Nobel Prize in Physiology or Medicine in 1922. Hans von Euler-Chelpin was awarded the Nobel Prize in Chemistry along with Arthur Harden in 1929 for their "research on the fermentation of sugar and their share of enzymes in this process". In 1947, Bernardo Houssay (for his discovery of the role of the pituitary gland in the metabolism of glucose and the derived carbohydrates) as well as Carl and Gerty Cori (for their discovery of the conversion of glycogen from glucose) received the Nobel Prize in Physiology or Medicine. In 1970, Luis Leloir was awarded the Nobel Prize in Chemistry for the discovery of glucose-derived sugar nucleotides in the biosynthesis of carbohydrates. Chemical and physical properties Glucose forms white or colorless solids that are highly soluble in water and acetic acid but poorly soluble in methanol and ethanol. They melt at (α) and (beta), decompose starting at with release of various volatile products, ultimately leaving a residue of carbon. Glucose has a pKa value of 12.16 at in water. With six carbon atoms, it is classed as a hexose, a subcategory of the monosaccharides. -Glucose is one of the sixteen aldohexose stereoisomers. The -isomer, -glucose, also known as dextrose, occurs widely in nature, but the -isomer, -glucose, does not. Glucose can be obtained by hydrolysis of carbohydrates such as milk sugar (lactose), cane sugar (sucrose), maltose, cellulose, glycogen, etc. Dextrose is commonly commercially manufactured from starches, such as corn starch in the US and Japan, from potato and wheat starch in Europe, and from tapioca starch in tropical areas. The manufacturing process uses hydrolysis via pressurized steaming at controlled pH in a jet followed by further enzymatic depolymerization. Unbonded glucose is one of the main ingredients of honey. The term dextrose is often used in a clinical (related to patient's health status) or nutritional context (related to dietary intake, such as food labels or dietary guidelines), while "glucose" is used in a biological or physiological context (chemical processes and molecular interactions), but both terms refer to the same molecule, specifically D-glucose. Dextrose monohydrate is the hydrated form of D-glucose, meaning that it is a glucose molecule with an additional water molecule attached. Its chemical formula is  · . Dextrose monohydrate is also called hydrated D-glucose, and commonly manufactured from plant starches. Dextrose monohydrate is utilized as the predominant type of dextrose in food applications, such as beverage mixes—it is a common form of glucose widely used as a nutrition supplement in production of foodstuffs. Dextrose monohydrate is primarily consumed in North America as a corn syrup or high-fructose corn syrup. Anhydrous dextrose, on the other hand, is glucose that does not have any water molecules attached to it. Anhydrous chemical substances are commonly produced by eliminating water from a hydrated substance through methods such as heating or drying up (desiccation). Dextrose monohydrate can be dehydrated to anhydrous dextrose in industrial setting. Dextrose monohydrate is composed of approximately 9.5% water by mass; through the process of dehydration, this water content is eliminated to yield anhydrous (dry) dextrose. 
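As a small arithmetic illustration of the relationship between the monohydrate and the anhydrous form described above, the sketch below recomputes the two molar masses from standard atomic weights and the mass of water removed on drying; the atomic-weight values are conventional reference figures rather than values taken from this article, and the snippet is only a minimal check, not a description of any industrial procedure.

```python
# Sketch: molar masses of anhydrous D-glucose (C6H12O6) and
# D-glucose monohydrate (C6H12O6 . H2O) from standard atomic weights.
# Atomic weights are conventional reference values (g/mol).
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula):
    """Sum atomic weights for a formula given as {element: count}."""
    return sum(ATOMIC_WEIGHT[el] * n for el, n in formula.items())

anhydrous = {"C": 6, "H": 12, "O": 6}      # C6H12O6
monohydrate = {"C": 6, "H": 14, "O": 7}    # C6H12O6 . H2O

m_anhydrous = molar_mass(anhydrous)        # about 180.2 g/mol
m_monohydrate = molar_mass(monohydrate)    # about 198.2 g/mol
water_lost = m_monohydrate - m_anhydrous   # about 18.0 g of water per mole

print(f"anhydrous D-glucose:   {m_anhydrous:.2f} g/mol")
print(f"D-glucose monohydrate: {m_monohydrate:.2f} g/mol")
print(f"water removed on drying: {water_lost:.2f} g per mole of hydrate")
```

These computed figures agree with the molar masses of 198.17 g/mol for the monohydrate and 180.16 g/mol for the anhydrous form quoted below.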
Anhydrous dextrose has the chemical formula , the same as glucose, without any water molecule attached. In open air, anhydrous dextrose tends to absorb moisture and transform into the monohydrate, and it is more expensive to produce. Anhydrous dextrose (anhydrous D-glucose) has increased stability and shelf life and has medical applications, such as in the oral glucose tolerance test. Whereas the molecular weight (molar mass) of D-glucose monohydrate is 198.17 g/mol, that of anhydrous D-glucose is 180.16 g/mol. The density of these two forms of glucose also differs. In terms of chemical structure, glucose is a monosaccharide, that is, a simple sugar. Glucose contains six carbon atoms and an aldehyde group, and is therefore an aldohexose. The glucose molecule can exist in an open-chain (acyclic) as well as ring (cyclic) form—due to the presence of alcohol and aldehyde or ketone functional groups, the form having the straight chain can easily convert into a chair-like hemiacetal ring structure commonly found in carbohydrates. Structure and nomenclature Glucose is present in solid form as a monohydrate with a closed pyran ring (α-D-glucopyranose monohydrate, sometimes known, less precisely, as dextrose hydrate). In aqueous solution, on the other hand, it is open-chain to a small extent and is present predominantly as α- or β-pyranose, which interconvert. From aqueous solutions, the three known forms can be crystallized: α-glucopyranose, β-glucopyranose and α-glucopyranose monohydrate. Glucose is a building block of the disaccharides lactose and sucrose (cane or beet sugar), of oligosaccharides such as raffinose and of polysaccharides such as starch, amylopectin, glycogen, and cellulose. The glass transition temperature of glucose is and the Gordon–Taylor constant (an experimentally determined constant for the prediction of the glass transition temperature for different mass fractions of a mixture of two substances) is 4.5. Open-chain form The open-chain form of glucose makes up less than 0.02% of the glucose molecules in an aqueous solution at equilibrium. The rest is one of two cyclic hemiacetal forms. In its open-chain form, the glucose molecule has an open (as opposed to cyclic) unbranched backbone of six carbon atoms, where C-1 is part of an aldehyde group . Therefore, glucose is also classified as an aldose, or an aldohexose. The aldehyde group makes glucose a reducing sugar, giving a positive reaction with the Fehling test. Cyclic forms In solutions, the open-chain form of glucose (either "-" or "-") exists in equilibrium with several cyclic isomers, each containing a ring of carbons closed by one oxygen atom. In aqueous solution, however, more than 99% of glucose molecules exist as pyranose forms. The open-chain form is limited to about 0.25%, and furanose forms exist in negligible amounts. The terms "glucose" and "-glucose" are generally used for these cyclic forms as well. The ring arises from the open-chain form by an intramolecular nucleophilic addition reaction between the aldehyde group (at C-1) and either the C-4 or C-5 hydroxyl group, forming a hemiacetal linkage, . The reaction between C-1 and C-5 yields a six-membered heterocyclic system called a pyranose, which is a monosaccharide sugar (hence "-ose") containing a derivatised pyran skeleton. The (much rarer) reaction between C-1 and C-4 yields a five-membered furanose ring, named after the cyclic ether furan. 
In either case, each carbon in the ring has one hydrogen and one hydroxyl attached, except for the last carbon (C-4 or C-5) where the hydroxyl is replaced by the remainder of the open molecule (which is or respectively). The ring-closing reaction can give two products, denoted "α-" and "β-". When a glucopyranose molecule is drawn in the Haworth projection, the designation "α-" means that the hydroxyl group attached to C-1 and the group at C-5 lies on opposite sides of the ring's plane (a trans arrangement), while "β-" means that they are on the same side of the plane (a cis arrangement). Therefore, the open-chain isomer -glucose gives rise to four distinct cyclic isomers: α--glucopyranose, β--glucopyranose, α--glucofuranose, and β--glucofuranose. These five structures exist in equilibrium and interconvert, and the interconversion is much more rapid with acid catalysis. The other open-chain isomer -glucose similarly gives rise to four distinct cyclic forms of -glucose, each the mirror image of the corresponding -glucose. The glucopyranose ring (α or β) can assume several non-planar shapes, analogous to the "chair" and "boat" conformations of cyclohexane. Similarly, the glucofuranose ring may assume several shapes, analogous to the "envelope" conformations of cyclopentane. In the solid state, only the glucopyranose forms are observed. Some derivatives of glucofuranose, such as 1,2-O-isopropylidene--glucofuranose are stable and can be obtained pure as crystalline solids. For example, reaction of α-D-glucose with para-tolylboronic acid reforms the normal pyranose ring to yield the 4-fold ester α-D-glucofuranose-1,2:3,5-bis(p-tolylboronate). Mutarotation Mutarotation consists of a temporary reversal of the ring-forming reaction, resulting in the open-chain form, followed by a reforming of the ring. The ring closure step may use a different group than the one recreated by the opening step (thus switching between pyranose and furanose forms), or the new hemiacetal group created on C-1 may have the same or opposite handedness as the original one (thus switching between the α and β forms). Thus, though the open-chain form is barely detectable in solution, it is an essential component of the equilibrium. The open-chain form is thermodynamically unstable, and it spontaneously isomerizes to the cyclic forms. (Although the ring closure reaction could in theory create four- or three-atom rings, these would be highly strained, and are not observed in practice.) In solutions at room temperature, the four cyclic isomers interconvert over a time scale of hours, in a process called mutarotation. Starting from any proportions, the mixture converges to a stable ratio of α:β 36:64. The ratio would be α:β 11:89 if it were not for the influence of the anomeric effect. Mutarotation is considerably slower at temperatures close to . Optical activity Whether in water or the solid form, -(+)-glucose is dextrorotatory, meaning it will rotate the direction of polarized light clockwise as seen looking toward the light source. The effect is due to the chirality of the molecules, and indeed the mirror-image isomer, -(−)-glucose, is levorotatory (rotates polarized light counterclockwise) by the same amount. The strength of the effect is different for each of the five tautomers. The - prefix does not refer directly to the optical properties of the compound. It indicates that the C-5 chiral centre has the same handedness as that of -glyceraldehyde (which was so labelled because it is dextrorotatory). 
The fact that -glucose is dextrorotatory is a combined effect of its four chiral centres, not just of C-5; some of the other -aldohexoses are levorotatory. The conversion between the two anomers can be observed in a polarimeter since pure α--glucose has a specific rotation angle of +112.2° mL/(dm·g), pure β--glucose of +17.5° mL/(dm·g). When equilibrium has been reached after a certain time due to mutarotation, the angle of rotation is +52.7° mL/(dm·g). By adding acid or base, this transformation is much accelerated. The equilibration takes place via the open-chain aldehyde form. Isomerisation In dilute sodium hydroxide or other dilute bases, the monosaccharides mannose, glucose and fructose interconvert (via a Lobry de Bruyn–Alberda–Van Ekenstein transformation), so that a balance between these isomers is formed. This reaction proceeds via an enediol: Biochemical properties Glucose is the most abundant monosaccharide. Glucose is also the most widely used aldohexose in most living organisms. One possible explanation for this is that glucose has a lower tendency than other aldohexoses to react nonspecifically with the amine groups of proteins. This reaction—glycation—impairs or destroys the function of many proteins, e.g. in glycated hemoglobin. Glucose's low rate of glycation can be attributed to its having a more stable cyclic form compared to other aldohexoses, which means it spends less time than they do in its reactive open-chain form. The reason for glucose having the most stable cyclic form of all the aldohexoses is that its hydroxy groups (with the exception of the hydroxy group on the anomeric carbon of -glucose) are in the equatorial position. Presumably, glucose is the most abundant natural monosaccharide because it is less glycated with proteins than other monosaccharides. Another hypothesis is that glucose, being the only -aldohexose that has all five hydroxy substituents in the equatorial position in the form of β--glucose, is more readily accessible to chemical reactions, for example, for esterification or acetal formation. For this reason, -glucose is also a highly preferred building block in natural polysaccharides (glycans). Polysaccharides that are composed solely of glucose are termed glucans. Glucose is produced by plants through photosynthesis using sunlight, water and carbon dioxide and can be used by all living organisms as an energy and carbon source. However, most glucose does not occur in its free form, but in the form of its polymers, i.e. lactose, sucrose, starch and others which are energy reserve substances, and cellulose and chitin, which are components of the cell wall in plants or fungi and arthropods, respectively. These polymers, when consumed by animals, fungi and bacteria, are degraded to glucose using enzymes. All animals are also able to produce glucose themselves from certain precursors as the need arises. Neurons, cells of the renal medulla and erythrocytes depend on glucose for their energy production. In adult humans, there is about of glucose, of which about is present in the blood. Approximately of glucose is produced in the liver of an adult in 24 hours. Many of the long-term complications of diabetes (e.g., blindness, kidney failure, and peripheral neuropathy) are probably due to the glycation of proteins or lipids. In contrast, enzyme-regulated addition of sugars to protein is called glycosylation and is essential for the function of many proteins. 
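Referring back to the specific rotation values quoted under Optical activity (+112.2° for the pure α anomer, +17.5° for the pure β anomer, and +52.7° at mutarotational equilibrium), here is a minimal sketch that estimates the anomer ratio by linear interpolation. It assumes the equilibrated solution can be treated as a two-component mixture of the pyranose anomers, neglecting the tiny open-chain and furanose fractions; that simplification is an assumption made for illustration, not a statement from the article.

```python
# Sketch: estimate the alpha:beta pyranose ratio at mutarotational
# equilibrium from specific rotations, assuming only the two anomers
# contribute (open-chain and furanose forms are neglected).
ROT_ALPHA = 112.2        # specific rotation of the pure alpha anomer
ROT_BETA = 17.5          # specific rotation of the pure beta anomer
ROT_EQUILIBRIUM = 52.7   # observed rotation of the equilibrated solution

# The observed rotation is treated as a weighted average of the anomers:
# ROT_EQUILIBRIUM = x * ROT_ALPHA + (1 - x) * ROT_BETA, solved for x.
x_alpha = (ROT_EQUILIBRIUM - ROT_BETA) / (ROT_ALPHA - ROT_BETA)
x_beta = 1.0 - x_alpha

print(f"alpha fraction: {x_alpha:.0%}")  # roughly 37%
print(f"beta fraction:  {x_beta:.0%}")   # roughly 63%
```

The result, roughly 37:63, is consistent with the equilibrium ratio of about α:β 36:64 quoted under Mutarotation above.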
Uptake Ingested glucose initially binds to the receptor for sweet taste on the tongue in humans. This complex of the proteins T1R2 and T1R3 makes it possible to identify glucose-containing food sources. Glucose mainly comes from food—about per day is produced by conversion of food, but it is also synthesized from other metabolites in the body's cells. In humans, the breakdown of glucose-containing polysaccharides happens partly already during chewing by means of amylase, which is contained in saliva, as well as by maltase, lactase, and sucrase on the brush border of the small intestine. Glucose is a building block of many carbohydrates and can be split off from them using certain enzymes. Glucosidases, a subgroup of the glycosidases, first catalyze the hydrolysis of long-chain glucose-containing polysaccharides, removing terminal glucose. In turn, disaccharides are mostly degraded by specific glycosidases to glucose. The names of the degrading enzymes are often derived from the particular poly- and disaccharide; inter alia, for the degradation of polysaccharide chains there are amylases (named after amylose, a component of starch), cellulases (named after cellulose), chitinases (named after chitin), and more. Furthermore, for the cleavage of disaccharides, there are maltase, lactase, sucrase, trehalase, and others. In humans, about 70 genes are known that code for glycosidases. They have functions in the digestion and degradation of glycogen, sphingolipids, mucopolysaccharides, and poly(ADP-ribose). Humans do not produce cellulases, chitinases, or trehalases, but the bacteria in the gut microbiota do. To cross the cell membrane and the membranes of cell compartments, glucose requires special transport proteins from the major facilitator superfamily. In the small intestine (more precisely, in the jejunum), glucose is taken up into the intestinal epithelium via a secondary active transport mechanism called sodium ion–glucose symport, mediated by the sodium/glucose cotransporter 1 (SGLT1). Further transfer occurs on the basolateral side of the intestinal epithelial cells via the glucose transporter GLUT2, as does uptake into liver cells, kidney cells, cells of the islets of Langerhans, neurons, astrocytes, and tanycytes. Glucose enters the liver via the portal vein and is stored there as cellular glycogen. In the liver cell, it is phosphorylated by glucokinase at position 6 to form glucose 6-phosphate, which cannot leave the cell. Glucose 6-phosphatase can convert glucose 6-phosphate back into glucose exclusively in the liver, so the body can maintain a sufficient blood glucose concentration. In other cells, uptake happens by passive transport through one of the 14 GLUT proteins. In the other cell types, phosphorylation occurs through a hexokinase, whereupon glucose can no longer diffuse out of the cell. The glucose transporter GLUT1 is produced by most cell types and is of particular importance for nerve cells and pancreatic β-cells. GLUT3 is highly expressed in nerve cells. Glucose from the bloodstream is taken up by GLUT4 from muscle cells (of the skeletal muscle and heart muscle) and fat cells. GLUT14 is expressed exclusively in testicles. Excess glucose is broken down and converted into fatty acids, which are stored as triglycerides. In the kidneys, glucose in the urine is absorbed via SGLT1 and SGLT2 in the apical cell membranes and transmitted via GLUT2 in the basolateral cell membranes. 
About 90% of kidney glucose reabsorption is via SGLT2 and about 3% via SGLT1. Biosynthesis In plants and some prokaryotes, glucose is a product of photosynthesis. Glucose is also formed by the breakdown of polymeric forms of glucose like glycogen (in animals and fungi) or starch (in plants). The cleavage of glycogen is termed glycogenolysis, while the cleavage of starch is called starch degradation. The metabolic pathway that begins with molecules containing two to four carbon atoms (C) and ends in the glucose molecule containing six carbon atoms is called gluconeogenesis and occurs in all living organisms. The smaller starting materials are the result of other metabolic pathways. Ultimately, almost all biomolecules come from the assimilation of carbon dioxide in plants and microbes during photosynthesis. The free energy of formation of α--glucose is −917.2 kilojoules per mole. In humans, gluconeogenesis occurs in the liver and kidney, but also in other cell types. In the liver about of glycogen are stored, in skeletal muscle about . However, the glucose released in muscle cells upon cleavage of the glycogen cannot be delivered to the circulation because glucose is phosphorylated by the hexokinase, and a glucose-6-phosphatase is not expressed to remove the phosphate group. Unlike for glucose, there is no transport protein for glucose-6-phosphate. Gluconeogenesis allows the organism to build up glucose from other metabolites, including lactate or certain amino acids, while consuming energy. The renal tubular cells can also produce glucose. Glucose also can be found outside of living organisms in the ambient environment. Glucose concentrations in the atmosphere are detected via collection of samples by aircraft and are known to vary from location to location. For example, glucose concentrations in atmospheric air from inland China range from 0.8 to 20.1 pg/L, whereas glucose concentrations in air from coastal eastern China range from 10.3 to 142 pg/L. Glucose degradation In humans, glucose is metabolized by glycolysis and the pentose phosphate pathway. Glycolysis is used by all living organisms, with small variations, and all organisms generate energy from the breakdown of monosaccharides. In the further course of metabolism, glucose can be completely degraded to water and carbon dioxide via oxidative decarboxylation, the citric acid cycle (also known as the Krebs cycle) and the respiratory chain. If there is not enough oxygen available for this, glucose degradation in animals occurs anaerobically to lactate via lactic acid fermentation and releases much less energy. Muscular lactate enters the liver through the bloodstream in mammals, where gluconeogenesis occurs (Cori cycle). With a high supply of glucose, the metabolite acetyl-CoA from the Krebs cycle can also be used for fatty acid synthesis. Glucose is also used to replenish the body's glycogen stores, which are mainly found in liver and skeletal muscle. These processes are hormonally regulated. In other living organisms, other forms of fermentation can occur. The bacterium Escherichia coli can grow on nutrient media containing glucose as the sole carbon source. In some bacteria and, in modified form, also in archaea, glucose is degraded via the Entner–Doudoroff pathway. With glucose, a mechanism of gene regulation, catabolite repression (formerly known as the glucose effect), was discovered in E. coli. Use of glucose as an energy source in cells is by either aerobic respiration, anaerobic respiration, or fermentation. 
The first step of glycolysis is the phosphorylation of glucose by a hexokinase to form glucose 6-phosphate. The main reason for the immediate phosphorylation of glucose is to prevent its diffusion out of the cell as the charged phosphate group prevents glucose 6-phosphate from easily crossing the cell membrane. Furthermore, addition of the high-energy phosphate group activates glucose for subsequent breakdown in later steps of glycolysis. In anaerobic respiration, one glucose molecule produces a net gain of two ATP molecules (four ATP molecules are produced during glycolysis through substrate-level phosphorylation, but two are required by enzymes used during the process). In aerobic respiration, a molecule of glucose is much more profitable in that a maximum net production of 30 or 32 ATP molecules (depending on the organism) is generated. Tumor cells often grow comparatively quickly and consume an above-average amount of glucose by glycolysis, which leads to the formation of lactate, the end product of fermentation in mammals, even in the presence of oxygen. This is called the Warburg effect. For the increased uptake of glucose in tumors various SGLT and GLUT are overly produced. In yeast, ethanol is fermented at high glucose concentrations, even in the presence of oxygen (which normally leads to respiration rather than fermentation). This is called the Crabtree effect. Glucose can also degrade to form carbon dioxide through abiotic means. This has been demonstrated to occur experimentally via oxidation and hydrolysis at 22 °C and a pH of 2.5. Energy source Glucose is a ubiquitous fuel in biology. It is used as an energy source in organisms, from bacteria to humans, through either aerobic respiration, anaerobic respiration (in bacteria), or fermentation. Glucose is the human body's key source of energy, through aerobic respiration, providing about 3.75 kilocalories (16 kilojoules) of food energy per gram. Breakdown of carbohydrates (e.g., starch) yields mono- and disaccharides, most of which is glucose. Through glycolysis and later in the reactions of the citric acid cycle and oxidative phosphorylation, glucose is oxidized to eventually form carbon dioxide and water, yielding energy mostly in the form of adenosine triphosphate (ATP). The insulin reaction, and other mechanisms, regulate the concentration of glucose in the blood. The physiological caloric value of glucose, depending on the source, is 16.2 kilojoules per gram or 15.7 kJ/g (3.74 kcal/g). The high availability of carbohydrates from plant biomass has led to a variety of methods during evolution, especially in microorganisms, to utilize glucose for energy and carbon storage. Differences exist in which end product can no longer be used for energy production. The presence of individual genes, and their gene products, the enzymes, determine which reactions are possible. The metabolic pathway of glycolysis is used by almost all living beings. An essential difference in the use of glycolysis is the recovery of NADPH as a reductant for anabolism that would otherwise have to be generated indirectly. Glucose and oxygen supply almost all the energy for the brain, so its availability influences psychological processes. When glucose is low, psychological processes requiring mental effort (e.g., self-control, effortful decision-making) are impaired. 
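As a small unit-conversion check of the caloric values quoted above, the sketch below converts the physiological energy content of glucose from kcal/g to kJ/g and scales it to an example portion; the 10 g portion size is purely illustrative and is not taken from the article.

```python
# Sketch: convert the physiological caloric value of glucose between
# kcal/g and kJ/g and scale it to an example portion.
KJ_PER_KCAL = 4.184                       # thermochemical conversion factor

caloric_value_kcal_per_g = 3.74           # value quoted in the text
caloric_value_kj_per_g = caloric_value_kcal_per_g * KJ_PER_KCAL  # ~15.6 kJ/g

portion_g = 10.0                          # illustrative portion size (assumption)
energy_kcal = portion_g * caloric_value_kcal_per_g   # ~37 kcal
energy_kj = portion_g * caloric_value_kj_per_g       # ~156 kJ

print(f"{caloric_value_kcal_per_g} kcal/g = {caloric_value_kj_per_g:.1f} kJ/g")
print(f"{portion_g:.0f} g of glucose: about {energy_kcal:.0f} kcal ({energy_kj:.0f} kJ)")
```

The converted value, about 15.6 kJ/g, is within rounding of the 15.7 kJ/g figure quoted above.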
In the brain, which is dependent on glucose and oxygen as the major source of energy, the glucose concentration is usually 4 to 6 mM (5 mM equals 90 mg/dL), but decreases to 2 to 3 mM when fasting. Confusion occurs below 1 mM and coma at lower levels. The glucose in the blood is called blood sugar. Blood sugar levels are regulated by glucose-binding nerve cells in the hypothalamus. In addition, glucose in the brain binds to glucose receptors of the reward system in the nucleus accumbens. The binding of glucose to the sweet receptor on the tongue induces a release of various hormones of energy metabolism, either through glucose or through other sugars, leading to an increased cellular uptake and lower blood sugar levels. Artificial sweeteners do not lower blood sugar levels. The blood sugar content of a healthy person in the short-time fasting state, e.g. after overnight fasting, is about 70 to 100 mg/dL of blood (4 to 5.5 mM). In blood plasma, the measured values are about 10–15% higher. In addition, the values in the arterial blood are higher than the concentrations in the venous blood since glucose is absorbed into the tissue during the passage of the capillary bed. Also in the capillary blood, which is often used for blood sugar determination, the values are sometimes higher than in the venous blood. The glucose content of the blood is regulated by the hormones insulin, incretin and glucagon. Insulin lowers the glucose level, glucagon increases it. Furthermore, the hormones adrenaline, thyroxine, glucocorticoids, somatotropin and adrenocorticotropin lead to an increase in the glucose level. There is also a hormone-independent regulation, which is referred to as glucose autoregulation. After food intake the blood sugar concentration increases. Values over 180 mg/dL in venous whole blood are pathological and are termed hyperglycemia, values below 40 mg/dL are termed hypoglycaemia. When needed, glucose is released into the bloodstream by glucose-6-phosphatase from glucose-6-phosphate originating from liver and kidney glycogen, thereby regulating the homeostasis of blood glucose concentration. In ruminants, the blood glucose concentration is lower (60 mg/dL in cattle and 40 mg/dL in sheep), because the carbohydrates are converted more by their gut microbiota into short-chain fatty acids. Some glucose is converted to lactic acid by astrocytes, which is then utilized as an energy source by brain cells; some glucose is used by intestinal cells and red blood cells, while the rest reaches the liver, adipose tissue and muscle cells, where it is absorbed and stored as glycogen (under the influence of insulin). Liver cell glycogen can be converted to glucose and returned to the blood when insulin is low or absent; muscle cell glycogen is not returned to the blood because of a lack of enzymes. In fat cells, glucose is used to power reactions that synthesize some fat types and have other purposes. Glycogen is the body's "glucose energy storage" mechanism, because it is much more "space efficient" and less reactive than glucose itself. As a result of its importance in human health, glucose is an analyte in glucose tests that are common medical blood tests. Eating or fasting prior to taking a blood sample has an effect on analyses for glucose in the blood; a high fasting glucose blood sugar level may be a sign of prediabetes or diabetes mellitus. 
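Because the preceding paragraphs quote blood glucose both in millimolar units and in mg/dL, a minimal sketch of the interconversion may help; it uses the molar mass of glucose (about 180.16 g/mol), and the example values are the ones mentioned above, so the small differences from the rounded ranges in the text are only rounding effects.

```python
# Sketch: convert blood glucose between mmol/L (mM) and mg/dL using the
# molar mass of glucose (about 180.16 g/mol, i.e. 180.16 mg/mmol).
GLUCOSE_MOLAR_MASS = 180.16  # g/mol

def mm_to_mgdl(mm):
    """mmol/L -> mg/dL (1 L = 10 dL)."""
    return mm * GLUCOSE_MOLAR_MASS / 10.0

def mgdl_to_mm(mgdl):
    """mg/dL -> mmol/L."""
    return mgdl * 10.0 / GLUCOSE_MOLAR_MASS

print(f"5 mM      -> {mm_to_mgdl(5):.0f} mg/dL")   # about 90 mg/dL, as quoted above
print(f"70 mg/dL  -> {mgdl_to_mm(70):.1f} mM")     # about 3.9 mM
print(f"100 mg/dL -> {mgdl_to_mm(100):.1f} mM")    # about 5.6 mM
```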
The glycemic index is an indicator of the speed of resorption and conversion to blood glucose levels from ingested carbohydrates, measured as the area under the curve of blood glucose levels after consumption in comparison to glucose (glucose is defined as 100). The clinical importance of the glycemic index is controversial, as foods with high fat contents slow the resorption of carbohydrates and lower the glycemic index, e.g. ice cream. An alternative indicator is the insulin index, measured as the impact of carbohydrate consumption on the blood insulin levels. The glycemic load is an indicator for the amount of glucose added to blood glucose levels after consumption, based on the glycemic index and the amount of consumed food. Precursor Organisms use glucose as a precursor for the synthesis of several important substances. Starch, cellulose, and glycogen ("animal starch") are common glucose polymers (polysaccharides). Some of these polymers (starch or glycogen) serve as energy stores, while others (cellulose and chitin, which is made from a derivative of glucose) have structural roles. Oligosaccharides of glucose combined with other sugars serve as important energy stores. These include lactose, the predominant sugar in milk, which is a glucose-galactose disaccharide, and sucrose, another disaccharide which is composed of glucose and fructose. Glucose is also added onto certain proteins and lipids in a process called glycosylation. This is often critical for their functioning. The enzymes that join glucose to other molecules usually use phosphorylated glucose to power the formation of the new bond by coupling it with the breaking of the glucose-phosphate bond. Other than its direct use as a monomer, glucose can be broken down to synthesize a wide variety of other biomolecules. This is important, as glucose serves both as a primary store of energy and as a source of organic carbon. Glucose can be broken down and converted into lipids. It is also a precursor for the synthesis of other important molecules such as vitamin C (ascorbic acid). In living organisms, glucose is converted to several other chemical compounds that are the starting material for various metabolic pathways. Among them, all other monosaccharides such as fructose (via the polyol pathway), mannose (the epimer of glucose at position 2), galactose (the epimer at position 4), fucose, various uronic acids and the amino sugars are produced from glucose. In addition to the phosphorylation to glucose-6-phosphate, which is part of the glycolysis, glucose can be oxidized during its degradation to glucono-1,5-lactone. Glucose is used in some bacteria as a building block in the trehalose or the dextran biosynthesis and in animals as a building block of glycogen. Glucose can also be converted from bacterial xylose isomerase to fructose. In addition, glucose metabolites produce all nonessential amino acids, sugar alcohols such as mannitol and sorbitol, fatty acids, cholesterol and nucleic acids. Finally, glucose is used as a building block in the glycosylation of proteins to glycoproteins, glycolipids, peptidoglycans, glycosides and other substances (catalyzed by glycosyltransferases) and can be cleaved from them by glycosidases. Pathology Diabetes Diabetes is a metabolic disorder where the body is unable to regulate levels of glucose in the blood either because of a lack of insulin in the body or the failure, by cells in the body, to respond properly to insulin. 
Each of these situations can be caused by persistently elevated blood glucose levels, through pancreatic burnout and insulin resistance. The pancreas is the organ responsible for the secretion of the hormones insulin and glucagon. Insulin is a hormone that regulates glucose levels, allowing the body's cells to absorb and use glucose. Without it, glucose cannot enter the cell and therefore cannot be used as fuel for the body's functions. If the pancreas is exposed to persistently elevated blood glucose levels, the insulin-producing cells in the pancreas could be damaged, causing a lack of insulin in the body. Insulin resistance occurs when the pancreas tries to produce more and more insulin in response to persistently elevated blood glucose levels. Eventually, the rest of the body becomes resistant to the insulin that the pancreas is producing, thereby requiring more insulin to achieve the same blood glucose-lowering effect, and forcing the pancreas to produce even more insulin to compensate for the resistance. This negative spiral contributes to pancreatic burnout and the disease progression of diabetes. To monitor the body's response to blood glucose-lowering therapy, glucose levels can be measured. Blood glucose monitoring can be performed by multiple methods, such as the fasting glucose test, which measures the level of glucose in the blood after 8 hours of fasting. Another test is the 2-hour glucose tolerance test (GTT); for this test, the person has a fasting glucose test done, then drinks a 75-gram glucose drink and is retested. This test measures the ability of the person's body to process glucose. Over time, blood glucose levels should decrease as insulin allows the glucose to be taken up by cells and exit the blood stream. Hypoglycemia management Individuals with diabetes or other conditions that result in low blood sugar often carry small amounts of sugar in various forms. One sugar commonly used is glucose, often in the form of glucose tablets (glucose pressed into a tablet shape, sometimes with one or more other ingredients as a binder), hard candy, or sugar packets. Sources Most dietary carbohydrates contain glucose, either as their only building block (as in the polysaccharides starch and glycogen), or together with another monosaccharide (as in the disaccharides sucrose and lactose). Unbound glucose is one of the main ingredients of honey. Glucose is extremely abundant and has been isolated from a variety of natural sources across the world, including male cones of the coniferous tree Wollemia nobilis in Rome, the roots of Ilex asprella plants in China, and straws from rice in California. Commercial production Glucose is produced industrially from starch by enzymatic hydrolysis using glucoamylase or by the use of acids. Enzymatic hydrolysis has largely displaced acid-catalyzed hydrolysis reactions. The result is glucose syrup (enzymatically with more than 90% glucose in the dry matter) with an annual worldwide production volume of 20 million tonnes (as of 2011). This is the reason for the former common name "starch sugar". The amylases most often come from Bacillus licheniformis or Bacillus subtilis (strain MN-385), which are more thermostable than the originally used enzymes. Starting in 1982, pullulanases from Aspergillus niger were used in the production of glucose syrup to convert amylopectin to starch (amylose), thereby increasing the yield of glucose. The reaction is carried out at pH 4.6–5.2 and a temperature of 55–60 °C. 
Corn syrup has between 20% and 95% glucose in the dry matter. The Japanese form of glucose syrup, Mizuame, is made from sweet potato or rice starch. Maltodextrin contains about 20% glucose. Many crops can be used as the source of starch. Maize, rice, wheat, cassava, potato, barley, sweet potato, corn husk and sago are all used in various parts of the world. In the United States, corn starch (from maize) is used almost exclusively. Some commercial glucose occurs as a component of invert sugar, a roughly 1:1 mixture of glucose and fructose that is produced from sucrose. In principle, cellulose could be hydrolyzed to glucose, but this process is not yet commercially practical. Conversion to fructose In the US, corn (more precisely, corn syrup) is used almost exclusively as the glucose source for the production of isoglucose, which is a mixture of glucose and fructose, since fructose has a higher sweetening power with the same physiological calorific value of 374 kilocalories per 100 g. The annual world production of isoglucose is 8 million tonnes (as of 2011). When made from corn syrup, the final product is high-fructose corn syrup (HFCS). Commercial usage Glucose is mainly used for the production of fructose and of glucose-containing foods. In foods, it is used as a sweetener and humectant, to increase the volume and to create a softer mouthfeel. Various sources of glucose, such as grape juice (for wine) or malt (for beer), are used for fermentation to ethanol during the production of alcoholic beverages. Most soft drinks in the US use HFCS-55 (with a fructose content of 55% in the dry mass), while most other HFCS-sweetened foods in the US use HFCS-42 (with a fructose content of 42% in the dry mass). In Mexico, on the other hand, soft drinks are sweetened by cane sugar, which has a higher sweetening power. In addition, glucose syrup is used, inter alia, in the production of confectionery such as candies, toffee and fondant. Typical chemical reactions of glucose when heated under water-free conditions are caramelization and, in the presence of amino acids, the Maillard reaction. In addition, various organic acids can be biotechnologically produced from glucose, for example by fermentation with Clostridium thermoaceticum to produce acetic acid, with Penicillium notatum for the production of araboascorbic acid, with Rhizopus delemar for the production of fumaric acid, with Aspergillus niger for the production of gluconic acid, with Candida brumptii to produce isocitric acid, with Aspergillus terreus for the production of itaconic acid, with Pseudomonas fluorescens for the production of 2-ketogluconic acid, with Gluconobacter suboxydans for the production of 5-ketogluconic acid, with Aspergillus oryzae for the production of kojic acid, with Lactobacillus delbrueckii for the production of lactic acid, with Lactobacillus brevis for the production of malic acid, with Propionibacter shermanii for the production of propionic acid, with Pseudomonas aeruginosa for the production of pyruvic acid and with Gluconobacter suboxydans for the production of tartaric acid. Potent, bioactive natural products like triptolide that inhibit mammalian transcription via inhibition of the XPB subunit of the general transcription factor TFIIH have recently been reported as glucose conjugates for targeting hypoxic cancer cells with increased glucose transporter expression. 
Recently, glucose has been gaining commercial use as a key component of "kits" containing lactic acid and insulin intended to induce hypoglycemia and hyperlactatemia to combat different cancers and infections. Analysis When a glucose molecule is to be detected at a certain position in a larger molecule, nuclear magnetic resonance spectroscopy, X-ray crystallography analysis or lectin immunostaining is performed with concanavalin A reporter enzyme conjugate, which binds only glucose or mannose. Classical qualitative detection reactions These reactions have only historical significance: Fehling test The Fehling test is a classic method for the detection of aldoses. Due to mutarotation, glucose is always present to a small extent as an open-chain aldehyde. By adding the Fehling reagents (Fehling (I) solution and Fehling (II) solution), the aldehyde group is oxidized to a carboxylic acid, while the Cu2+ tartrate complex is reduced to Cu+ and forms a brick red precipitate (Cu2O). Tollens test In the Tollens test, after addition of ammoniacal AgNO3 to the sample solution, glucose reduces Ag+ to elemental silver. Barfoed test In Barfoed's test, a solution of dissolved copper acetate, sodium acetate and acetic acid is added to the solution of the sugar to be tested and subsequently heated in a water bath for a few minutes. Glucose and other monosaccharides rapidly produce a reddish color and reddish brown copper(I) oxide (Cu2O). Nylander's test As a reducing sugar, glucose reacts in the Nylander's test. Other tests Upon heating a dilute potassium hydroxide solution with glucose to 100 °C, a strong reddish browning and a caramel-like odor develops. Concentrated sulfuric acid dissolves dry glucose without blackening at room temperature forming sugar sulfuric acid. In a yeast solution, alcoholic fermentation produces carbon dioxide in the ratio of 2.0454 molecules of glucose to one molecule of CO2. Glucose forms a black mass with stannous chloride. In an ammoniacal silver solution, glucose (as well as lactose and dextrin) leads to the deposition of silver. In an ammoniacal lead acetate solution, white lead glycoside is formed in the presence of glucose, which becomes less soluble on cooking and turns brown. In an ammoniacal copper solution, yellow copper oxide hydrate is formed with glucose at room temperature, while red copper oxide is formed during boiling (same with dextrin, except for with an ammoniacal copper acetate solution). With Hager's reagent, glucose forms mercury oxide during boiling. An alkaline bismuth solution is used to precipitate elemental, black-brown bismuth with glucose. Glucose boiled in an ammonium molybdate solution turns the solution blue. A solution with indigo carmine and sodium carbonate destains when boiled with glucose. Instrumental quantification Refractometry and polarimetry In concentrated solutions of glucose with a low proportion of other carbohydrates, its concentration can be determined with a polarimeter. For sugar mixtures, the concentration can be determined with a refractometer, for example in the Oechsle determination in the course of the production of wine. Photometric enzymatic methods in solution The enzyme glucose oxidase (GOx) converts glucose into gluconic acid and hydrogen peroxide while consuming oxygen. Another enzyme, peroxidase, catalyzes a chromogenic reaction (Trinder reaction) of phenol with 4-aminoantipyrine to a purple dye. 
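To illustrate how such an enzymatic-photometric assay is typically evaluated, here is a minimal sketch of a linear calibration: absorbances measured for glucose standards are fitted to a straight line, and an unknown sample is read off the fit. All concentrations and absorbance values are invented for illustration and are not from this article; real instruments perform an equivalent fit internally.

```python
# Sketch: linear calibration for an enzymatic-photometric glucose assay
# (e.g. GOx/peroxidase/Trinder chemistry). All numbers are illustrative.
# Requires Python 3.10+ for statistics.linear_regression.
from statistics import linear_regression

standards_mgdl = [0, 50, 100, 200, 400]        # glucose standards (mg/dL)
absorbances = [0.02, 0.11, 0.21, 0.40, 0.79]   # measured absorbances (made up)

# Fit absorbance = slope * concentration + intercept.
slope, intercept = linear_regression(standards_mgdl, absorbances)

def absorbance_to_glucose(a):
    """Invert the calibration line to estimate concentration in mg/dL."""
    return (a - intercept) / slope

sample_absorbance = 0.18                        # unknown sample (made up)
print(f"estimated glucose: {absorbance_to_glucose(sample_absorbance):.0f} mg/dL")
```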
Photometric test-strip method The test-strip method employs the above-mentioned enzymatic conversion of glucose to gluconic acid to form hydrogen peroxide. The reagents are immobilised on a polymer matrix, the so-called test strip, which assumes a more or less intense color. This can be measured reflectometrically at 510 nm with the aid of an LED-based handheld photometer. This allows routine blood sugar determination by nonscientists. In addition to the reaction of phenol with 4-aminoantipyrine, new chromogenic reactions have been developed that allow photometry at higher wavelengths (550 nm, 750 nm). Amperometric glucose sensor The electroanalysis of glucose is also based on the enzymatic reaction mentioned above. The produced hydrogen peroxide can be amperometrically quantified by anodic oxidation at a potential of 600 mV. The GOx is immobilized on the electrode surface or in a membrane placed close to the electrode. Precious metals such as platinum or gold are used in electrodes, as well as carbon nanotube electrodes, which e.g. are doped with boron. Cu–CuO nanowires are also used as enzyme-free amperometric electrodes, reaching a detection limit of 50 μmol/L. A particularly promising method is the so-called "enzyme wiring", where the electron flowing during the oxidation is transferred via a molecular wire directly from the enzyme to the electrode. Other sensory methods There are a variety of other chemical sensors for measuring glucose. Given the importance of glucose analysis in the life sciences, numerous optical probes have also been developed for saccharides based on the use of boronic acids, which are particularly useful for intracellular sensory applications where other (optical) methods are not or only conditionally usable. In addition to the organic boronic acid derivatives, which often bind highly specifically to the 1,2-diol groups of sugars, there are also other probe concepts classified by functional mechanisms which use selective glucose-binding proteins (e.g. concanavalin A) as a receptor. Furthermore, methods were developed which indirectly detect the glucose concentration via the concentration of metabolized products, e.g. by the consumption of oxygen using fluorescence-optical sensors. Finally, there are enzyme-based concepts that use the intrinsic absorbance or fluorescence of (fluorescence-labeled) enzymes as reporters. Copper iodometry Glucose can be quantified by copper iodometry. Chromatographic methods In particular, for the analysis of complex mixtures containing glucose, e.g. in honey, chromatographic methods such as high performance liquid chromatography and gas chromatography are often used in combination with mass spectrometry. Taking into account the isotope ratios, it is also possible to reliably detect honey adulteration by added sugars with these methods. Derivatization using silylation reagents is commonly used. Also, the proportions of di- and trisaccharides can be quantified. In vivo analysis Glucose uptake in cells of organisms is measured with 2-deoxy-D-glucose or fluorodeoxyglucose. (18F)fluorodeoxyglucose is used as a tracer in positron emission tomography in oncology and neurology, where it is by far the most commonly used diagnostic agent.
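Referring back to the amperometric approach above, a sensor's output current is typically converted to a glucose concentration through a linear sensitivity established by calibration, with readings checked against the detection limit (50 μmol/L is the figure quoted above for Cu–CuO nanowire electrodes). The following sketch only illustrates that bookkeeping; the sensitivity and baseline values are assumed placeholders, not properties of any real electrode.

```python
# Illustrative sketch of an amperometric glucose reading: convert a measured anodic
# current to concentration via an assumed linear sensitivity, and flag readings below
# the detection limit quoted in the text (50 umol/L for Cu-CuO nanowire electrodes).
# The sensitivity and baseline values are hypothetical, not datasheet figures.

SENSITIVITY_NA_PER_UMOL_L = 2.0   # assumed: 2.0 nA of current per umol/L of glucose
BASELINE_NA = 5.0                 # assumed background current in nA
DETECTION_LIMIT_UMOL_L = 50.0     # from the text (Cu-CuO nanowire electrodes)

def glucose_from_current(current_na: float) -> float:
    """Estimate glucose concentration (umol/L) from the measured current (nA)."""
    return max(current_na - BASELINE_NA, 0.0) / SENSITIVITY_NA_PER_UMOL_L

for current in (15.0, 105.0, 805.0):
    c = glucose_from_current(current)
    status = "below detection limit" if c < DETECTION_LIMIT_UMOL_L else "quantifiable"
    print(f"{current:6.1f} nA -> {c:7.1f} umol/L ({status})")
```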
Biology and health sciences
Biochemistry and molecular biology
null
12967
https://en.wikipedia.org/wiki/Geologic%20time%20scale
Geologic time scale
The geologic time scale or geological time scale (GTS) is a representation of time based on the rock record of Earth. It is a system of chronological dating that uses chronostratigraphy (the process of relating strata to time) and geochronology (a scientific branch of geology that aims to determine the age of rocks). It is used primarily by Earth scientists (including geologists, paleontologists, geophysicists, geochemists, and paleoclimatologists) to describe the timing and relationships of events in geologic history. The time scale has been developed through the study of rock layers and the observation of their relationships and identifying features such as lithologies, paleomagnetic properties, and fossils. The definition of standardised international units of geological time is the responsibility of the International Commission on Stratigraphy (ICS), a constituent body of the International Union of Geological Sciences (IUGS), whose primary objective is to precisely define global chronostratigraphic units of the International Chronostratigraphic Chart (ICC) that are used to define divisions of geological time. The chronostratigraphic divisions are in turn used to define geochronologic units. Principles The geologic time scale is a way of representing deep time based on events that have occurred throughout Earth's history, a time span of about 4.54 ± 0.05 Ga (4.54 billion years). It chronologically organises strata, and subsequently time, by observing fundamental changes in stratigraphy that correspond to major geological or paleontological events. For example, the Cretaceous–Paleogene extinction event marks the lower boundary of the Paleogene System/Period and thus the boundary between the Cretaceous and Paleogene systems/periods. For divisions prior to the Cryogenian, arbitrary numeric boundary definitions (Global Standard Stratigraphic Ages, GSSAs) are used to divide geologic time. Proposals have been made to better reconcile these divisions with the rock record. Historically, regional geologic time scales were used due to the litho- and biostratigraphic differences around the world in time-equivalent rocks. The ICS has long worked to reconcile conflicting terminology by standardising globally significant and identifiable stratigraphic horizons that can be used to define the lower boundaries of chronostratigraphic units. Defining chronostratigraphic units in such a manner allows for the use of global, standardised nomenclature. The International Chronostratigraphic Chart represents this ongoing effort. Several key principles are used to determine the relative relationships of rocks and thus their chronostratigraphic position. The law of superposition states that in undeformed stratigraphic sequences the oldest strata will lie at the bottom of the sequence, while newer material stacks upon the surface. In practice, this means a younger rock will lie on top of an older rock unless there is evidence to suggest otherwise. The principle of original horizontality states that layers of sediment will originally be deposited horizontally under the action of gravity. However, it is now known that not all sedimentary layers are deposited purely horizontally, but this principle is still a useful concept. The principle of lateral continuity states that layers of sediment extend laterally in all directions until either thinning out or being cut off by a different rock layer, i.e. they are laterally continuous.
Layers do not extend indefinitely; their limits are controlled by the amount and type of sediment in a sedimentary basin, and the geometry of that basin. The principle of cross-cutting relationships states that a rock that cuts across another rock must be younger than the rock it cuts across. The law of included fragments states that small fragments of one type of rock that are embedded in a second type of rock must have formed first, and were included when the second rock was forming. The relationships of unconformities, which are geologic features representing a gap in the geologic record, are also used. Unconformities are formed during periods of erosion or non-deposition, indicating non-continuous sediment deposition. Observing the type and relationships of unconformities in strata allows geologists to understand the relative timing of the strata. The principle of faunal succession (where applicable) states that rock strata contain distinctive sets of fossils that succeed each other vertically in a specific and reliable order. This allows for a correlation of strata even when the horizon between them is not continuous. Divisions of geologic time The geologic time scale is divided into chronostratigraphic units and their corresponding geochronologic units. An eon is the largest geochronologic time unit and is equivalent to a chronostratigraphic eonothem. There are four formally defined eons: the Hadean, Archean, Proterozoic and Phanerozoic. An era is the second largest geochronologic time unit and is equivalent to a chronostratigraphic erathem. There are ten defined eras: the Eoarchean, Paleoarchean, Mesoarchean, Neoarchean, Paleoproterozoic, Mesoproterozoic, Neoproterozoic, Paleozoic, Mesozoic and Cenozoic, with none from the Hadean eon. A period is equivalent to a chronostratigraphic system. There are 22 defined periods, with the current being the Quaternary period. As an exception, two subperiods are used for the Carboniferous Period. An epoch is the second smallest geochronologic unit. It is equivalent to a chronostratigraphic series. There are 37 defined epochs and one informal one. The current epoch is the Holocene. There are also 11 subepochs, which are all within the Neogene and Quaternary. The use of subepochs as formal units in international chronostratigraphy was ratified in 2022. An age is the smallest hierarchical geochronologic unit. It is equivalent to a chronostratigraphic stage. There are 96 formal and five informal ages. The current age is the Meghalayan. A chron is a non-hierarchical formal geochronologic unit of unspecified rank and is equivalent to a chronostratigraphic chronozone. These correlate with magnetostratigraphic, lithostratigraphic, or biostratigraphic units as they are based on previously defined stratigraphic units or geologic features. The subdivisions early and late are used as the geochronologic equivalents of the chronostratigraphic lower and upper, e.g., Early Triassic Period (geochronologic unit) is used in place of Lower Triassic System (chronostratigraphic unit). Rocks representing a given chronostratigraphic unit are that chronostratigraphic unit, and the time they were laid down in is the geochronologic unit, e.g., the rocks that represent the Silurian System are the Silurian System, and they were deposited during the Silurian Period. This definition means the numeric age of a geochronologic unit can be changed (and is more often subject to change) when refined by geochronometry, while the equivalent chronostratigraphic unit (the revision of which is less frequent) remains unchanged.
For example, in early 2022, the boundary between the Ediacaran and Cambrian periods (geochronologic units) was revised from 541 Ma to 538.8 Ma, but the rock definition of the boundary (GSSP) at the base of the Cambrian, and thus the boundary between the Ediacaran and Cambrian systems (chronostratigraphic units), has not been changed; rather, the absolute age has merely been refined. Terminology Chronostratigraphy is the element of stratigraphy that deals with the relation between rock bodies and the relative measurement of geological time. It is the process where distinct strata between defined stratigraphic horizons are assigned to represent a relative interval of geologic time. A chronostratigraphic unit is a body of rock, layered or unlayered, that is defined between specified stratigraphic horizons which represent specified intervals of geologic time. They include all rocks representative of a specific interval of geologic time, and only this time span. Eonothem, erathem, system, series, subseries, stage, and substage are the hierarchical chronostratigraphic units. A geochronologic unit is a subdivision of geologic time. It is a numeric representation of an intangible property (time). These units are arranged in a hierarchy: eon, era, period, epoch, subepoch, age, and subage. Geochronology is the scientific branch of geology that aims to determine the age of rocks, fossils, and sediments either through absolute (e.g., radiometric dating) or relative means (e.g., stratigraphic position, paleomagnetism, stable isotope ratios). Geochronometry is the field of geochronology that numerically quantifies geologic time. A Global Boundary Stratotype Section and Point (GSSP) is an internationally agreed-upon reference point on a stratigraphic section that defines the lower boundaries of stages on the geologic time scale. (Recently, GSSPs have also been used to define the base of a system.) A Global Standard Stratigraphic Age (GSSA) is a numeric-only, chronologic reference point used to define the base of geochronologic units prior to the Cryogenian. These points are arbitrarily defined. They are used where GSSPs have not yet been established. Research is ongoing to define GSSPs for the base of all units that are currently defined by GSSAs. The standard international units of the geologic time scale are published by the International Commission on Stratigraphy on the International Chronostratigraphic Chart; however, regional terms are still in use in some areas. The numeric values on the International Chronostratigraphic Chart are represented by the unit Ma (megaannum, for 'million years'). For example, the lower boundary of the Jurassic Period is given as 201.4 ± 0.2 Ma, i.e. it is defined as 201,400,000 years old with an uncertainty of 200,000 years. Other SI prefix units commonly used by geologists are Ga (gigaannum, billion years) and ka (kiloannum, thousand years), with the latter often represented in calibrated units (before present). Naming of geologic time The names of geologic time units are defined for chronostratigraphic units, with the corresponding geochronologic unit sharing the same name with a change to the suffix (e.g. Phanerozoic Eonothem becomes the Phanerozoic Eon). Names of erathems in the Phanerozoic were chosen to reflect major changes in the history of life on Earth: Paleozoic (old life), Mesozoic (middle life), and Cenozoic (new life). Names of systems are diverse in origin, with some indicating chronologic position (e.g., Paleogene), while others are named for lithology (e.g., Cretaceous), geography (e.g., Permian), or are tribal (e.g., Ordovician) in origin.
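The unit hierarchy and the Ma notation described in the terminology above can be made concrete with a small sketch. The nesting below (eon > era > period > epoch > age) uses currently defined units named in this article purely as an illustration, not an official ICS data format, and the conversion simply restates that 1 Ma equals one million years.

```python
# Sketch of the geochronologic hierarchy and Ma arithmetic described above.
# The nesting (eon > era > period > epoch > age) is an illustration only,
# not an official ICS data structure.

holocene_placement = {
    "eon": "Phanerozoic",
    "era": "Cenozoic",
    "period": "Quaternary",
    "epoch": "Holocene",
    "age": "Meghalayan",
}

MA_IN_YEARS = 1_000_000  # 1 Ma (megaannum) = one million years

def ma_to_years(value_ma: float) -> float:
    return value_ma * MA_IN_YEARS

# The lower boundary of the Jurassic Period quoted above: 201.4 +/- 0.2 Ma
base_jurassic_years = ma_to_years(201.4)
uncertainty_years = ma_to_years(0.2)
print(f"{base_jurassic_years:,.0f} years +/- {uncertainty_years:,.0f} years")
# -> 201,400,000 years +/- 200,000 years, matching the chart value
```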
Most currently recognised series and subseries are named for their position within a system/series (early/middle/late); however, the International Commission on Stratigraphy advocates for all new series and subseries to be named for a geographic feature in the vicinity of its stratotype or type locality. The name of stages should also be derived from a geographic feature in the locality of its stratotype or type locality. Informally, the time before the Cambrian is often referred to as the Precambrian or pre-Cambrian (Supereon). History of the geologic time scale Early history The modern geological time scale was not formulated until 1911, by Arthur Holmes (1890–1965), who drew inspiration from James Hutton (1726–1797), a Scottish geologist who presented the idea of uniformitarianism, the theory that changes to the Earth's crust resulted from continuous and uniform processes. The broader concept of the relation between rocks and time can be traced back to (at least) the philosophers of Ancient Greece from 1200 BC to 600 AD. Xenophanes of Colophon (c. 570–487 BCE) observed rock beds with fossils of seashells located above the sea-level, viewed them as once living organisms, and used this to imply an unstable relationship in which the sea had at times transgressed over the land and at other times had regressed. This view was shared by a few of Xenophanes's scholars and those that followed, including Aristotle (384–322 BC), who (with additional observations) reasoned that the positions of land and sea had changed over long periods of time. The concept of deep time was also recognized by Chinese naturalist Shen Kuo (1031–1095) and Islamic scientist-philosophers, notably the Brothers of Purity, who wrote on the processes of stratification over the passage of time in their treatises. Their work likely inspired that of the 11th-century Persian polymath Avicenna (Ibn Sînâ, 980–1037), who wrote in The Book of Healing (1027) on the concept of stratification and superposition, pre-dating Nicolas Steno by more than six centuries. Avicenna also recognized fossils as "petrifications of the bodies of plants and animals", with the 13th-century Dominican bishop Albertus Magnus (c. 1200–1280), who drew from Aristotle's natural philosophy, extending this into a theory of a petrifying fluid. These works appeared to have little influence on scholars in Medieval Europe, who looked to the Bible to explain the origins of fossils and sea-level changes, often attributing these to the 'Deluge', including Ristoro d'Arezzo in 1282. It was not until the Italian Renaissance that Leonardo da Vinci (1452–1519) would reinvigorate the study of the relationships between stratification, relative sea-level change, and time, denouncing the attribution of fossils to the 'Deluge'. These views of da Vinci remained unpublished, and thus lacked influence at the time; however, questions of fossils and their significance were pursued and, while views against Genesis were not readily accepted and dissent from religious doctrine was in some places unwise, scholars such as Girolamo Fracastoro shared da Vinci's views and found the attribution of fossils to the 'Deluge' absurd. Although many theories surrounding philosophy and concepts of rocks were developed in earlier years, "the first serious attempts to formulate a geological time scale that could be applied anywhere on Earth were made in the late 18th century." Later, in the 19th century, academics further developed theories on stratification.
William Smith, often referred to as the "Father of Geology", developed his theories through observation rather than by drawing from the scholars that came before him. Smith's work was primarily based on his detailed study of rock layers and fossils during his time, and he created "the first map to depict so many rock formations over such a large area". After studying rock layers and the fossils they contained, Smith concluded that each layer of rock contained distinct material that could be used to identify and correlate rock layers across different regions of the world. Smith developed the concept of faunal succession, or the idea that fossils can serve as a marker for the age of the strata they are found in, and published his ideas in his 1816 book, "Strata Identified by Organized Fossils". Establishment of primary principles Niels Stensen, more commonly known as Nicolas Steno (1638–1686), is credited with establishing four of the guiding principles of stratigraphy. In De solido intra solidum naturaliter contento dissertationis prodromus, Steno states: When any given stratum was being formed, all the matter resting on it was fluid and, therefore, when the lowest stratum was being formed, none of the upper strata existed. ... strata which are either perpendicular to the horizon or inclined to it were at one time parallel to the horizon. When any given stratum was being formed, it was either encompassed at its edges by another solid substance or it covered the whole globe of the earth. Hence, it follows that wherever bared edges of strata are seen, either a continuation of the same strata must be looked for or another solid substance must be found that kept the material of the strata from being dispersed. If a body or discontinuity cuts across a stratum, it must have formed after that stratum. Respectively, these are the principles of superposition, original horizontality, lateral continuity, and cross-cutting relationships. From this Steno reasoned that strata were laid down in succession and inferred relative time (in Steno's belief, time from Creation). While Steno's principles were simple and attracted much attention, applying them proved challenging. These basic principles, albeit with improved and more nuanced interpretations, still form the foundational principles of determining the correlation of strata relative to geologic time. Over the course of the 18th century, geologists realised that: sequences of strata often become eroded, distorted, tilted, or even inverted after deposition; strata laid down at the same time in different areas could have entirely different appearances; and the strata of any given area represented only part of Earth's long history. Formulation of a modern geologic time scale The earliest apparent formal division of the geologic record with respect to time was introduced during the era of Biblical models by Thomas Burnet, who applied a two-fold terminology to mountains by identifying "montes primarii" for rock formed at the time of the 'Deluge', and younger "monticulos secundarios" formed later from the debris of the "primarii". Anton Moro (1687–1784) also used primary and secondary divisions for rock units, but his mechanism was volcanic. In this early version of the Plutonism theory, the interior of Earth was seen as hot, and this drove the creation of primary igneous and metamorphic rocks, while secondary rocks formed from contorted and fossiliferous sediments.
These primary and secondary divisions were expanded on by Giovanni Targioni Tozzetti (1712–1783) and Giovanni Arduino (1713–1795) to include tertiary and quaternary divisions. These divisions were used to describe both the time during which the rocks were laid down and the collection of rocks themselves (i.e., it was correct to say Tertiary rocks, and Tertiary Period). Only the Quaternary division is retained in the modern geologic time scale, while the Tertiary division was in use until the early 21st century. The Neptunism and Plutonism theories would compete into the early 19th century, with a key driver for resolution of this debate being the work of James Hutton (1726–1797), in particular his Theory of the Earth, first presented before the Royal Society of Edinburgh in 1785. Hutton's theory would later become known as uniformitarianism, popularised by John Playfair (1748–1819) and later Charles Lyell (1797–1875) in his Principles of Geology. Their theories strongly contested the 6,000-year age of the Earth, as determined by James Ussher via Biblical chronology, that was accepted at the time by western religion. Instead, using geological evidence, they argued that Earth was much older, cementing the concept of deep time. During the early 19th century, William Smith, Georges Cuvier, Jean d'Omalius d'Halloy, and Alexandre Brongniart pioneered the systematic division of rocks by stratigraphy and fossil assemblages. These geologists began to use the local names given to rock units in a wider sense, correlating strata across national and continental boundaries based on their similarity to each other. Many of the names below erathem/era rank in use on the modern ICC/GTS were determined during the early to mid-19th century. The advent of geochronometry During the 19th century, the debate regarding Earth's age was renewed, with geologists estimating ages based on denudation rates and sedimentary thicknesses or ocean chemistry, and physicists determining ages for the cooling of the Earth or the Sun using basic thermodynamics or orbital physics. These estimations varied from 15,000 million years to 0.075 million years depending on method and author, but the estimations of Lord Kelvin and Clarence King were held in high regard at the time due to their pre-eminence in physics and geology. All of these early geochronometric determinations would later prove to be incorrect. The discovery of radioactive decay by Henri Becquerel, Marie Curie, and Pierre Curie laid the groundwork for radiometric dating, but the knowledge and tools required for accurate determination of radiometric ages would not be in place until the mid-1950s. Early attempts at determining ages of uranium minerals and rocks by Ernest Rutherford, Bertram Boltwood, Robert Strutt, and Arthur Holmes would culminate in what are considered the first international geological time scales, by Holmes in 1911 and 1913. The discovery of isotopes in 1913 by Frederick Soddy, and the developments in mass spectrometry pioneered by Francis William Aston, Arthur Jeffrey Dempster, and Alfred O. C. Nier during the early to mid-20th century, would finally allow for the accurate determination of radiometric ages, with Holmes publishing several revisions to his geological time scale, his final version appearing in 1960. Modern international geological time scale The establishment of the IUGS in 1961 and the acceptance of the Commission on Stratigraphy (applied in 1965) to become a member commission of IUGS led to the founding of the ICS.
One of the primary objectives of the ICS is "the establishment, publication and revision of the ICS International Chronostratigraphic Chart which is the standard, reference global Geological Time Scale to include the ratified Commission decisions". Following on from Holmes, several A Geological Time Scale books were published in 1982, 1989, 2004, 2008, 2012, 2016, and 2020. However, since 2013, the ICS has taken responsibility for producing and distributing the ICC, citing the commercial nature, independent creation, and lack of oversight by the ICS of the previously published GTS versions (GTS books prior to 2013), although these versions were published in close association with the ICS. Subsequent Geologic Time Scale books (2016 and 2020) are commercial publications with no oversight from the ICS, and do not entirely conform to the chart produced by the ICS. The ICS-produced GTS charts are versioned (year/month) beginning at v2013/01. At least one new version is published each year, incorporating any changes ratified by the ICS since the prior version. Major proposed revisions to the ICC Proposed Anthropocene Series/Epoch First suggested in 2000, the Anthropocene is a proposed epoch/series for the most recent time in Earth's history. While still informal, it is a widely used term to denote the present geologic time interval, in which many conditions and processes on Earth are profoundly altered by human impact. The Anthropocene has not been ratified by the ICS; however, in May 2019 the Anthropocene Working Group voted in favour of submitting a formal proposal to the ICS for the establishment of the Anthropocene Series/Epoch. Nevertheless, the definition of the Anthropocene as a geologic time period rather than a geologic event remains controversial and difficult. Proposals for revisions to pre-Cryogenian timeline Shields et al. 2021 An international working group of the ICS on pre-Cryogenian chronostratigraphic subdivision has outlined a template to improve the pre-Cryogenian geologic time scale based on the rock record, to bring it in line with the post-Tonian geologic time scale. This work assessed the geologic history of the currently defined eons and eras of the pre-Cambrian, and the proposals in the "Geological Time Scale" books of 2004, 2012, and 2020. Their recommended revisions of the pre-Cryogenian geologic time scale were (changes from the current scale [v2023/09] are italicised): Three divisions of the Archean instead of four, by dropping the Eoarchean, and revisions to their geochronometric definition, along with the repositioning of the Siderian into the latest Neoarchean and a potential Kratian division in the Neoarchean: Archean (4000–2450 Ma); Paleoarchean (4000–3500 Ma); Mesoarchean (3500–3000 Ma); Neoarchean (3000–2450 Ma); Kratian (no fixed time given, prior to the Siderian) – from Greek κράτος (krátos) 'strength'; Siderian (?–2450 Ma) – moved from the Proterozoic to the end of the Archean, no start time given, with the base of the Paleoproterozoic defining the end of the Siderian. Refinement of the geochronometric divisions of the Proterozoic and Paleoproterozoic, repositioning of the Statherian into the Mesoproterozoic, a new Skourian period/system in the Paleoproterozoic, and a new Kleisian or Syndian period/system in the Neoproterozoic: Paleoproterozoic (2450–1800 Ma); Skourian (2450–2300 Ma) – from Greek σκουριά (skouriá) 'rust';
Rhyacian (2300–2050 Ma); Orosirian (2050–1800 Ma); Mesoproterozoic (1800–1000 Ma); Statherian (1800–1600 Ma); Calymmian (1600–1400 Ma); Ectasian (1400–1200 Ma); Stenian (1200–1000 Ma); Neoproterozoic (1000–538.8 Ma); Kleisian or Syndian (1000–800 Ma) – respectively from Greek κλείσιμο (kleísimo) 'closure' and σύνδεση (sýndesi) 'connection'; Tonian (800–720 Ma); Cryogenian (720–635 Ma); Ediacaran (635–538.8 Ma). The proposed pre-Cambrian timeline (Shields et al. 2021, ICS working group on pre-Cryogenian chronostratigraphy) and the current ICC pre-Cambrian timeline (v2023/09) are shown to scale in the accompanying charts. Van Kranendonk et al. 2012 (GTS2012) The book Geologic Time Scale 2012 was the last commercial publication of an international chronostratigraphic chart that was closely associated with the ICS. It included a proposal to substantially revise the pre-Cryogenian time scale to reflect important events such as the formation of the Solar System and the Great Oxidation Event, among others, while at the same time maintaining most of the previous chronostratigraphic nomenclature for the pertinent time span. These proposed changes have not been accepted by the ICS. The proposed changes (changes from the current scale [v2023/09]) are italicised: Hadean Eon (4567–4030 Ma); Chaotian Era/Erathem (4567–4404 Ma) – the name alluding both to the mythological Chaos and the chaotic phase of planet formation; Jack Hillsian or Zirconian Era/Erathem (4404–4030 Ma) – both names allude to the Jack Hills Greenstone Belt, which provided the oldest mineral grains on Earth, zircons; Archean Eon/Eonothem (4030–2420 Ma); Paleoarchean Era/Erathem (4030–3490 Ma); Acastan Period/System (4030–3810 Ma) – named after the Acasta Gneiss, one of the oldest preserved pieces of continental crust; Isuan Period/System (3810–3490 Ma) – named after the Isua Greenstone Belt; Mesoarchean Era/Erathem (3490–2780 Ma); Vaalbaran Period/System (3490–3020 Ma) – based on the names of the Kaapvaal (Southern Africa) and Pilbara (Western Australia) cratons, to reflect the growth of stable continental nuclei or proto-cratonic kernels; Pongolan Period/System (3020–2780 Ma) – named after the Pongola Supergroup, in reference to the well preserved evidence of terrestrial microbial communities in those rocks; Neoarchean Era/Erathem (2780–2420 Ma); Methanian Period/System (2780–2630 Ma) – named for the inferred predominance of methanotrophic prokaryotes; Siderian Period/System (2630–2420 Ma) – named for the voluminous banded iron formations formed within its duration; Proterozoic Eon/Eonothem (2420–538.8 Ma); Paleoproterozoic Era/Erathem (2420–1780 Ma); Oxygenian Period/System (2420–2250 Ma) – named for displaying the first evidence for a global oxidising atmosphere; Jatulian or Eukaryian Period/System (2250–2060 Ma) – names are respectively for the Lomagundi–Jatuli δ¹³C isotopic excursion event spanning its duration, and for the (proposed) first fossil appearance of eukaryotes; Columbian Period/System (2060–1780 Ma) – named after the supercontinent Columbia; Mesoproterozoic Era/Erathem (1780–850 Ma); Rodinian Period/System (1780–850 Ma) – named after the supercontinent Rodinia (stable environment). The proposed pre-Cambrian timeline (GTS2012) and the current ICC pre-Cambrian timeline (v2023/09) are shown to scale in the accompanying charts. Table of geologic time The following table summarises the major events and characteristics of the divisions making up the geologic time scale of Earth. This table is arranged with the most recent geologic periods at the top, and the oldest at the bottom.
The height of each table entry does not correspond to the duration of each subdivision of time. As such, this table is not to scale and does not accurately represent the relative time-spans of each geochronologic unit. While the Phanerozoic Eon looks longer than the rest, it merely spans ~539 million years (~12% of Earth's history), whilst the previous three eons collectively span ~4,000 million years (~88% of Earth's history). This bias toward the most recent eon is in part due to the relative lack of information about events that occurred during the first three eons compared to the current eon (the Phanerozoic). The use of subseries/subepochs has been ratified by the ICS. While some regional terms are still in use, the table of geologic time conforms to the nomenclature, ages, and colour codes set forth by the International Commission on Stratigraphy in the official International Chronostratigraphic Chart. The International Commission on Stratigraphy also provides an online interactive version of this chart. The interactive version is based on a service delivering a machine-readable Resource Description Framework/Web Ontology Language representation of the time scale, which is available through the Commission for the Management and Application of Geoscience Information GeoSciML project as a service and at a SPARQL end-point. Non-Earth based geologic time scales Some other planets and satellites in the Solar System have sufficiently rigid structures to have preserved records of their own histories, for example, Venus, Mars and the Earth's Moon. Dominantly fluid planets, such as the giant planets, do not comparably preserve their history. Apart from the Late Heavy Bombardment, events on other planets probably had little direct influence on the Earth, and events on Earth had correspondingly little effect on those planets. Construction of a time scale that links the planets is, therefore, of only limited relevance to the Earth's time scale, except in a Solar System context. The existence, timing, and terrestrial effects of the Late Heavy Bombardment are still a matter of debate. Lunar (selenological) time scale The geologic history of Earth's Moon has been divided into a time scale based on geomorphological markers, namely impact cratering, volcanism, and erosion. Dividing the Moon's history in this manner means that the time scale boundaries do not imply fundamental changes in geological processes, unlike Earth's geologic time scale. Five geologic systems/periods (Pre-Nectarian, Nectarian, Imbrian, Eratosthenian, Copernican), with the Imbrian divided into two series/epochs (Early and Late), were defined in the latest Lunar geologic time scale. The Moon is unique in the Solar System in that it is the only other body from which humans have rock samples with a known geological context. Martian geologic time scale The geological history of Mars has been divided into two alternate time scales. The first time scale for Mars was developed by studying the impact crater densities on the Martian surface. Through this method, four periods have been defined: the Pre-Noachian (~4,500–4,100 Ma), Noachian (~4,100–3,700 Ma), Hesperian (~3,700–3,000 Ma), and Amazonian (~3,000 Ma to present). A second time scale is based on mineral alteration observed by the OMEGA spectrometer on board the Mars Express. Using this method, three periods were defined: the Phyllocian (~4,500–4,000 Ma), Theiikian (~4,000–3,500 Ma), and Siderikian (~3,500 Ma to present).
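The rough proportions quoted in the introduction to the table above can be reproduced with simple arithmetic. The sketch below uses approximate, rounded eon boundary ages (treating the Hadean as beginning with Earth's formation at ~4,540 Ma); the exact percentages depend on which boundary values are adopted.

```python
# Reproduces the rough eon-duration percentages mentioned in the table introduction.
# Boundary ages (in Ma before present) are approximate, rounded values; the Hadean is
# taken to start with the formation of Earth at ~4,540 Ma.

eon_boundaries_ma = {          # (start, end) in Ma before present, approximate
    "Hadean":      (4540, 4031),
    "Archean":     (4031, 2500),
    "Proterozoic": (2500, 538.8),
    "Phanerozoic": (538.8, 0),
}

earth_age_ma = 4540
for eon, (start, end) in eon_boundaries_ma.items():
    duration = start - end
    share = 100 * duration / earth_age_ma
    print(f"{eon:12s} ~{duration:6.0f} Myr  (~{share:4.1f}% of Earth's history)")
# The Phanerozoic comes out at ~12%, and the three earlier eons together at ~88%.
```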
Physical sciences
Geological history
null
12976
https://en.wikipedia.org/wiki/Gastroenterology
Gastroenterology
Gastroenterology (from the Greek gastḗr- "belly", -énteron "intestine", and -logía "study of") is the branch of medicine focused on the digestive system and its disorders. The digestive system consists of the gastrointestinal tract, sometimes referred to as the GI tract, which includes the esophagus, stomach, small intestine and large intestine as well as the accessory organs of digestion which include the pancreas, gallbladder, and liver. The digestive system functions to move material through the GI tract via peristalsis, break down that material via digestion, absorb nutrients for use throughout the body, and remove waste from the body via defecation. Physicians who specialize in the medical specialty of gastroenterology are called gastroenterologists or sometimes GI doctors. Some of the most common conditions managed by gastroenterologists include gastroesophageal reflux disease, gastrointestinal bleeding, irritable bowel syndrome, inflammatory bowel disease (IBD) which includes Crohn's disease and ulcerative colitis, peptic ulcer disease, gallbladder and biliary tract disease, hepatitis, pancreatitis, colitis, colon polyps and cancer, nutritional problems, and many more. History Citing from Egyptian papyri, John F. Nunn identified significant knowledge of gastrointestinal diseases among practicing physicians during the periods of the pharaohs. Irynakhty, of the tenth dynasty, 2125 B.C., was a court physician specializing in gastroenterology, sleeping, and proctology. Among ancient Greeks, Hippocrates attributed digestion to concoction. Galen's concept of the stomach having four faculties was widely accepted up to modernity in the seventeenth century. 18th century Italian Lazzaro Spallanzani (1729–99) was among early physicians to disregard Galen's theories, and in 1780 he gave experimental proof on the action of gastric juice on foodstuffs. In 1767, German Johann von Zimmermann wrote an important work on dysentery. In 1777, Maximilian Stoll of Vienna described cancer of the gallbladder. 19th century In 1805, Philipp Bozzini made the first attempt to observe inside the living human body using a tube he named Lichtleiter (light-guiding instrument) to examine the urinary tract, the rectum, and the pharynx. This is the earliest description of endoscopy. Charles Emile Troisier described enlargement of lymph nodes in abdominal cancer. In 1823, William Prout discovered that stomach juices contain hydrochloric acid. In 1833, William Beaumont published Experiments and Observations on the Gastric Juice and the Physiology of Digestion following years of experimenting on test subject Alexis St. Martin. In 1868, Adolf Kussmaul, a well-known German physician, developed the gastroscope. He perfected the technique on a sword swallower. In 1871, at the society of physicians in Vienna, Carl Stoerk demonstrated an esophagoscope made of two telescopic metal tubes, initially devised by Waldenburg in 1870. In 1876, Karl Wilhelm von Kupffer described the properties of some liver cells now called Kupffer cells. In 1883, Hugo Kronecker and Samuel James Meltzer studied oesophageal manometry in humans. 20th century In 1915, Jesse McClendon tested acidity of human stomach in situ. In 1921–22, Walter Alvarez did the first electrogastrography research. Rudolf Schindler described many important diseases involving the human digestive system during World War I in his illustrated textbook and is portrayed by some as the "father of gastroscopy". He and Georg Wolf developed a semiflexible gastroscope in 1932. 
In 1932, Burrill Bernard Crohn described Crohn's disease. In 1957, Basil Hirschowitz introduced the first prototype of a fibreoptic gastroscope. 21st century In 2005, Barry Marshall and Robin Warren of Australia were awarded the Nobel Prize in Physiology or Medicine for their discovery of Helicobacter pylori (1982/1983) and its role in peptic ulcer disease. James Leavitt assisted in their research, but the Nobel Prize is not awarded posthumously so he was not included in the award. Disease classification 1. International Classification of Disease (ICD 2007)/WHO classification: Chapter XI, Diseases of the digestive system,(K00-K93) 2. MeSH subject Heading: Gastroenterology (G02.403.776.409.405) Gastroenterological diseases(C06.405) 3. National Library of Medicine Catalogue (NLM classification 2006): Digestive system(W1) Procedures Colonoscopy A procedure using a long thin tube with a camera that is passed through the anus to visualize the rectum and the entire length of the colon. The procedure is performed either to look for colon polyps and/or colon cancer in somebody without symptoms, referred to as screening, or to further evaluate symptoms including rectal bleeding, dark tarry stools, change in bowel habits or stool consistency (diarrhea, pencil-thin stool), abdominal pain, and unexplained weight loss. Before the procedure, the physician might ask the patient to stop taking certain medications including blood thinners, aspirin, diabetes medications, or nonsteroidal anti-inflammatory drugs. A bowel prep is usually taken the night before and into the morning of the procedure which consists of an enema or laxatives, either pills or powder dissolved in liquid, that will cause diarrhea. The procedure might need to be stopped and rescheduled if there is stool remaining in the colon due to an incomplete bowel prep because the physician can not adequately visualize the colon. During the procedure, the patient is sedated and the scope is used to examine the entire length of the colon looking for polyps, bleeding, or abnormal tissue. A biopsy or polyp removal can then be performed and the tissue sent to the lab for evaluation. The procedure usually takes thirty minutes to an hour followed by a one to two hour observation period. Complications include bloating, cramping, a reaction to anesthesia, bleeding, and a hole through the wall of the colon that may require repeat colonoscopy or surgery. Signs of a serious complication requiring urgent or emergent medical attention include severe pain in the abdomen, fever, bleeding that does not improve, dizziness, and weakness. Sigmoidoscopy A procedure similar to a colonoscopy using a long thin tube with a camera (scope) passed through the anus but only intended to visualize the rectum and the last part of the colon closest to the rectum. All aspects of the procedure are the same as for a colonoscopy with the exception that this procedure only lasts ten to twenty minutes and is done without sedation. This usually allows for the patient to return to normal activities immediately after the procedure is finished. Esophagogastroduodenoscopy (EGD) A procedure using a long thin tube with a camera that is passed through the mouth to view the esophagus ("esophago-"), stomach ("gastro-"), and the duodenum ("duodeno-"). It is also referred to as upper endoscopy or just endoscopy. 
The procedure is performed for further evaluation of symptoms including persistent heartburn, indigestion, vomiting blood, dark tarry stools, persistent nausea and vomiting, pain, difficulty swallowing, painful swallowing, and unexplained weight loss. It is also performed for further testing following a lab test that shows low hemoglobin levels without a known cause or an abnormal barium swallow. The procedure can be used to diagnose many disorders through direct visualization or tissue biopsy including esophageal varices, esophageal strictures, gastroesophageal reflux disease, Barrett's esophagus, cancer, celiac disease, gastritis, peptic ulcer disease, and a H. pylori infection. Intra-operative techniques can then be used for treatment of certain disorders like banding esophageal varices or dilating esophageal strictures. The patient will likely be required to not eat or drink anything starting 4 hours prior to the procedure. Sedation is usually required for patient comfort. This procedure usually lasts around thirty minutes followed by a one to two hour observation period. Side effects include bloating, nausea, and a sore throat for 1 to 2 days. Complications are rare but include reaction to the anesthesia, bleeding, and a hole through the wall of the esophagus, stomach, or small intestine which could require surgery. Signs of a serious complication requiring urgent or emergent medical attention include chest pain, problems breathing, problems swallowing, throat pain that gets worse, vomiting with blood or the appearance of "coffee-grounds", worsening abdominal pain, bloody or black tarry stool, and fever. Endoscopic Retrograde Cholangiopancreatography (ERCP) A procedure using a long thin tube with a camera passed through the mouth into the first part of the small intestine to locate, diagnose, and treat disorders related to the bile and pancreatic ducts. These ducts carry fluids that help with digesting food from the liver, gallbladder, and pancreas and can become narrowed or blocked as a result of gallstones, infection, inflammation, pancreatic pseudocysts, and tumors of the bile ducts or pancreas. As a result, one may experience back pain, yellowing of the skin, and an abnormal lab test showing an elevated bilirubin level which could necessitate this procedure. However, the procedure is not recommended if the patient has acute pancreatitis unless the level of bilirubin remains high or is increasing which could suggest the blockage is still present. The patient will likely be required to not eat or drink anything starting 8 hours prior to the procedure. After the patient is sedated, the physician will pass the scope through the mouth, esophagus, stomach, and into the duodenum to locate the opening where the ducts drain into the small intestine. The physician can then inject dye into these ducts and take X-rays which show a real time view, via fluoroscopy, allowing the physician to locate and relieve the blockage. This is done through multiple techniques including cutting the opening and creating a bigger hole for drainage, removing gallstones and other debris, dilating narrow parts of the ducts, or placing a stent which keeps the ducts open. The physician can also take a biopsy of the ducts to evaluate for cancer, infection, or inflammation. Side effects include bloating, nausea, or a sore throat for one to two days. 
Complications include pancreatitis, infection of the bile ducts or gallbladder, bleeding, reaction to the anesthesia, and perforation of any structures that the scope or its instruments pass but particularly the duodenum, bile duct, and pancreatic duct. Signs of a serious complication requiring urgent or emergent medical attention include bloody or black tarry stool, chest pain, fever, worsening abdominal pain, worsening throat pain, problems breathing, problems swallowing, vomit that is bloody or looks like coffee-grounds. Most of the time complications from this procedure require hospitalization for treatment. Disorders Esophagus Gastroesophageal reflux disease (GERD) A condition that is a result of stomach contents consistently coming back up into the esophagus causing troublesome symptoms or complications. Symptoms are considered troublesome based on how disruptive they are to a patient's daily life and well-being. This definition was standardized by the Montreal Consensus in 2006. Symptoms include a painful feeling in the middle of the chest and feeling stomach contents coming back up into the mouth. Other symptoms include chest pain, nausea, difficulty swallowing, painful swallowing, coughing, and hoarseness. Risk factors include obesity, pregnancy, smoking, hiatal hernia, certain medications, and certain foods. Diagnosis is usually based on symptoms and medical history, with further testing only after treatment has been ineffective. Further diagnosis can be achieved by measuring how much acid enters the esophagus or looking into the esophagus with a scope. Treatment and management options include lifestyle modifications, medications, and surgery if there is no improvement with other interventions. Lifestyle modifications include not lying down for three hours after eating, lying down on the left side, elevating head while laying by elevating head of the bed or using extra pillows, losing weight, stopping smoking, and avoiding coffee, mint, alcohol, chocolate, fatty foods, acidic foods, and spicy foods. Medications include antacids, proton pump inhibitors, H2 receptor blockers. Surgery is usually a Nissen fundoplication and is performed by a surgeon. Complications of longstanding GERD can include inflammation of the esophagus that may cause bleeding or ulcer formation, narrowing of the esophagus leading to swallowing issues, a change in the lining of the esophagus that can increase the chances of developing cancer (Barrett's esophagus), chronic cough, asthma, inflammation of the larynx leading to hoarseness, and wearing away of tooth enamel leading to dental issues. Barrett's esophagus A condition in which the lining of the esophagus changes to look more like the lining of the intestine and increases the risk of developing esophageal cancer. There are no specific symptoms although symptoms of GERD may be present for years prior as it is associated with a 10–15% risk of Barrett's esophagus. Risk factors include chronic GERD for more than 5 years, being age 50 or older, being non-Hispanic white, being male, having a family history of this disorder, belly fat, and a history of smoking. Protective factors include H. pylori infection, frequent use of aspirin or other non-steroidal anti-inflammatory drugs, and diets high in fruits and vegetables. Diagnosis can be made by looking into the esophagus with a scope and possibly taking a biopsy of the lining of the esophagus. 
Treatment includes managing GERD, destroying abnormal parts of the esophagus, removing abnormal tissue in the esophagus, and removing part of the esophagus as performed by a general surgeon. Further management could include periodic surveillance with repeat scopes at certain intervals determined by the physician, likely not more frequently than every three to five years. Complications from this disorder can result in a type of cancer called esophageal adenocarcinoma. Education and training United States Gastroenterology is a subspecialty of internal medicine and therefore requires three years of internal medicine residency training followed by three additional years in a dedicated gastroenterology fellowship. This training is certified by the American Board of Internal Medicine (ABIM) and the American Osteopathic Board of Internal Medicine (AOBIM) and must be completed at a program accredited by the Accreditation Council for Graduate Medical Education (ACGME). Other national societies that oversee training include the American College of Gastroenterology (ACG), the American Gastroenterological Association (AGA), and the American Society for Gastrointestinal Endoscopy (ASGE). Scope of practice Gastroenterologists see patients both in the clinic and the hospital setting. They can order diagnostic tests, prescribe medications, and perform a number of diagnostic and therapeutic procedures including colonoscopy, esophagogastroduodenoscopy (EGD), endoscopic retrograde cholangiopancreatography (ERCP), endoscopic ultrasound (EUS), and liver biopsy. Subspecialties Some gastroenterology trainees will complete a "fourth-year" (although this is often their seventh year of graduate medical education) in transplant hepatology, advanced interventional endoscopy, inflammatory bowel disease, motility, or other topics. Advanced endoscopy, sometimes called interventional or surgical endoscopy, is a sub-specialty of gastroenterology that focuses on advanced endoscopic techniques for the treatment of pancreatic, hepatobiliary, and gastrointestinal disease. Interventional gastroenterologists typically undergo an additional year of rigorous training in advanced endoscopic techniques including endoscopic retrograde cholangiopancreatography, endoscopic ultrasound-guided diagnostic and interventional procedures, and advanced resection techniques including endoscopic mucosal resection and endoscopic submucosal dissection. Additionally, the performance of endoscopic bariatric procedures is also performed by some advanced endoscopists. Hepatology, or hepatobiliary medicine, encompasses the study of the liver, pancreas, and biliary tree, and is traditionally considered a sub-specialty of gastroenterology, while proctology encompasses disorders of the anus, rectum, and colon and is considered a sub-specialty of general surgery. Professional organizations American College of Gastroenterology (ACG) - was founded in 1932 by a group of 10 gastroenterologists in New York City and now consists of over 16,000 gastroenterologists from 86 countries. 
The ACG sponsors conferences regionally and nationally, publishes several journals including The American Journal of Gastroenterology, Clinical and Translational Gastroenterology, and ACG Case Reports Journal, hosts continuing medical education (CME) programs, supports initiatives for fellows-in-training, develops and promotes evidence-based guidelines, supports advocacy and public policy, and provides clinical research funding consisting of $27 million in research grants and career development awards ($2.2 million in 2022). American Gastroenterological Association (AGA) - was founded in 1897 and now includes over 16,000 members worldwide. Their mission statement reads "Empowering clinicians and researchers to improve digestive health." The AGA publishes two journals monthly titled Gastroenterology and Clinical Gastroenterology and Hepatology, sponsors an annual meeting called Digestive Disease Week (DDW), provides more than $3 million each year in research grants to over 50 investigators through the AGA Research Foundation Awards Program ($2.56 million to 61 investigators in 2022), develops and promotes evidence-based guidelines, influences public policy through AGA's Congressional Advocates Program and the AGA political action committee (PAC), and supports a variety of educational opportunities including those that qualify for continuing medical education (CME) and maintenance of certification (MOC) credits. American Society for Gastrointestinal Endoscopy (ASGE) - was founded in 1941 and now includes around 15,000 members worldwide. Their mission statement reads "The American Society for Gastrointestinal Endoscopy is the global leader in advancing digestive care through education, advocacy and promotion of excellence and innovation in endoscopy." The ASGE publishes a monthly journal titled Gastrointestinal Endoscopy (GIE), develops and promotes evidence-based guidelines, offers educational resources for its members, and provides advocacy resources for influencing public policy. World Gastroenterology Organisation (WGO) - was founded in 1958 and consists of 119 Member Societies and 4 regional affiliated associations from around the world, which represent a combined 60,000 individuals. The WGO mission statement reads "To promote, to the general public and healthcare professional alike, an awareness of the worldwide prevalence and optimal care of gastrointestinal and liver disorders, and to improve care of these disorders, through the provision of high quality, accessible and independent education and training." The WGO publishes a newsletter titled the electronic World Gastroenterology News (e-WGN), develops global guidelines, engages in advocacy through World Digestive Health Day (WDHD) held yearly on 29 May, and provides educational resources including 23 training centers around the world and a Train the Trainers (TTT) program. British Society of Gastroenterology; United European Gastroenterology. Academic journals The American Journal of Gastroenterology; Clinical Gastroenterology and Hepatology; Endoscopy; Gastroenterology; Gastrointestinal Endoscopy; Gut; Inflammatory Bowel Diseases; Journal of Clinical Gastroenterology; Journal of Crohn's and Colitis; Neurogastroenterology & Motility; World Journal of Gastroenterology
Biology and health sciences
Fields of medicine
null
12984
https://en.wikipedia.org/wiki/Geiger%20counter
Geiger counter
A Geiger counter (also known as a Geiger–Müller counter or G-M counter) is an electronic instrument used for detecting and measuring ionizing radiation. It is widely used in applications such as radiation dosimetry, radiological protection, experimental physics and the nuclear industry. "Geiger counter" is often used generically to refer to any form of dosimeter (or radiation-measuring device), but scientifically, a Geiger counter is only one specific type of dosimeter. It detects ionizing radiation such as alpha particles, beta particles, and gamma rays using the ionization effect produced in a Geiger–Müller tube, which gives its name to the instrument. In wide and prominent use as a hand-held radiation survey instrument, it is perhaps one of the world's best-known radiation detection instruments. The original detection principle was realized in 1908 at the University of Manchester, but it was not until the development of the Geiger–Müller tube in 1928 that the Geiger counter could be produced as a practical instrument. Since then, it has been very popular due to its robust sensing element and relatively low cost. However, there are limitations in measuring high radiation rates and the energy of incident radiation. The Geiger counter is one of the first examples of data sonification. Principle of operation A Geiger counter consists of a Geiger–Müller tube (the sensing element which detects the radiation) and the processing electronics, which display the result. The Geiger–Müller tube is filled with an inert gas such as helium, neon, or argon at low pressure, to which a high voltage is applied. The tube briefly conducts electrical charge when high energy particles or gamma radiation make the gas conductive by ionization. The ionization is considerably amplified within the tube by the Townsend discharge effect to produce an easily measured detection pulse, which is fed to the processing and display electronics. This large pulse from the tube makes the Geiger counter relatively cheap to manufacture, as the subsequent electronics are greatly simplified. The electronics also generate the high voltage, typically 400–900 volts, that has to be applied to the Geiger–Müller tube to enable its operation. This voltage must be carefully selected, as too high a voltage will allow for continuous discharge, damaging the instrument and invalidating the results. Conversely, too low a voltage will result in an electric field that is too weak to generate a current pulse. The correct voltage is usually specified by the manufacturer. To help quickly terminate each discharge in the tube, a small amount of halogen gas or organic material, known as a quenching mixture, is added to the fill gas. Readout There are two types of detected radiation readout: counts and radiation dose. The counts display is the simplest, and shows the number of ionizing events detected, displayed either as a count rate, such as "counts per minute" or "counts per second", or as a total number of counts over a set time period (an integrated total). The counts readout is normally used when alpha or beta particles are being detected. More complex to achieve is a display of radiation dose rate, displayed in units such as the sievert, which is normally used for measuring gamma or X-ray dose rates. A Geiger–Müller tube can detect the presence of radiation, but not its energy, which influences the radiation's ionizing effect.
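Because the tube's pulses carry no energy information, a dose-rate display relies on a calibration factor that relates the detected count rate to dose rate for an energy-compensated tube, determined during instrument calibration. The sketch below only illustrates that conversion step; the factor used is a made-up placeholder, not a datasheet value for any real tube.

```python
# Illustrative conversion of a Geiger-Mueller count rate into an ambient dose rate.
# A real instrument applies a tube- and calibration-specific factor in its firmware;
# the value below is an assumed placeholder for an energy-compensated tube.

CPM_PER_MICROSIEVERT_PER_HOUR = 350.0  # assumed calibration factor for this sketch

def dose_rate_usv_per_h(counts_per_minute: float) -> float:
    """Convert counts per minute to an estimated dose rate in microsieverts/hour."""
    return counts_per_minute / CPM_PER_MICROSIEVERT_PER_HOUR

for cpm in (35, 350, 7000):
    print(f"{cpm:5d} cpm -> {dose_rate_usv_per_h(cpm):6.2f} uSv/h")
```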
Consequently, instruments measuring dose rate require the use of an energy compensated Geiger–Müller tube, so that the dose displayed relates to the counts detected. The electronics will apply known factors to make this conversion, which is specific to each instrument and is determined by design and calibration. The readout can be analog or digital, and modern instruments offer serial communications with a host computer or network. There is usually an option to produce audible clicks representing the number of ionization events detected. This is the distinctive sound associated with handheld or portable Geiger counters. The purpose of this is to allow the user to concentrate on manipulation of the instrument while retaining auditory feedback on the radiation rate. Limitations There are two main limitations of the Geiger counter: Because the output pulse from a Geiger–Müller tube is always of the same magnitude (regardless of the energy of the incident radiation), the tube cannot differentiate between radiation types or measure radiation energy, which prevents it from correctly measuring dose rate. The tube is less accurate at high radiation rates, because each ionization event is followed by a "dead time", an insensitive period during which any further incident radiation does not result in a count. Typically, the dead time will reduce indicated count rates above about 10⁴ to 10⁵ counts per second, depending on the characteristic of the tube being used. While some counters have circuitry which can compensate for this, ion chamber instruments are preferred for measuring very high radiation rates. Types and applications The intended detection application of a Geiger counter dictates the tube design used. Consequently, there are a great many designs, but they can be generally categorized as "end-window", windowless "thin-walled", "thick-walled", and sometimes hybrids of these types. Particle detection The first historical uses of the Geiger principle were to detect α- and β-particles, and the instrument is still used for this purpose today. For α-particles and low energy β-particles, the "end-window" type of Geiger–Müller tube has to be used, as these particles have a limited range and are easily stopped by a solid material. Therefore, the tube requires a window which is thin enough to allow as many as possible of these particles through to the fill gas. The window is usually made of mica with a density of about 1.5–2.0 mg/cm². α-particles have the shortest range, and to detect these the window should ideally be within 10 mm of the radiation source due to α-particle attenuation. However, the Geiger–Müller tube produces a pulse output which is of the same magnitude for all detected radiation, so a Geiger counter with an end-window tube cannot distinguish between α- and β-particles. A skilled operator can use varying distance from a radiation source to differentiate between α- and high energy β-particles. The "pancake" Geiger–Müller tube is a variant of the end-window probe, but designed with a larger detection area to make checking quicker. However, the pressure of the atmosphere against the low pressure of the fill gas limits the window size due to the limited strength of the window membrane. Some β-particles can also be detected by a thin-walled "windowless" Geiger–Müller tube, which has no end-window, but allows high energy β-particles to pass through the tube walls. 
Although the tube walls have a greater stopping power than a thin end-window, they still allow these more energetic particles to reach the fill gas. End-window Geiger counters are still used as a general purpose, portable, radioactive contamination measurement and detection instrument, owing to their relatively low cost, robustness and relatively high detection efficiency; particularly with high energy β-particles. However, for discrimination between α- and β-particles or provision of particle energy information, scintillation counters or proportional counters should be used. Those instrument types are manufactured with much larger detector areas, which means that checking for surface contamination is quicker than with a Geiger counter. Gamma and X-ray detection Geiger counters are widely used to detect gamma radiation and X-rays, collectively known as photons, and for this the windowless tube is used. However, detection efficiency is low compared to alpha and beta particles. The article on the Geiger–Müller tube carries a more detailed account of the techniques used to detect photon radiation. For high energy photons, the tube relies on the interaction of the radiation with the tube wall, usually a material with a high atomic number such as stainless steel of 1–2 mm thickness, to produce free electrons within the tube wall, due to the photoelectric effect. If these migrate out of the tube wall, they enter and ionize the fill gas. This effect increases the detection efficiency because the low-pressure gas in the tube has poorer interaction with higher energy photons than a steel tube. However, as photon energies decrease to low levels, there is greater gas interaction, and the contribution of direct gas interaction increases. At very low energies (less than 25 keV), direct gas ionisation dominates, and a steel tube attenuates the incident photons. Consequently, at these energies, a typical tube design is a long tube with a thin wall which has a larger gas volume, to give an increased chance of direct interaction of a particle with the fill gas. Above these low energy levels, there is a considerable variance in response to different photon energies of the same intensity, and a steel-walled tube employs what is known as "energy compensation" in the form of filter rings around the naked tube, which attempts to compensate for these variations over a large energy range. A steel-walled Geiger–Müller tube is about 1% efficient over a wide range of energies. Neutron detection A variation of the Geiger tube known as a Bonner sphere can be used to exclusively measure radiation dosage from neutrons rather than from gammas by the process of neutron capture. The tube, which can contain the fill gas boron trifluoride or helium-3, is surrounded by a plastic moderator that reduces neutron energies prior to capture. When a capture occurs in the fill gas, the energy released is registered in the detector. Gamma measurement—personnel protection and process control While "Geiger counter" is practically synonymous with the hand-held variety, the Geiger principle is in wide use in installed "area gamma" alarms for personnel protection, as well as in process measurement and interlock applications. The processing electronics of such installations have a higher degree of sophistication and reliability than those of hand-held meters. 
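The dead-time limitation described under Limitations above can be illustrated numerically. The following short Python sketch is not part of the original article: it applies the textbook non-paralyzable dead-time correction n = m / (1 - m·τ), and the 100 µs dead time is an assumed, typical order-of-magnitude value rather than a figure taken from any specific instrument.

def true_count_rate(measured_cps: float, dead_time_s: float = 100e-6) -> float:
    """Estimate the true event rate from the measured count rate using the
    non-paralyzable dead-time model n = m / (1 - m * tau)."""
    loss = measured_cps * dead_time_s
    if loss >= 1.0:
        raise ValueError("measured rate is at or above the saturation limit 1/tau")
    return measured_cps / (1.0 - loss)

# Example: a tube with a 100 microsecond dead time that indicates 5,000 counts
# per second is actually being struck by roughly 10,000 ionizing events per second.
print(round(true_count_rate(5000.0)))  # -> 10000

This also makes clear why, as noted above, indicated rates become unreliable around 10⁴ to 10⁵ counts per second and why ion chamber instruments are preferred at very high radiation rates.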
Physical design For hand-held units there are two fundamental physical configurations: the "integral" unit with both detector and electronics in the same unit, and the "two-piece" design which has a separate detector probe and an electronics module connected by a short cable. In the 1930s a mica window was added to the cylindrical design, allowing low-penetration radiation to pass through with ease. The integral unit allows single-handed operation, so the operator can use the other hand for personal security in challenging monitoring positions, but the two-piece design allows easier manipulation of the detector, and is commonly used for alpha and beta surface contamination monitoring where careful manipulation of the probe is required or the weight of the electronics module would make operation unwieldy. A number of different-sized detectors are available to suit particular situations, such as placing the probe in small apertures or confined spaces. Gamma and X-ray detectors generally use an "integral" design so the Geiger–Müller tube is conveniently within the electronics enclosure. This can easily be achieved because the casing usually has little attenuation, and is employed in ambient gamma measurements where distance from the source of radiation is not a significant factor. However, to facilitate more localised measurements such as "surface dose", the position of the tube in the enclosure is sometimes indicated by targets on the enclosure so an accurate measurement can be made with the tube at the correct orientation and a known distance from the surface. There is a particular type of gamma instrument known as a "hot spot" detector which has the detector tube on the end of a long pole or flexible conduit. These are used to measure high-radiation gamma locations whilst protecting the operator by means of distance shielding. Particle detection of alpha and beta can be used in both integral and two-piece designs. A pancake probe (for alpha/beta) is generally used to increase the area of detection in two-piece instruments whilst being relatively lightweight. In integral instruments using an end-window tube there is a window in the body of the casing to prevent shielding of particles. There are also hybrid instruments which have a separate probe for particle detection and a gamma detection tube within the electronics module. The detectors are switchable by the operator, depending on the radiation type that is being measured. Guidance on application use In the United Kingdom the National Radiological Protection Board issued a user guidance note on selecting the best portable instrument type for the radiation measurement application concerned. This covers all radiation protection instrument technologies and includes a guide to the use of G-M detectors. History In 1908 Hans Geiger, under the supervision of Ernest Rutherford at the Victoria University of Manchester (now the University of Manchester), developed an experimental technique for detecting alpha particles that would later be used to develop the Geiger–Müller tube in 1928. This early counter was only capable of detecting alpha particles and was part of a larger experimental apparatus. The fundamental ionization mechanism used was discovered by John Sealy Townsend between 1897 and 1901, and is known as the Townsend discharge, which is the ionization of molecules by ion impact. 
It was not until 1928 that Geiger and Walther Müller (a PhD student of Geiger) developed the sealed Geiger–Müller tube, which applied the basic ionization principles previously demonstrated experimentally. Small and rugged, it could detect not only alpha and beta radiation, as prior models had done, but also gamma radiation. Now a practical radiation instrument could be produced relatively cheaply, and so the Geiger counter was born. As the tube output required little electronic processing, a distinct advantage in the thermionic valve era due to minimal valve count and low power consumption, the instrument achieved great popularity as a portable radiation detector. Modern versions of the Geiger counter use halogen quench gases, a technique invented in 1947 by Sidney H. Liebson. Halogen compounds have superseded the organic quench gases because of their much longer life and lower operating voltages, typically 400–900 volts.
Technology
Measuring instruments
null
13009
https://en.wikipedia.org/wiki/Galileo%20%28satellite%20navigation%29
Galileo (satellite navigation)
Galileo is a global navigation satellite system (GNSS) created by the European Union through the European Space Agency (ESA) and operated by the European Union Agency for the Space Programme (EUSPA). It is headquartered in Prague, Czechia, with two ground operations centres in Oberpfaffenhofen, Germany (mostly responsible for the control of the satellites), and in Fucino, Italy (mostly responsible for providing the navigation data). The €10 billion project went live in 2016. It is named after the Italian astronomer Galileo Galilei. One of the aims of Galileo is to provide an independent high-precision positioning system so European political and military authorities do not have to rely on the US GPS or the Russian GLONASS systems, which could be disabled or degraded by their operators at any time. The use of basic (lower-precision) Galileo services is free and open to everyone. A fully encrypted higher-precision service is available for free to government-authorized users. Galileo is also to provide a new global search and rescue (SAR) function as part of the MEOSAR system. The first Galileo test satellite, GIOVE-A, was launched on 28 December 2005, while the first satellite to be part of the operational system was launched on 21 October 2011. Galileo started offering Early Operational Capability (EOC) on 15 December 2016, providing initial services with a weak signal. In October 2018, four more Galileo satellites were brought online, increasing the number of active satellites to 18. In November 2018, the FCC approved use of Galileo in the US. As of September 2024, there are 25 launched satellites that operate in the constellation. It is expected that the next generation of satellites will begin to become operational after 2026 to replace the first generation, which can then be used for backup capabilities. The Galileo system has greater accuracy than GPS, with an accuracy of less than 1 m when using broadcast ephemeris (GPS: 3 m) and a signal-in-space ranging error (SISRE) of 1.6 cm (GPS: 2.3 cm) when using real-time corrections for satellite orbits and clocks. History Main objectives In 1999, the different concepts of the three main contributors of the European Space Agency (ESA) (Germany, France and Italy) for Galileo were compared and reduced to one by a joint team of engineers from all three countries. The first stage of the Galileo programme was agreed upon officially on 26 May 2003 by the European Union and the ESA. Unlike the more military-focused systems of the United States (GPS), Russia (GLONASS) and China (BeiDou), the system is intended primarily for civilian use and does not limit accuracy for non-military applications. The European system could be subject to shutdown for military purposes in extreme circumstances (such as an armed conflict). Italy and Germany led the development of the first generation of the Galileo programme, while France is playing a more prominent role in the development of the Galileo Second Generation (G2G). Funding The European Commission had some difficulty funding the project's next stage, after several allegedly "per annum" sales projection graphs for the project were exposed in November 2001 as "cumulative" projections, in which each projected year included all previous years of sales. 
The attention that was brought to this multi-billion euro growing error in sales forecasts resulted in a general awareness in the commission and elsewhere that it was unlikely that the programme would yield the return on investment that had previously been suggested to investors and decision-makers. On 17 January 2002, a spokesman for the project stated that, as a result of US pressure and economic difficulties, "Galileo is almost dead". A few months later, however, the situation changed dramatically. European Union member states decided it was important to have a satellite-based positioning and timing infrastructure that the US could not easily turn off in times of political conflict. The European Union and the European Space Agency agreed in March 2002 to fund the project, pending a review in 2003 (which was completed on 26 May 2003). The starting cost for the period ending in 2005 was estimated at €1.1 billion. The required satellites (the planned number was 30) were to be launched between 2011 and 2014, with the system up and running and under civilian control from 2019. The final cost was estimated at €3 billion, including the infrastructure on Earth, constructed in 2006 and 2007. The plan was for private companies and investors to invest at least two-thirds of the cost of implementation, with the EU and ESA dividing the remaining cost. The base Open Service is to be available without charge to anyone with a Galileo-compatible receiver, with an encrypted higher-bandwidth improved-precision Commercial Service originally planned to be available at a cost, but in February 2018 the high accuracy service (HAS) (providing Precise Point Positioning data on the E6 frequency) was agreed to be made freely available, with the authentication service remaining commercial. By early 2011 costs for the project had run 50% over initial estimates. Tension with the United States Galileo is intended to be an EU civilian GNSS that allows all users access to it. Initially GPS reserved the highest quality signal for military use, and the signal available for civilian use was intentionally degraded (Selective Availability). This changed with President Bill Clinton signing a policy directive in 1996 to turn off Selective Availability. Since May 2000 the same precision signal has been provided to both civilians and the military. Since Galileo was designed to provide the highest possible precision (greater than GPS) to anyone, the US was concerned that an enemy could use Galileo signals in military strikes against the US and its allies (some weapons like missiles use GNSSs for guidance). The frequency initially chosen for Galileo would have made it impossible for the US to block the Galileo signals without also interfering with its own GPS signals. The US did not want to lose its GNSS capability with GPS while denying enemies the use of GNSS. Some US officials became especially concerned when Chinese interest in Galileo was reported. An anonymous EU official claimed that US officials implied that they might consider shooting down Galileo satellites in the event of a major conflict in which Galileo was used in attacks against American forces. The EU's stance is that Galileo is a neutral technology, available to all countries and everyone. At first, EU officials did not want to change their original plans for Galileo, but they have since reached the compromise that Galileo is to use different frequencies. This allows the blocking or jamming of either GNSS without affecting the other. 
GPS and Galileo One of the reasons given for developing Galileo as an independent system was that position information from GPS can be made significantly inaccurate by the deliberate application of universal selective availability (SA) by the US military. GPS is widely used worldwide for civilian applications; Galileo's proponents argued that civil infrastructure, including aircraft navigation and landing, should not rely solely upon a system with this vulnerability. On 2 May 2000, the selective availability was disabled by the President of the United States, Bill Clinton; in late 2001 the entity managing the GPS confirmed that it did not intend to enable selective availability ever again. Though Selective Availability capability still exists, on 19 September 2007 the US Department of Defense announced that newer GPS satellites would not be capable of implementing Selective Availability; the wave of Block IIF satellites launched in 2009, and all subsequent GPS satellites, are stated not to support selective availability. As old satellites are replaced in the GPS Block III programme, selective availability will cease to be an option. The modernisation programme also contains standardised features that allow GPS III and Galileo systems to inter-operate, allowing receivers to be developed to utilise GPS and Galileo together to create an even more accurate GNSS. Cooperation with the United States In June 2004, in a signed agreement with the United States, the European Union agreed to switch to a binary offset carrier modulation 1.1, or BOC(1,1), allowing the coexistence of both GPS and Galileo, and the future combined use of both systems. The European Union also agreed to address the "mutual concerns related to the protection of allied and US national security capabilities". First experimental satellites: GIOVE-A and GIOVE-B The first experimental satellite, GIOVE-A, was launched in December 2005 and was followed by a second test satellite, GIOVE-B, launched in April 2008. After successful completion of the In-Orbit Validation (IOV) phase, additional satellites were launched. On 30 November 2007, the 27 EU transport ministers involved reached an agreement that Galileo should be operational by 2013, but later press releases suggest it was delayed to 2014. Funding again, governance issues In mid-2006, the public-private partnership fell apart, and the European Commission decided to nationalise the Galileo programme. In early 2007, the EU had yet to decide how to pay for the system and the project was said to be "in deep crisis" due to lack of more public funds. German Transport Minister Wolfgang Tiefensee was particularly doubtful about the consortium's ability to end the infighting at a time when only one testbed satellite had been successfully launched. Although a decision was yet to be reached, on 13 July 2007 EU countries discussed cutting €548 million (US$755 million, £370 million) from the union's competitiveness budget for the following year and shifting some of these funds to other parts of the financing pot, a move that could meet part of the cost of the union's Galileo satellite navigation system. European Union research and development projects could be scrapped to overcome a funding shortfall. In November 2007, it was agreed to reallocate funds from the EU's agriculture and administration budgets and to soften the tendering process in order to invite more EU companies. In April 2008, the EU transport ministers approved the Galileo Implementation Regulation. 
This allowed the €3.4 billion to be released from the EU's agriculture and administration budgets to allow the issuing of contracts to start construction of the ground station and the satellites. In June 2009, the European Court of Auditors published a report, pointing out governance issues, substantial delays and budget overruns that led to project stalling in 2007, leading to further delays and failures. In October 2009, the European Commission cut the number of satellites definitively planned from 28 to 22, with plans to order the remaining six at a later time. It also announced that the first OS, PRS and SoL signal would be available in 2013, and the CS and SOL some time later. The €3.4 billion budget for the 2006–2013 period was considered insufficient. In 2010, the think-tank Open Europe estimated the total cost of Galileo from start to 20 years after completion at €22.2 billion, borne entirely by taxpayers. Under the original estimates made in 2000, this cost would have been €7.7 billion, with €2.6 billion borne by taxpayers and the rest by private investors. In November 2009, a ground station for Galileo was inaugurated near Kourou (French Guiana). The launch of the first four in-orbit validation (IOV) satellites was planned for the second half of 2011, and the launch of full operational capability (FOC) satellites was planned to start in late 2012. In March 2010, it was verified that the budget for Galileo would only be available to provide the 4 IOV and 14 FOC satellites by 2014, with no funds then committed to bring the constellation above this 60% capacity. Paul Verhoef, the satellite navigation program manager at the European Commission, indicated that this limited funding would have serious consequences commenting at one point "To give you an idea, that would mean that for three weeks in the year you will not have satellite navigation" in reference to the proposed 18-vehicle constellation. In July 2010, the European Commission estimated further delays and additional costs of the project to grow up to €1.5–1.7 billion, and moved the estimated date of completion to 2018. After completion the system will need to be subsidised by governments at €750 million per year. An additional €1.9 billion was planned to be spent bringing the system up to the full complement of 30 satellites (27 operational + 3 active spares). In December 2010, EU ministers in Brussels voted Prague, in the Czech Republic, as the headquarters of the Galileo project. In January 2011, infrastructure costs up to 2020 were estimated at €5.3 billion. In that same month, Wikileaks revealed that Berry Smutny, the CEO of the German satellite company OHB-System, said that Galileo "is a stupid idea that primarily serves French interests". The BBC learned in 2011 that €500 million (£440 million) would become available to make the extra purchase, taking Galileo within a few years from 18 operational satellites to 24. The first two Galileo In-Orbit Validation satellites were launched by Soyuz ST-B flown from Centre Spatial Guyanais on 21 October 2011, and the remaining two on 12 October 2012. As of 2017, the satellites are fully useful for precise positioning and geodesy with a limited usability in navigation. Twenty-two further satellites with Full Operational Capability (FOC) were on order . The first four pairs of satellites were launched on 22 August 2014, 27 March 2015, 11 September 2015 and 17 December 2015. 
Clock failures In January 2017, news agencies reported that six of the passive hydrogen masers (PHM) and three of the rubidium atomic clocks (RAFS) had failed. Four of the full operational satellites have each lost at least one clock; but no satellite has lost more than two. The operation has not been affected as each satellite is launched with four clocks (2 PHM and 2 RAFS). The possibility of a systemic flaw is being considered. SpectraTime, the Swiss producer of both on-board clock types, declined to comment. According to ESA, they concluded with their industrial partners for the rubidium atomic clocks that some implemented testing and operational measures were required. Additionally some refurbishment is required for the rubidium atomic clocks that still have to be launched. For the passive hydrogen masers operational measures are being studied to reduce the risk of failure. China and India use the same SpectraTime-built atomic clocks in their satellite navigation systems. ESA has contacted the Indian Space Research Organisation (ISRO) who initially reported not having experienced similar failures. However, at the end of January 2017, Indian news outlets reported that all three clocks aboard the IRNSS-1A satellite (launched in July 2013 with a 10-year life expectancy) had failed and that a replacement satellite would be launched in the second half of 2017: these atomic clocks were said to be supplied under a four-million-euro deal. In July 2017, the European Commission reported that the main causes of the malfunctions have been identified and measures have been put in place to reduce the possibility of further malfunctions of the satellites already in space. According to European sources, ESA took measures to correct both identified sets of problems by replacing a faulty component that can cause a short circuit in the rubidium clocks and improve the passive hydrogen maser clocks as well on satellites still to be launched. Outages 2019 From 11 July till 18 July 2019, the whole constellation experienced an "unexplained" signal outage with all active satellites showing "NOT USABLE" status on the Galileo status page. The cause of the incident was an equipment malfunction in the Galileo ground infrastructure that affected the calculation of time and orbit predictions. 2020 On 14 December 2020, starting at 0:00 UTC, Galileo experienced a system-wide performance degradation lasting for 6 hours. GNSS receivers ignoring a 'marginal' status flag in the Galileo data could have experienced a pseudorange error of up to almost 80 km. The problem was related to an abnormal behaviour of a ground segment atomic clock in the time determination function of the system. The system uses parallel functioning Precise Timing Facilities in the Fucino and Oberpfaffenhofen Galileo Control Centres, and an issue occurred in Fucino whilst maintenance was performed on the parallel system in Oberpfaffenhofen. International involvement In September 2003, China joined the Galileo project. China was to invest €230 million (US$302 million, £155 million, CNY 2.34 billion) in the project over the following years. In July 2004, Israel signed an agreement with the EU to become a partner in the Galileo project. On 3 June 2005, the European Union and Ukraine signed an agreement for Ukraine to join the project, as noted in a press release. As of November 2005, Morocco also joined the programme. In September 2005, India signed an agreement with the EU to join the project. 
In mid-2006, the public–private partnership fell apart and the European Commission decided to nationalise Galileo as an EU programme. In November 2006, China opted instead to upgrade the BeiDou navigation system, its then-regional satellite navigation system. The decision was due to security concerns and issues with Galileo financing. On 30 November 2007, the 27 member states of the European Union unanimously agreed to move forward with the project, with plans for bases in Germany and Italy. Spain did not approve during the initial vote, but approved it later that day. This greatly improved the viability of the Galileo project: "The EU's executive had previously said that if agreement was not reached by January 2008, the long-troubled project would essentially be dead". On 3 April 2009, Norway too joined the programme, pledging €68.9 million toward development costs and allowing its companies to bid for the construction contracts. Norway, while not a member of the EU, is a member of ESA. On 18 December 2013, Switzerland signed a cooperation agreement to fully participate in the programme, and retroactively contributed €80 million for the period 2008–2013. As a member of ESA, it already collaborated in the development of the Galileo satellites, contributing the hydrogen-maser clocks. Switzerland's financial commitment for the period 2014–2020 will be calculated in accordance with the standard formula applied for the Swiss participation in the EU research Framework Programme. In March 2018, the European Commission announced that the United Kingdom might be excluded from parts of the project (especially those relating to the secured service, PRS) following its exit from the European Union (EU). As a result, Airbus was to relocate work on the Ground Control Segment (GCS) from its Portsmouth premises to an EU state. British officials have been reported to be seeking legal advice on whether they can reclaim the €1.4 billion invested by the United Kingdom, of the €10 billion spent to date. In a speech at the EU Institute for Security Studies conference, the EU Chief Negotiator in charge of the Brexit negotiations, Michel Barnier, stressed the EU position that the UK had decided to leave the EU and thus all EU programmes, including Galileo. In August 2018, the UK stated that it would look into creating a competing satellite navigation system to Galileo post-Brexit. In December 2018, British Prime Minister Theresa May announced that the UK would no longer seek to reclaim the investment, and Science Minister Sam Gyimah resigned over the matter. 
System description Space segment As of 2012, the system was scheduled to have 15 satellites operational in 2015 and reach full operation in 2020 with the following specifications: 30 in-orbit spacecraft (24 in full service and 6 spares) Orbital altitude: (MEO) Orbital period: 14 hours and 5 minutes (every 17 revolutions, done in 10 sidereal days, a satellite passes over the same location) 3 orbital planes, 56.0° inclination, ascending nodes separated by 120.0° longitude (8 operational satellites and 2 active spares per orbital plane) Satellite lifetime: >12 years Satellite mass: Satellite body dimensions: Span of solar arrays: Power of solar arrays: 1.5 kW (end of life) Power of navigation antennas: 155–265 W Ground segment The system's orbit and signal accuracy is controlled by a ground segment consisting of: Two ground control centres, located in Oberpfaffenhofen and Fucino for Satellite and Mission Control Seven telemetry, tracking & control (TT&C) stations, located in Kiruna, 2x Kourou, Nouméa, Réunion, Redu and Papeete Ten mission data uplink stations (ULS), two per site, located in Svalbard, Kourou, Papeete, Sainte-Marie, Réunion and Nouméa Several worldwide distributed reference sensor stations (GSS), including one in the Kerguelen Islands A data dissemination network between all geographically distributed locations One service centre, located in Madrid, to help Galileo users. Signals The system transmits three signals: E1 (1575.42 MHz), E5 (1191.795 MHz) consisting of E5a (1176.45 MHz) and E5b (1207.14 MHz), and E6 (1278.75 MHz): Services The Galileo system will have four main services: Open Service (OS)This will be available without charge for use by anyone with appropriate mass-market equipment; simple timing, and positioning down to 1 m for a double frequency receiver, best case. High Accuracy Service (HAS; resulting from the re-scope of the former Galileo Commercial Service) Accuracy to 20 cm free of charge. Public Regulated Service (PRS; encrypted) Designed to be more robust, with anti-jamming mechanisms and reliable problem detection. Limited to authorized governmental bodies. Search and Rescue Service (SAR) The Galileo SAR Service is a Medium Earth Orbiting Search and Rescue (MEOSAR) service and part of the International Cospas-Sarsat Programme. Quarterly Service Performance Reports The European GNSS Service Centre provides public quarterly performance reports regarding the Open Service and Search and Rescue Service since 2017. Generally, the reported performance parameters measurements surpass the target values. The Galileo April, May, June 2021 Quarterly Open Service Performance Report by the European GNSS Service Centre reported the UTC Time Dissemination Service Accuracy was ≤ 4.3 nanoseconds, computed by accumulating samples over the previous 12 months and exceeding the ≤ 30 ns target value. The Signal In Space Error (SISE) was also well within the ≤  target value for Single and (more accurate) Dual Frequency receivers. The Galileo navigation message includes the differences between Galileo System Time (GST), UTC and GPS Time (GPST) (to promote interoperability). The Galileo April, May, June 2021 Quarterly Search and Rescue Service Performance Report by the European GNSS Service Centre reported the various performance parameters measurements surpassed their target values. Concept Each Galileo satellite has two master passive hydrogen maser atomic clocks and two secondary rubidium atomic clocks which are independent of one other. 
As precise and stable space-qualified atomic clocks are critical components of any satellite-navigation system, the employed quadruple redundancy keeps Galileo functioning when onboard atomic clocks fail in space. The onboard passive hydrogen maser clocks' precision is four times better than that of the onboard rubidium atomic clocks and is estimated at 1 second per 3 million years (a timing error of one nanosecond, or one billionth (10⁻⁹) of a second, translates into a 30 cm positional error on Earth's surface), and will provide an accurate timing signal to allow a receiver to calculate the time that it takes the signal to reach it. The Galileo satellites are configured to run one hydrogen maser clock in primary mode and a rubidium clock as hot backup. Under normal conditions, the operating hydrogen maser clock produces the reference frequency from which the navigation signal is generated. Should the hydrogen maser encounter any problem, an instantaneous switchover to the rubidium clock would be performed. In case of a failure of the primary hydrogen maser the secondary hydrogen maser could be activated by the ground segment to take over within a period of days as part of the redundant system. A clock monitoring and control unit provides the interface between the four clocks and the navigation signal generator unit (NSU). It passes the signal from the active hydrogen maser clock to the NSU and also ensures that the frequencies produced by the master clock and the active spare are in phase, so that the spare can take over instantly should the master clock fail. The NSU information is used to calculate the position of the receiver by trilaterating the difference in received signals from multiple satellites. The onboard passive hydrogen maser and rubidium clocks are very stable over a few hours. If they were left to run indefinitely, though, their timekeeping would drift, so they need to be synchronized regularly with a network of even more stable ground-based reference clocks. These include active hydrogen maser clocks and clocks based on the caesium frequency standard, which show a far better medium- and long-term stability than rubidium or passive hydrogen maser clocks. These clocks on the ground are gathered together within the parallel functioning Precise Timing Facilities in the Fucino and Oberpfaffenhofen Galileo Control Centres. The ground-based clocks also generate a worldwide time reference called Galileo System Time (GST), the standard for the Galileo system, and are routinely compared to the local realisations of UTC, the UTC(k) of the European frequency and time laboratories. For more information on the concept of global satellite navigation systems, see GNSS and GNSS positioning calculation. European GNSS Service Centre The European GNSS Service Centre (GSC), located in Madrid, is an integral part of Galileo and provides the single interface between the Galileo system and Galileo users. GSC publishes Galileo official documentation, promotes Galileo current and future services worldwide, supports standardisation and distributes Galileo almanacs, ephemeris and metadata. The GSC User Helpdesk is the point of contact for Galileo user assistance. GSC answers queries and gathers incident notifications from users on Galileo. The helpdesk is continuously available for all worldwide Galileo users through the GSC web portal. GSC provides updated Galileo constellation status and informs on planned and unplanned events through Notice Advisory to Galileo Users (NAGU). 
GSC publishes Galileo reference documentation and general information on Galileo services and signals description and Galileo performance reports. Search and rescue Galileo provides a global search and rescue (SAR) function as part of the MEOSAR system. Like Russia's Glonass, the United States' Global Positioning System (GPS) satellites, and some Chinese BeiDou satellites, Galileo satellites are equipped with a transponder which relays 406 MHz distress frequency signals from emergency beacons by a Forward Link Service (FLS) to the Rescue coordination centre, which will then initiate a rescue operation. After receipt of an emergency beacon signal, the Galileo SAR system provides a signal, the Return Link Message (RLM), to the emergency beacon, informing the person(s) in distress that the activated beacon has been detected and help is on the way. This return message feature is new in a satellite constellation and is considered a major upgrade compared to the existing Cospas-Sarsat system, which up to then did not provide feedback to the user. Tests in February 2014 found that for Galileo's search and rescue function, operating as part of the existing International Cospas-Sarsat Programme, 77% of simulated distress locations can be pinpointed within , and 95% within . The Galileo Return Link Service (RLS) went live in January 2020 for all RLS capable emergency beacons. Constellation Galileo satellite test beds: GIOVE In 2004, the Galileo System Test Bed Version 1 (GSTB-V1) project validated the on-ground algorithms for Orbit Determination and Time Synchronisation (OD&TS). This project, led by ESA and European Satellite Navigation Industries, has provided industry with fundamental knowledge to develop the mission segment of the Galileo positioning system. GIOVE-A is the first GIOVE (Galileo In-Orbit Validation Element) test satellite. It was built by Surrey Satellite Technology Ltd (SSTL), and successfully launched on 28 December 2005 by the European Space Agency and the Galileo Joint Undertaking (GJU). Operation of GIOVE-A ensured that Galileo meets the frequency-filing allocation and reservation requirements for the International Telecommunication Union (ITU), a process that was required to be complete by June 2006. GIOVE-B, built by Astrium and Thales Alenia Space, has a more advanced payload than GIOVE-A. It was successfully launched on 27 April 2008 at 22:16 UTC aboard a Soyuz-FG/Fregat rocket provided by Starsem. A third satellite, GIOVE-A2, was originally planned to be built by SSTL for launch in the second half of 2008. Construction of GIOVE-A2 was terminated due to the successful launch and in-orbit operation of GIOVE-B. The GIOVE Mission segment operated by European Satellite Navigation Industries used the GIOVE-A/B satellites to provide experimental results based on real data to be used for risk mitigation for the IOV satellites that followed on from the testbeds. ESA organised the global network of ground stations to collect the measurements of GIOVE-A/B with the use of the GETR receivers for further systematic study. GETR receivers are supplied by Septentrio as well as the first Galileo navigation receivers to be used to test the functioning of the system at further stages of its deployment. Signal analysis of GIOVE-A/B data confirmed successful operation of all the Galileo signals with the tracking performance as expected. 
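The Concept section above notes that a one-nanosecond clock error corresponds to roughly 30 cm of ranging error, and that a receiver computes its position from signals received from several satellites. The short Python sketch below illustrates both ideas with a generic textbook GNSS least-squares solve; it is not the operational Galileo processing, and the satellite coordinates and the 100 m receiver clock bias are invented values for a synthetic check.

import numpy as np

C = 299_792_458.0  # speed of light in m/s: 1 ns of timing error ~ 0.3 m of range error
print(C * 1e-9)    # -> approximately 0.2998 m

def solve_position(sat_positions, pseudoranges, iterations=10):
    # Gauss-Newton solve for receiver position (x, y, z) and clock bias, all in metres.
    x = np.zeros(4)  # initial guess: Earth's centre, zero clock bias
    for _ in range(iterations):
        ranges = np.linalg.norm(sat_positions - x[:3], axis=1)
        residuals = pseudoranges - (ranges + x[3])
        # Jacobian rows: negative unit line-of-sight vectors, plus 1 for the clock-bias term
        J = np.hstack([-(sat_positions - x[:3]) / ranges[:, None],
                       np.ones((len(pseudoranges), 1))])
        x += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return x

# Synthetic check: four satellites at MEO-like distances, a receiver near Earth's
# surface, and a receiver clock bias equivalent to 100 m.
truth = np.array([4.0e6, 3.0e6, 4.0e6])
sats = np.array([[15e6, 10e6, 20e6], [-12e6, 18e6, 16e6],
                 [20e6, -15e6, 12e6], [-10e6, -10e6, 22e6]])
rho = np.linalg.norm(sats - truth, axis=1) + 100.0
print(solve_position(sats, rho))  # ~[4.0e6, 3.0e6, 4.0e6, 100.0]

In practice many more error terms (satellite clock corrections, ionospheric and tropospheric delays, relativistic effects) are handled before such a solve, but the sketch shows why stable clocks and at least four visible satellites are essential.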
In-Orbit Validation (IOV) satellites These testbed satellites were followed by four IOV Galileo satellites that are much closer to the final Galileo satellite design. The search and rescue (SAR) feature is also installed. The first two satellites were launched on 21 October 2011 from Centre Spatial Guyanais using a Soyuz launcher, the other two on 12 October 2012. This enables key validation tests, since earth-based receivers such as those in cars and phones need to "see" a minimum of four satellites in order to calculate their position in three dimensions. Those 4 IOV Galileo satellites were constructed by Astrium GmbH and Thales Alenia Space. On 12 March 2013, a first fix was performed using those four IOV satellites. Once this In-Orbit Validation (IOV) phase has been completed, the remaining satellites will be installed to reach Full Operational Capability. Full Operational Capability (FOC) satellites FOC Batch 1 On 7 January 2010, it was announced that the contract to build the first 14 FOC satellites was awarded to OHB System and, for the navigation payload, to Surrey Satellite Technology Limited (SSTL). The first batch of Galileo First Generation satellites, known as "Batch-1", consists of the Galileo-FOC FM1 to Galileo-FOC FM14 satellites. Fourteen satellites were built at a cost of €566 million (£510 million; US$811 million). Arianespace will launch the satellites for a cost of €397 million (£358 million; US$569 million). The European Commission also announced that the €85 million contract for system support covering industrial services required by ESA for integration and validation of the Galileo system had been awarded to Thales Alenia Space. Thales Alenia Space subcontracted performance to Astrium GmbH and security to Thales Communications. FOC Batch 2 In February 2012, an additional order of 8 FOC satellites was awarded to OHB Systems for €250 million (US$327 million), after outbidding EADS Astrium's tender offer. The second batch of Galileo First Generation satellites, known as "Batch-2", consists of the Galileo-FOC FM15 to Galileo-FOC FM22 satellites, thus bringing the total to 22 FOC satellites. The satellites were built by OHB, with the contribution of Surrey Satellite Technology (SSTL). FOC Batch 3 In June and October 2017, two additional orders for 8 and 4 FOC satellites were awarded to OHB Systems for €324 million and €157.75 million. This third and final batch of Galileo First Generation satellites, known as "Batch-3", consists of the Galileo-FOC FM23 to Galileo-FOC FM34 satellites. The satellites are being built by OHB in Bremen, Germany, with the contribution of Surrey Satellite Technology (SSTL) in Guildford, United Kingdom. When completed, Batch-3 brings the total to 34 FOC satellites. FOC launches On 7 May 2014, the first two FOC satellites arrived in French Guiana for their joint launch planned for that summer. Originally planned for launch during 2013, the satellites were delayed by a year because of problems with tooling and establishing the production line for serial assembly of Galileo satellites. These two satellites (Galileo satellites GSAT-201 and GSAT-202) were launched on 22 August 2014. The names of these satellites are Doresa and Milena, named after European children who had previously won a drawing contest. On 23 August 2014, launch service provider Arianespace announced that flight VS09 had experienced an anomaly and the satellites were injected into an incorrect orbit. They ended up in elliptical orbits and thus could not be used for navigation. 
However, it was later possible to use them to perform a physics experiment, so they were not a complete loss. Satellites GSAT-203 (Adam) and GSAT-204 (Anastasia) were launched successfully on 27 March 2015 from the Guiana Space Centre using a four-stage Soyuz launcher. Satellites GSAT-205 (Alba) and GSAT-206 (Oriana) were launched successfully on 11 September 2015 from the Guiana Space Centre using a four-stage Soyuz launcher. Satellites GSAT-208 (Liene) and GSAT-209 (Andriana) were successfully launched from Kourou, French Guiana, using the four-stage Soyuz launcher on 17 December 2015. Satellites GSAT-210 (Daniele) and GSAT-211 (Alizée) were launched on 24 May 2016. Starting in November 2016, deployment of the last twelve satellites used a modified Ariane 5 launcher, named Ariane 5 ES, capable of placing four Galileo satellites into orbit per launch. Satellites GSAT-207 (Antonianna), GSAT-212 (Lisa), GSAT-213 (Kimberley) and GSAT-214 (Tijmen) were successfully launched from Kourou, French Guiana, on 17 November 2016 on an Ariane 5 ES. On 15 December 2016, Galileo started offering Initial Operational Capability (IOC). The services currently offered are the Open Service, Public Regulated Service and Search and Rescue Service. The first Batch-2 satellites, GSAT-215 (Nicole), GSAT-216 (Zofia), GSAT-217 (Alexandre) and GSAT-218 (Irina), were successfully launched from Kourou, French Guiana, on 12 December 2017 on an Ariane 5 ES. Satellites GSAT-219 (Tara), GSAT-220 (Samuel), GSAT-221 (Anna) and GSAT-222 (Ellen) were successfully launched from Kourou, French Guiana, on 25 July 2018 on an Ariane 5 ES. The first Batch-3 satellites, GSAT-223 (Nikolina) and GSAT-224 (Shriya), were successfully launched from Kourou, French Guiana, on 5 December 2021 on a four-stage Soyuz launcher. Shriya successfully joined the constellation on 29 August 2022. Second generation (G2G) satellites During 2014, ESA and its industry partners began studies on Galileo Second Generation (G2G) satellites, which were to be presented to the EC for the late 2020s launch period. One idea was to employ electric propulsion, which would eliminate the need for an upper stage during launch and allow satellites from a single batch to be inserted into more than one orbital plane. The new generation satellites are expected to be available by 2025 and will serve to augment the existing network. On 20 January 2021, the European Commission announced that it had awarded a €1.47 billion contract to Thales Alenia Space (TAS) and Airbus Defence and Space for six spacecraft from each manufacturer. The signing of the contracts with Thales Alenia Space and Airbus Defence and Space, scheduled for 29 January 2021, was suspended by the European Court of Justice following a protest filed by OHB SE, the losing bidder. The OHB protest at the ECJ's General Court is based on "allegations of theft of trade secrets", and seeks both a suspension of the contract signatures and the cancellation of the contract award. In May 2021 ESA reported it had signed the contracts to design and build the first batch of Galileo Second Generation (G2G) satellites with Thales Alenia Space and Airbus Defence and Space. The 12 G2G satellites will feature a fully digital navigation payload, electric propulsion, enhanced navigation signals and capabilities, inter-satellite links and reconfigurability in space. The number of atomic clocks will increase from four to six. The increase in payload will result in a satellite mass of approximately 2,300 kg. The design life is extended from 12 years to 15 years. 
Applications and impact Science projects using Galileo In July 2006, an international consortium of universities and research institutions embarked on a study of potential scientific applications of the Galileo constellation. This project, named GEO6, is a broad study oriented to the general scientific community, aiming to define and implement new applications of Galileo. Among the various GNSS users identified by the Galileo Joint Undertaking, the GEO6 project addresses the Scientific User Community (UC). The GEO6 project aims at fostering novel applications of GNSS signals, and particularly of Galileo, within the scientific UC. The AGILE project is an EU-funded project devoted to the study of the technical and commercial aspects of location-based services (LBS). It includes technical analysis of the benefits brought by Galileo (and EGNOS) and studies the hybridisation of Galileo with other positioning technologies (network-based, WLAN, etc.). Within these projects, some pilot prototypes were implemented and demonstrated. On the basis of the potential number of users, potential revenues for the Galileo Operating Company or Concessionaire (GOC), international relevance, and level of innovation, a set of Priority Applications (PA) will be selected by the consortium and developed within the time-frame of the same project. These applications will help to increase and optimise the use of the EGNOS services and the opportunities offered by the Galileo Signal Test-Bed (GSTB-V2) and the Galileo (IOV) phase. All Galileo satellites are equipped with laser retroreflector arrays which allow them to be tracked by the stations of the International Laser Ranging Service. Satellite laser ranging to Galileo satellites is used for the validation of satellite orbits, determination of Earth rotation parameters and for combined solutions incorporating laser and microwave observations. Receivers All major GNSS receiver chips support Galileo and hundreds of end-user devices are compatible with Galileo. The first dual-frequency-GNSS-capable Android devices, which track more than one radio signal from each satellite, the E1 and E5a frequencies in the case of Galileo, were the Huawei Mate 20 line, Xiaomi Mi 8, Xiaomi Mi 9 and Xiaomi Mi MIX 3. There were more than 140 Galileo-enabled smartphones on the market, of which 9 were dual-frequency enabled. An extensive list of enabled devices, for various uses on land, at sea and in the air, is frequently updated at the EU website. On 24 December 2018, the European Commission passed a mandate for all new smartphones to implement Galileo for E112 support. Effective from 1 April 2018, all new vehicles sold in Europe must support eCall, an automatic emergency response system that dials 112 and transmits Galileo location data in the event of an accident. Until late 2018, Galileo was not authorized for use in the United States and, as a consequence, worked only variably on devices that could receive Galileo signals within United States territory. The Federal Communications Commission's (FCC) position on the matter was (and remains) that non-GPS radio navigation satellite systems (RNSS) receivers must be granted a licence to receive said signals. A waiver of this requirement for Galileo was requested by the EU and submitted in 2015, and on 6 January 2017, public comment on the matter was requested. On 15 November 2018, the FCC granted the requested waiver, explicitly allowing non-federal consumer devices to access the Galileo E1 and E5 frequencies. 
However, most devices, including smartphones still require operating system updates or similar updates to allow the use of Galileo signals within the United States (most smartphones since the Apple iPhone 6S and Samsung Galaxy S7 have the hardware capability, and simply require a software modification). Coins The European Satellite Navigation project was selected as the main motif of a very high-value collectors' coin: the Austrian European Satellite Navigation commemorative coin, minted on 1 March 2006. The coin has a silver ring and gold-brown niobium "pill". In the reverse, the niobium portion depicts navigation satellites orbiting the Earth. The ring shows different modes of transport, for which satellite navigation was developed: an aircraft, a car, a lorry, a train and a container ship.
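As a complement to the dual-frequency receivers mentioned under Receivers above, the following Python sketch shows the standard first-order ionosphere-free pseudorange combination, one reason dual-frequency (E1 plus E5a) tracking improves accuracy. It is a generic GNSS textbook formula rather than Galileo-specific processing; the frequencies are the Galileo values listed under Signals, and the pseudorange figures below are invented purely for illustration.

F_E1 = 1575.42e6   # Galileo E1 carrier frequency, Hz
F_E5A = 1176.45e6  # Galileo E5a carrier frequency, Hz

def ionosphere_free(p_e1: float, p_e5a: float) -> float:
    """Combine E1 and E5a pseudoranges (metres) so that the first-order
    ionospheric delay, which scales as 1/f**2, cancels."""
    a = F_E1 ** 2
    b = F_E5A ** 2
    return (a * p_e1 - b * p_e5a) / (a - b)

# Hypothetical measurement: a 23,000 km geometric range plus a frequency-dependent
# ionospheric delay of 3.0 m on E1 (and correspondingly larger on E5a).
geometric = 23_000_000.0
iono_e1 = 3.0
iono_e5a = iono_e1 * (F_E1 / F_E5A) ** 2
print(ionosphere_free(geometric + iono_e1, geometric + iono_e5a))  # ~23,000,000.0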
Technology
Navigation
null
13026
https://en.wikipedia.org/wiki/Gray%20whale
Gray whale
The gray whale (Eschrichtius robustus), also known as the grey whale, is a baleen whale that migrates between feeding and breeding grounds yearly. It reaches a length of , a weight of up to and lives between 55 and 70 years, although one female was estimated to be 75–80 years of age. The common name of the whale comes from the gray patches and white mottling on its dark skin. Gray whales were once called devil fish because of their fighting behavior when hunted. The gray whale is the sole living species in the genus Eschrichtius. It is the sole living genus in the family Eschrichtiidae, however some recent studies classify it as a member of the family Balaenopteridae. This mammal is descended from filter-feeding whales that appeared during the Neogene. The gray whale is distributed in a Northeast Pacific (North American), and an endangered Northwest Pacific (Asian), population. North Atlantic populations were extirpated (perhaps by whaling) on the European coast before 500 CE, and on the American and African Atlantic coasts around the late 17th to early 18th centuries. However, in the 2010s and 2020s there have been rare sightings of gray whales in the North Atlantic, Mediterranean, and even off South Atlantic coasts. Taxonomy The gray whale is traditionally placed as the only living species in its genus and family, Eschrichtius and Eschrichtiidae, but an extinct species was discovered and placed in the genus in 2017, the Akishima whale (E. akishimaensis). Some recent studies place gray whales as being outside the rorqual clade, but as the closest relatives to the rorquals. But other recent DNA analyses have suggested that certain rorquals of the family Balaenopteridae, such as the humpback whale, Megaptera novaeangliae, and fin whale, Balaenoptera physalus, are more closely related to the gray whale than they are to some other rorquals, such as the minke whales. The American Society of Mammalogists has followed this classification. John Edward Gray placed it in its own genus in 1865, naming it in honour of physician and zoologist Daniel Frederik Eschricht. The common name of the whale comes from its coloration. The subfossil remains of now extinct gray whales from the Atlantic coasts of England and Sweden were used by Gray to make the first scientific description of a species then surviving only in Pacific waters. The living Pacific species was described by Cope as Rhachianectes glaucus in 1869. Skeletal comparisons showed the Pacific species to be identical to the Atlantic remains in the 1930s, and Gray's naming has been generally accepted since. Although identity between the Atlantic and Pacific populations cannot be proven by anatomical data, its skeleton is distinctive and easy to distinguish from that of all other living whales. Many other names have been ascribed to the gray whale, including desert whale, devilfish, gray back, mussel digger and rip sack. The name Eschrichtius gibbosus is sometimes seen; this is dependent on the acceptance of a 1777 description by Erxleben. Taxonomic history A number of 18th century authors described the gray whale as Balaena gibbosa, the "whale with six bosses", apparently based on a brief note by : The gray whale was first described as a distinct species by based on a subfossil found in the brackish Baltic Sea, apparently a specimen from the now extinct north Atlantic population. Lilljeborg, however, identified it as "Balaenoptera robusta", a species of rorqual. 
realized that the rib and scapula of the specimen was different from those of any known rorquals, and therefore erected a new genus for it, Eschrichtius. were convinced that the bones described by Lilljeborg could not belong to a living species but that they were similar to fossils that Van Beneden had described from the harbour of Antwerp (most of his named species are now considered nomina dubia) and therefore named the gray whale Plesiocetus robustus, reducing Lilljeborg's and Gray's names to synonyms. Charles Melville Scammon produced one of the earliest descriptions of living Pacific gray whales, and notwithstanding that he was among the whalers who nearly drove them to extinction in the lagoons of the Baja California Peninsula, they were and still are associated with him and his description of the species. At this time, however, the extinct Atlantic population was considered a separate species (Eschrischtius robustus) from the living Pacific population (Rhachianectes glaucus). Things got increasingly confused as 19th century scientists introduced new species at an alarming rate (e.g. Eschrichtius pusillus, E. expansus, E. priscus, E. mysticetoides), often based on fragmentary specimens, and taxonomists started to use several generic and specific names interchangeably and not always correctly (e.g. Agalephus gobbosus, Balaenoptera robustus, Agalephus gibbosus). Things got even worse in the 1930s when it was finally realised that the extinct Atlantic population was the same species as the extant Pacific population, and the new combination Eschrichtius gibbosus was proposed. Description The gray whale has a dark slate-gray color and is covered by characteristic gray-white patterns, which are scars left by parasites that drop off in its cold feeding grounds. Individual whales are typically identified using photographs of their dorsal surface, matching the scars and patches associated with parasites that have either fallen off or are still attached. They have two blowholes on top of their head, which can create a distinctive heart-shaped blow at the surface in calm wind conditions. Gray whales measure from in length for newborns to for adults (females tend to be slightly larger than adult males). Newborns are a darker gray to black in color. A mature gray whale can reach , with a typical range of , making them the ninth largest sized species of cetacean. Notable features that distinguish the gray whale from other mysticetes include its baleen that is variously described as cream, off-white, or blond in color and is unusually short. Small depressions on the upper jaw each contain a lone stiff hair, but are only visible on close inspection. Its head's ventral surface lacks the numerous prominent furrows of the related rorquals, instead bearing two to five shallow furrows on the throat's underside. The gray whale also lacks a dorsal fin, instead bearing 6 to 12 dorsal crenulations ("knuckles"), which are raised bumps on the midline of its rear quarter, leading to the flukes. This is known as the dorsal ridge. The tail itself is across and deeply notched at the center while its edges taper to a point. Pacific groups The two populations of Pacific gray whales (east and west) are morphologically and phylogenically different. 
Beyond DNA, differences in the proportions of several body parts, body color, skeletal features, and the length ratios of flippers and baleen plates have been confirmed between the eastern and western populations, and some researchers claim that the original eastern and western groups could have been much more distinct than previously thought, perhaps enough to be counted as subspecies. Since the original Asian and Atlantic populations have become extinct, it is difficult to determine the unique features among whales in these stocks. However, there have been observations of some whales showing distinctive, blackish body colors in recent years. This corresponds with the DNA analysis of the last recorded stranding in China. Differences were also observed between Korean and Chinese specimens. Populations North Pacific Two Pacific Ocean populations are known to exist: one very small population whose migratory route is presumed to be between the Sea of Okhotsk and southern Korea, and a larger one of about 27,000 individuals in the eastern Pacific traveling between the waters off northernmost Alaska and Baja California Sur. Mothers make this journey accompanied by their calves, usually hugging the shore in shallow kelp beds, and fight viciously to protect their young if they are attacked, earning gray whales the moniker devil fish. The western population has had a very slow growth rate despite heavy conservation action over the years, likely due to its very slow reproduction rate. The population hit an all-time low in 2010, when no new reproductive females were recorded, resulting in a minimum of 26 reproductive females being observed since 1995. Even a very small number of additional annual female deaths will cause the subpopulation to decline. However, as of 2018, evidence has indicated that the western population is markedly increasing in number, especially off Sakhalin Island. Following this, the IUCN downlisted the population's conservation status from critically endangered to endangered. North Atlantic The gray whale became extinct in the North Atlantic in the 18th century. The species had been a seasonal migrant to coastal waters on both sides of the Atlantic, including the Baltic Sea, the Wadden Sea, the Gulf of St. Lawrence, the Bay of Fundy, Pamlico Sound and possibly Hudson Bay. Radiocarbon dating of subfossil or fossil European (Belgium, the Netherlands, Sweden, the United Kingdom) coastal remains confirms this, with whaling the possible cause for the population's extinction. Remains dating from the Roman epoch were found in the Mediterranean during excavation of the antique harbor of Lattara near Montpellier, France, in 1997, raising the question of whether Atlantic gray whales migrated up and down the coast of Europe from the Wadden Sea to calve in the Mediterranean. A 2018 study utilizing ancient DNA barcoding and collagen peptide matrix fingerprinting confirmed that Roman-era whale bones east of the Strait of Gibraltar were from gray whales (and North Atlantic right whales), demonstrating that gray whales once ranged into the Mediterranean. Similarly, radiocarbon dating of American east coast subfossil remains confirms that gray whales existed there at least through the 17th century. This population ranged at least from Southampton, New York, to Jupiter Island, Florida, the latest remains dating from 1675.
In his 1835 history of Nantucket Island, Obed Macy wrote that in the early pre-1672 colony a whale of the kind called "scragg" entered the harbor and was pursued and killed by the settlers. A. B. Van Deinse points out that the "scrag whale", described by P. Dudley in 1725 as one of the species hunted by the early New England whalers, was almost certainly the gray whale. Since the 2010s, there have been occasional sightings of gray whales in the Atlantic Ocean and in the Mediterranean Sea, including one off the coast of Israel and one off the coast of Namibia. These were presumably migrants from the North Pacific population through the Arctic Ocean. A 2015 study of DNA from subfossil gray whales indicated that this may not be a historically unique event. That study suggested that over the past 100,000 years there have been several migrations of gray whales between the Pacific and Atlantic, with the most recent large scale migration of this sort occurring about 5,000 years ago. These migrations corresponded to times of relatively high temperatures in the Arctic Ocean. In 2021, one individual was seen in the port of Rabat, Morocco, followed by sightings in Algeria and Italy. In March 2024, New England Aquarium researchers photographed a gray whale south of Nantucket, Massachusetts. Prewhaling abundance Researchers used a genetic approach to estimate pre-whaling abundance based on samples from 42 gray whales, and reported DNA variability at 10 genetic loci consistent with a population size of 76,000–118,000 individuals, three to five times larger than the average census size as measured through 2007. The National Oceanic and Atmospheric Administration has collected surveys of gray whale population since at least the 1960s. They state that "the most recent population estimate [from 2007] was approximately 19,000 whales, with a high probability (88%) that the population is at 'optimum sustainable population' size, as defined by the Marine Mammal Protection Act." They speculate that the ocean ecosystem has likely changed since the prewhaling era, making a return to prewhaling numbers infeasible. Factors limiting or threatening current population levels include ship strikes, entanglement in fishing gear, and changes in sea-ice coverage associated with climate change. Integration and recolonization Several whales seen off Sakhalin and on Kamchatka Peninsula have been confirmed to migrate towards eastern side of Pacific and join the larger eastern population. In January 2011, a gray whale that had been tagged in the western population was tracked as far east as the eastern population range off the coast of British Columbia. Recent findings from either stranded or entangled specimens indicate that the original western population have become functionally extinct, and possibly all the whales that have appeared on Japanese and Chinese coasts in modern times are vagrants or re-colonizers from the eastern population. In mid-1980, there were three gray whale sightings in the eastern Beaufort Sea, placing them further east than their known range at the time. Recent increases in sightings are confirmed in Arctic areas of the historic range for Atlantic stocks, most notably on several locations in the Laptev Sea including the New Siberian Islands in the East Siberian Sea, and around the marine mammal sanctuary of the Franz Josef Land, indicating possible earlier pioneers of re-colonizations. These whales were darker in body color than those whales seen in Sea of Okhotsk. 
In May 2010, a gray whale was sighted off the Mediterranean shore of Israel. It has been speculated that this whale crossed from the Pacific to the Atlantic via the Northwest Passage, since an alternative route around Cape Horn would not be contiguous to the whale's established territory. There has been gradual melting and recession of Arctic sea ice, with extreme loss in 2007 rendering the Northwest Passage "fully navigable". The same whale was sighted again on May 30, 2010, off the coast of Barcelona, Spain. In May 2013, a gray whale was sighted off Walvis Bay, Namibia. Scientists from the Namibian Dolphin Project confirmed the whale's identity, and this provided the only sighting of the species in the Southern Hemisphere. Photographic identification suggests that this was a different individual from the one spotted in the Mediterranean in 2010. As of July 2013, the Namibian whale was still being seen regularly. In March 2021, a gray whale was sighted near Rabat, the capital of Morocco. In April, additional sightings were made off Algeria and Italy. In December 2023, a gray whale was sighted off Sunny Isles Beach, Florida. Genetic analysis of fossil and subfossil gray whale remains in the Atlantic Ocean suggests several waves of dispersal from the Pacific to the Atlantic related to successive periods of climatic warming – during the Pleistocene before the last glacial period and the early Holocene immediately following the opening of the Bering Strait. This information and the recent sightings of Pacific gray whales in the Atlantic suggest that another range expansion to the Atlantic may be starting. Life history Reproduction Breeding behavior is complex and often involves three or more animals. Both male and female whales reach puberty between the ages of 6 and 12, with an average of eight to nine years. Females show highly synchronized reproduction, undergoing oestrus in late November to early December. During the breeding season, it is common for females to have several mates. This single ovulation event is believed to coincide with the species' annual migration patterns, when births can occur in warmer waters. Most females show biennial reproduction, although annual births have been reported. Males also show seasonal changes, experiencing an increase in testes mass that correlates with the time females undergo oestrus. Currently there are no accounts of twin births, although an instance of twins in utero has been reported. The gestation period for gray whales is approximately 13 months, with females giving birth every one to three years. In the latter half of the pregnancy, the fetus experiences rapid growth in length and mass. Mirroring the narrow breeding season, most calves are born within a six-week period in mid-January. The calf is born tail first and measures about 14–16 ft in length, with a weight of about 2,000 lb. Females lactate for approximately seven months following birth, at which point calves are weaned and maternal care begins to decrease. The shallow lagoon waters in which gray whales reproduce are believed to protect the newborn from sharks and orcas. On 7 January 2014, a pair of newborn or aborted conjoined twin gray whale calves were found dead in the Laguna Ojo de Liebre (Scammon's Lagoon), off the west coast of Mexico. They were joined at their bellies. Feeding The whale feeds mainly on benthic crustaceans (such as amphipods and ghost shrimp), which it eats by turning on its side and scooping up sediments from the sea floor.
This feeding preference makes the gray whale one of the baleen whales most strongly reliant on coastal waters. As a baleen whale, it has baleen, or whalebone, which acts like a sieve to capture small sea animals, including amphipods taken in along with sand, water and other material. Off Vancouver Island, gray whales commonly feed on shrimp-like mysids. When mysids are abundant, gray whales are present in fairly large numbers. Despite mysids being a prey of choice, gray whales are opportunistic feeders and can easily switch from feeding planktonically to benthically. When gray whales feed planktonically, they roll onto their right side while their fluke remains above the surface, or they apply the skimming method seen in other baleen whales (skimming the surface with their mouth open). This skimming behavior mainly seems to be used when gray whales are feeding on crab larvae. Other prey items include polychaete worms, herring eggs, various forms of larvae, and small fish. Gray whales feed benthically by diving to the ocean floor, rolling onto their side (like blue whales, gray whales seem to favor rolling onto their right side), and sucking up prey from the sea floor. Gray whales seem to favor feeding planktonically in their feeding grounds, but benthically along their migration route in shallower water. Mostly, the animal feeds in northern waters during the summer and feeds opportunistically during its migration, relying primarily on its extensive fat reserves. Another reason for this opportunistic feeding may be population increases: with greater competition, the whales take advantage of whatever prey is available. Feeding areas during migration seem to include the Gulf of California, Monterey Bay and Baja California Sur. Gray whale calves drink of their mothers' 53%-fat milk per day. The main feeding habitat of the western Pacific subpopulation is the shallow ( depth) shelf off northeastern Sakhalin Island, particularly off the southern portion of Piltun Lagoon, where the main prey species appear to be amphipods and isopods. In some years, the whales have also used an offshore feeding ground in depth southeast of Chayvo Bay, where benthic amphipods and cumaceans are the main prey species. Some gray whales have also been seen off western Kamchatka, but to date all whales photographed there are also known from the Piltun area. (Diagram: gray whale seafloor feeding strategy.) Migration Predicted distribution models indicate that the species' overall range during the last glacial period was broader, or distributed further south, than it is today, and that waters where the species is now absent, such as the Southern Hemisphere, South Asian waters and the northern Indian Ocean, may have been habitable environments at the time. Future range expansions driven by recovery and re-colonization are considered likely, and the predicted range is wider than today's. The gray whale undergoes the longest migration of any mammal. Eastern Pacific population Each October, as the northern ice pushes southward, small groups of eastern gray whales in the eastern Pacific start a two- to three-month trip south. Beginning in the Bering and Chukchi seas and ending in the warm-water lagoons of Mexico's Baja California Peninsula and the southern Gulf of California, they travel along the west coast of Canada, the United States and Mexico.
Traveling night and day, the gray whale averages approximately per day at an average speed of . This round trip of is believed to be the longest annual migration of any mammal. By mid-December to early January, the majority are usually found between Monterey and San Diego, such as at Morro Bay, often visible from shore. The whale watching industry provides ecotourists and marine mammal enthusiasts with the opportunity to see groups of gray whales as they migrate. By late December to early January, eastern grays begin to arrive in the calving lagoons and bays on the west coast of Baja California Sur. The three most popular are San Ignacio, Magdalena Bay to the south, and, to the north, Laguna Ojo de Liebre (formerly known in English as Scammon's Lagoon after whaleman Charles Melville Scammon, who discovered the lagoons in the 1850s and hunted the grays). Gray whales once ranged into the Sea of Cortez and along the Pacific coasts of continental Mexico south to the Islas Marías, Bahía de Banderas, and Nayarit/Jalisco, and there were two modern calving grounds in Sonora (Tojahui or Yavaros) and Sinaloa (Bahia Santa Maria, Bahia Navachiste, La Reforma, Bahia Altata) until they were abandoned in the 1980s. The first whales to arrive are usually pregnant mothers looking for the protection of the lagoons to bear their calves, along with single females seeking mates. By mid-February to mid-March, the bulk of the population has arrived in the lagoons, filling them with nursing, calving and mating gray whales. Throughout February and March, the first to leave the lagoons are males and females without new calves. Pregnant females and nursing mothers with their newborns are the last to depart, leaving only when their calves are ready for the journey, which is usually from late March to mid-April. Often, a few mothers linger with their young calves well into May. Whale watching in Baja's lagoons is particularly popular because the whales often come close enough to boats for tourists to pet them. By late March or early April, the returning animals can be seen from Puget Sound to Canada. Resident groups A population of about 200 gray whales stays along the eastern Pacific coast from Canada to California throughout the summer, not making the farther trip to Alaskan waters. This summer resident group is known as the Pacific Coast feeding group. Whether similar resident groups existed historically, or exist today, within the western population is unknown; however, whalers' logbooks and scientific observations suggest possible year-round occurrences in Chinese waters, and that the Yellow Sea and Bohai Sea basins were likely summering grounds. Some of the better documented historical catches show that it was common for whales to stay for months in enclosed waters elsewhere, with known records in the Seto Inland Sea and the Gulf of Tosa. Former feeding areas were once spread over large stretches of coast from mid-Honshu to northern Hokkaido, and whales were recorded during the majority of annual seasons, including wintering periods, at least along the east coasts of the Korean Peninsula and off Yamaguchi Prefecture.
Some recent observations suggest that resident whales may have occurred historically: a group of two or three was observed feeding at Izu Ōshima in 1994 for almost a month; single individuals stayed in Ise Bay for almost two months in the 1980s and again in 2012; and in 2014 the first living individuals confirmed in the Japanese EEZ in the Sea of Japan, and the first living cow-calf pair recorded since the end of whaling, stayed for about three weeks on the coastline of Teradomari. One of the pair returned to the same coast at the same time of year in 2015. A review of other cases at various locations along Japanese coasts and islands observed during 2015 indicates that spatial or seasonal residencies, whether temporary or permanent, once occurred throughout many parts of Japan and elsewhere in coastal Asia. Western population The current western gray whale population summers in the Sea of Okhotsk, mainly in the Piltun Bay region off the northeastern coast of Sakhalin Island (Russian Federation). There are also occasional sightings off the eastern coast of Kamchatka (Russian Federation) and in other coastal waters of the northern Okhotsk Sea. Its migration routes and wintering grounds are poorly known, the only recent information being from occasional records on both the eastern and western coasts of Japan and along the Chinese coast. Gray whales had not been observed off the Commander Islands until 2016. The northwestern Pacific population consists of approximately 300 individuals, based on photo identification data collected off Sakhalin Island and Kamchatka. The Sea of Japan was once thought not to have been a migration route, until several entanglements were recorded. No records of the species had been confirmed off Kyushu after 1921. However, there were numerous records of whales along the Genkai Sea off Yamaguchi Prefecture, in Ine Bay in the Gulf of Wakasa, and in Tsushima. Gray whales, along with other species such as right whales and Baird's beaked whales, were common features off the northeastern coast of Hokkaido near Teshio, Ishikari Bay near Otaru, the Shakotan Peninsula, and islands in the La Pérouse Strait such as Rebun Island and Rishiri Island. These areas may also have included feeding grounds. There are shallow, muddy areas favorable for feeding whales off Shiretoko, such as at Shibetsu, the Notsuke Peninsula, Cape Ochiishi on the Nemuro Peninsula, Mutsu Bay, along the Tottori Sand Dunes, in the Suou-nada Sea, and Ōmura Bay. The historical calving grounds are unknown but might have been along the southern Chinese coast from Zhejiang and Fujian Provinces to Guangdong, especially south of Hailing Island and near Hong Kong. Possibilities include Daya Bay, Wailou Harbour on the Leizhou Peninsula, and possibly as far south as Hainan Province and Guangxi, particularly around Hainan Island. These areas are at the southwestern end of the known range. It is unknown whether the whales' normal range once reached further south, to the Gulf of Tonkin. In addition, the existence of a historical calving ground around Taiwan and the Penghu Islands (where there are some fossil records and captures), and any presence in areas outside the known range, such as off the Babuyan Islands in the Philippines and in coastal Vietnamese waters in the Gulf of Tonkin, are unknown. There is only one confirmed record of the species being accidentally killed in Vietnam, at Ngoc Vung Island off Ha Long Bay in 1994; the skeleton is on exhibition at the Quang Ninh Provincial Historical Museum.
Gray whales are known to have occurred in the Taiwan Strait even in recent years. It is also unknown whether any winter breeding grounds ever existed beyond the Chinese coast. For example, it is not known if the whales visited the southern coasts of the Korean Peninsula, adjacent to the island of Jeju, Haiyang Island, the Gulf of Shanghai, or the Zhoushan Archipelago. There is no evidence of historical presence in Japan south of the Ōsumi Peninsula; only one skeleton has been discovered in Miyazaki Prefecture. The Seto Inland Sea was once considered to be a historical breeding ground, but only a handful of capture records support this idea, although migrations into the sea have been confirmed. Recent studies using genetics and acoustics suggest that there are several wintering sites for western gray whales, such as Mexico and the East China Sea. However, their wintering ground habits in the western North Pacific are still poorly understood and additional research is needed. Recent migration in Asian waters Even though South Korea has put the most effort among Asian nations into conservation of the species, there have been no confirmed sightings along the Korean Peninsula, or even in the Sea of Japan, in recent years. The last confirmed record in Korean waters was the sighting of a pair off Bangeojin, Ulsan, in 1977. Prior to this, the last record was of the catch of five animals off Ulsan in 1966. There was a possible sighting of a whale within the port of Samcheok in 2015. There have been 24 records along Chinese coasts since 1933, including sightings, strandings, deliberate hunts, and bycatches. The last reported occurrence of the species in Chinese waters was of a stranded semi-adult female in the Bohai Sea in 1996, and the only record in Chinese waters in the 21st century was of a fully-grown female killed by entanglement in Pingtan, China, in November 2007. DNA studies indicated that this individual might have originated from the eastern population rather than the western. The most notable observations of living whales from the 1980s onward were of 17 or 18 whales along Primorsky Krai in late October 1989 (prior to this, a pair had been reported swimming in the area in 1987), and of 14 whales in La Pérouse Strait on 13 June 1982 (in this strait, there was another sighting of a pair in October 1987). In 2011, the presence of gray whales was acoustically detected in pelagic waters of the East China Sea between Chinese and Japanese waters. Since the mid-1990s, almost all the confirmed records of living animals in Asian waters have been from Japanese coasts. There have been eight to fifteen sighting and stray records, including unconfirmed sightings and re-sightings of the same individual, and one animal later killed by net entanglement. The most notable of these observations are listed below: The feeding activities of a group of two or three whales that stayed around Izu Ōshima in 1994 for almost a month were recorded underwater by several researchers and whale photographers. A pair of thin juveniles was sighted in 1997 off Kuroshio, Kōchi, a town renowned for whale-watching tourism centered on resident and semi-resident populations of Bryde's whales. This sighting was unusual because of the mid-latitude location in summer. Another pair of sub-adults was confirmed swimming near the mouth of the Otani River in Suruga Bay in May 2003.
A sub-adult whale that stayed in Ise and Mikawa Bays for nearly two months in 2012 was later confirmed to be the same individual as the small whale observed off Tahara near Cape Irago in 2010, making it the first confirmed case of repeated migration out of Russian waters. The juvenile observed off Owase in the Kumanonada Sea in 2009 might or might not have been the same individual. The Ise and Mikawa Bay region is the only location along Japanese coasts with several records since the 1980s (a fatal entanglement in 1968, the above-mentioned short stay in 1982, and a self-freed entanglement in 2005), and is also the location where the first commercial whaling started. Other areas with several sighting or stranding records in recent years are the Kumanonada Sea off Wakayama, the waters off the Oshika Peninsula in Tōhoku, and coastlines close to Tomakomai, Hokkaido. Possibly the first confirmed record of living animals in Japanese waters in the Sea of Japan since the end of whaling occurred on 3 April 2014 at Nodumi Beach, Teradomari, Niigata. Two individuals, measuring ten and five metres respectively, stayed near the mouth of the Shinano River for three weeks. It is unknown whether this was a cow-calf pair, which would have been a first record in Asia. All of the previous modern records in the Sea of Japan were of by-catches. One of the above pair returned to the same beaches at the same time of year in 2015. A juvenile, possibly accompanied by another, larger individual, remained in Japanese waters between January or March and May 2015. These were the first confirmed occurrences of the species around remote oceanic islands in Japan. One or more whales first visited waters off Kōzu-shima and Nii-jima for weeks, then stayed adjacent to Miho no Matsubara, behind the Tokai University campus, for several weeks. Possibly the same individual was seen off Futo as well. This whale was later identified as the same individual previously recorded off Sakhalin in 2014, the first time a single individual had been re-recorded at different Asian locations. A young whale was observed by land-based fishermen at Cape Irago in March 2015. One of the above pair appeared in 2015 off southeastern Japan and then reappeared off Tateyama in January 2016. The identity of this whale was confirmed by Nana Takanawa, who had photographed the same whale off Niijima in 2015. Likely the same individual was sighted off Futo and, half an hour later, off Akazawa beach in Itō, Shizuoka, on the 14th. The whale then stayed next to a pier on Miyake-jima and later at Habushi beach on Niijima, the same beach the individual had stayed near the previous year. One whale was beached near Wadaura on March 4, 2016. Investigation of the carcass indicated that this was likely a different individual from the above animal. A carcass of a young female was first reported floating off Atami on 4 April and was then washed ashore at Itō on the 6th. As of April 20, 2017, one or more whales had been staying within Tokyo Bay since February, although at one point another whale, which may or may not have been the same individual, was sighted off Hayama, Kanagawa. The exact number of whales involved in these sightings is unclear; fishermen reported two whales, and the Japan Coast Guard reported three whales on the 20th or 21st. Whaling North Pacific Eastern population Humans and orcas are the adult gray whale's only predators, although orcas are the more prominent predator. Aboriginal hunters, including those on Vancouver Island and the Makah in Washington, have hunted gray whales.
Commercial whaling by Europeans of the species in the North Pacific began in the winter of 1845–46, when two United States ships, the Hibernia and the United States, under Captains Smith and Stevens, caught 32 in Magdalena Bay. More ships followed in the two following winters, after which gray whaling in the bay was nearly abandoned because "of the inferior quality and low price of the dark-colored gray whale oil, the low quality and quantity of whalebone from the gray, and the dangers of lagoon whaling." Gray whaling in Magdalena Bay was revived in the winter of 1855–56 by several vessels, mainly from San Francisco, including the ship Leonore, under Captain Charles Melville Scammon. This was the first of 11 winters from 1855 through 1865 known as the "bonanza period", during which gray whaling along the coast of Baja California reached its peak. Not only were the whales taken in Magdalena Bay, but also by ships anchored along the coast from San Diego south to Cabo San Lucas and from whaling stations from Crescent City in northern California south to San Ignacio Lagoon. During the same period, vessels targeting right and bowhead whales in the Gulf of Alaska, Sea of Okhotsk, and the Western Arctic would take the odd gray whale if neither of the more desirable two species were in sight. In December 1857, Charles Scammon, in the brig Boston, along with his schooner-tender Marin, entered Laguna Ojo de Liebre (Jack-Rabbit Spring Lagoon) or later known as Scammon's Lagoon (by 1860) and found one of the gray's last refuges. He caught 20 whales. He returned the following winter (1858–59) with the bark Ocean Bird and schooner tenders A.M. Simpson and Kate. In three months, he caught 47 cows, yielding of oil. In the winter of 1859–60, Scammon, again in the bark Ocean Bird, along with several other vessels, entered San Ignacio Lagoon to the south where he discovered the last breeding lagoon. Within only a couple of seasons, the lagoon was nearly devoid of whales. Between 1846 and 1874, an estimated 8,000 gray whales were killed by American and European whalemen, with over half having been killed in the Magdalena Bay complex (Estero Santo Domingo, Magdalena Bay itself, and Almejas Bay) and by shore whalemen in California and Baja California. A second, shorter, and less intensive hunt occurred for gray whales in the eastern North Pacific. Only a few were caught from two whaling stations on the coast of California from 1919 to 1926, and a single station in Washington (1911–21) accounted for the capture of another. For the entire west coast of North America for the years 1919 to 1929, 234 gray whales were caught. Only a dozen or so were taken by British Columbian stations, nearly all of them in 1953 at Coal Harbour. A whaling station in Richmond, California, caught 311 gray whales for "scientific purposes" between 1964 and 1969. From 1961 to 1972, the Soviet Union caught 138 gray whales (they originally reported not having taken any). The only other significant catch was made in two seasons by the steam-schooner California off Malibu, California. In the winters of 1934–35 and 1935–36, the California anchored off Point Dume in Paradise Cove, processing gray whales. In 1936, gray whales became protected in the United States. Western population The Japanese began to catch gray whales beginning in the 1570s. At Kawajiri, Nagato, 169 gray whales were caught between 1698 and 1889. At Tsuro, Shikoku, 201 were taken between 1849 and 1896. 
Several hundred more were probably caught by American and European whalemen in the Sea of Okhotsk from the 1840s to the early 20th century. Whalemen caught 44 with nets in Japan during the 1890s. The real damage was done between 1911 and 1933, when Japanese whalemen killed 1,449 after Japanese companies established several whaling stations on the Korean Peninsula and on the Chinese coast, such as near Daya Bay and on Hainan Island. By 1934, the western gray whale was near extinction. From 1891 to 1966, an estimated 1,800–2,000 gray whales were caught, with peak catches of between 100 and 200 annually occurring in the 1910s. As of 2001, the Californian gray whale population had grown to about 26,000. As of 2016, the population of western Pacific (seas near Korea, Japan, and Kamchatka) gray whales was an estimated 200. North Atlantic The North Atlantic population may have been hunted to extinction in the 18th century. Circumstantial evidence indicates whaling could have contributed to this population's decline, as the increase in whaling activity in the 17th and 18th centuries coincided with the population's disappearance. A. B. Van Deinse points out that the "scrag whale", described by P. Dudley in 1725 as one target of early New England whalers, was almost certainly the gray whale. In his 1835 history of Nantucket Island, Obed Macy wrote that in the early pre-1672 colony, a whale of the kind called "scragg" entered the harbor and was pursued and killed by the settlers. Gray whales (Icelandic sandlægja) were described in Iceland in the early 17th century. Commercial whaling operations within the Mediterranean basin have also been considered feasible. Conservation Gray whales have been granted protection from commercial hunting by the International Whaling Commission (IWC) since 1949, and are no longer hunted on a large scale. Limited hunting of gray whales has continued since that time, however, primarily in the Chukotka region of northeastern Russia, where large numbers of gray whales spend the summer months. This hunt has been allowed under an "aboriginal/subsistence whaling" exception to the commercial-hunting ban. Anti-whaling groups have protested the hunt, saying the meat from the whales is not for traditional native consumption, but is used instead to feed animals in government-run fur farms; they cite annual catch numbers that rose dramatically during the 1940s, at the time when state-run fur farms were being established in the region. Although the Soviet government denied these charges as recently as 1987, in recent years the Russian government has acknowledged the practice. The Russian IWC delegation has said that the hunt is justified under the aboriginal/subsistence exemption, since the fur farms provide a necessary economic base for the region's native population. Currently, the quota for the gray whale catch in the region is 140 per year. Pursuant to an agreement between the United States and Russia, the Makah tribe of Washington claimed four whales from the IWC quota established at the 1997 meeting. With the exception of a single gray whale killed in 1999, the Makah people have been prevented from hunting by a series of legal challenges, culminating in a United States federal appeals court decision in December 2002 that required the National Marine Fisheries Service to prepare an Environmental Impact Statement. On September 8, 2007, five members of the Makah tribe shot a gray whale using high-powered rifles in spite of the decision.
The whale died within 12 hours, sinking while heading out to sea. As of 2018, the IUCN regards the gray whale as being of least concern from a conservation perspective. However, the specific subpopulation in the northwest Pacific is regarded as being critically endangered. The northwest Pacific population is also listed as endangered by the U.S. government's National Marine Fisheries Service under the U.S. Endangered Species Act. The IWC Bowhead, Right and Gray Whale subcommittee in 2011 reiterated that the conservation risk to western gray whales is large because of the small size of the population and the potential anthropogenic impacts. Gray whale migrations off the Pacific Coast were initially observed by Marineland of the Pacific in Palos Verdes, California. The Gray Whale Census, an official migration census, has been recording data on the migration of the Pacific gray whale and keeping track of its population since 1985, making it the longest-running census of the Pacific gray whale. Census keepers volunteer from December 1 through May, from sunup to sundown, seven days a week, keeping track of the number of gray whales migrating through the area off Los Angeles. Information from this census is listed through the American Cetacean Society of Los Angeles (ACSLA). South Korea and China list gray whales as protected species of high concern. In South Korea, the species was registered as the 126th national monument in 1962, although illegal hunts have taken place thereafter, and there have been no recent sightings of the species in Korean waters. Rewilding proposal In 2005, two conservation biologists proposed a plan to airlift 50 gray whales from the Pacific Ocean to the Atlantic Ocean. They reasoned that, as Californian gray whales had replenished to a suitable population, surplus whales could be transported to repopulate the extinct British population. This plan has not been undertaken. Threats According to the Government of Canada's Management Plan for gray whales, threats to the eastern North Pacific population of gray whales include increased human activities in their breeding lagoons in Mexico, climate change, acute noise, toxic spills, aboriginal whaling, entanglement in fishing gear, boat collisions, and possible impacts from fossil fuel exploration and extraction. Western gray whales are facing large-scale offshore oil and gas development programs near their summer feeding grounds, as well as fatal net entrapments off Japan during migration, which pose significant threats to the future survival of the population. The substantial nearshore industrialization and shipping congestion throughout the migratory corridors of the western gray whale population represent potential threats by increasing the likelihood of exposure to ship strikes, chemical pollution, and general disturbance. Offshore gas and oil development in the Okhotsk Sea within of the primary feeding ground off northeast Sakhalin Island is of particular concern. Activities related to oil and gas exploration, including geophysical seismic surveying, pipelaying and drilling operations, increased vessel traffic, and oil spills, all pose potential threats to western gray whales. Disturbance from underwater industrial noise may displace whales from critical feeding habitat. Physical habitat damage from drilling and dredging operations, combined with possible impacts of oil and chemical spills on benthic prey communities, also warrants concern.
The western gray whale population is considered endangered according to IUCN standards. Along Japanese coasts, four females, including a cow-calf pair, were trapped and killed in nets in the 2000s. In the 1990s, a dead whale thought to have been harpooned by dolphin hunters was found on Hokkaido. Gray whale meat offered for sale has also been discovered in Japanese markets. 2019 saw a record number of gray whale strandings and deaths, with 122 strandings in United States waters and 214 in Canadian waters. The cause of death in some specimens appears to be related to poor nutritional condition. It is hypothesized that some of these strandings are related to changes in prey abundance or quality in the Arctic feeding grounds, resulting in poor feeding. Some scientists suggest that the lack of sea ice has been preventing the fertilization of amphipods, a main source of food for gray whales, so that the whales have been hunting krill instead, which is far less nutritious. More research needs to be conducted to understand this issue. A recent study provides some evidence that solar activity is correlated with gray whale strandings. When there was a high prevalence of sunspots, gray whales were five times more likely to strand. A possible explanation for this phenomenon is that solar storms release a large amount of electromagnetic radiation, which disrupts Earth's magnetic field and/or the whale's ability to analyze it. This may also apply to other species of cetaceans, such as sperm whales. However, there is not enough evidence to suggest that whales navigate through the use of magnetoreception (an organism's ability to sense a magnetic field). Orcas are "a prime predator of gray whale calves." Typically three to four orcas ram a calf from beneath in order to separate it from its mother, who defends it. Humpback whales have been observed defending gray whale calves from orcas. Orcas will often arrive in Monterey Bay to intercept gray whales during their northbound migration, targeting females migrating with newborn calves. They will separate the calf from the mother and hold the calf under water to drown it. The tactic of holding whales under water to drown them is also known to be used by orcas on adult gray whales. It is roughly estimated that 33% of the gray whales born in a given year might be killed by predation. Captivity Because of their size and need to migrate, gray whales have rarely been held in captivity, and then only for brief periods of time. The first captive gray whale, which was captured in Scammon's Lagoon, Baja California, in 1965, was named Gigi and died two months later from an infection. The second gray whale, which was captured in 1972 from the same lagoon, was named Gigi II and was released a year later after becoming too large for the facilities. The third gray whale, J.J., first beached herself in Marina del Rey, California, from where she was rushed to SeaWorld San Diego. After 14 months, she was released because she also grew too large to be cared for in the existing facilities. At and when she was released, J.J. was the largest marine mammal ever to be kept in captivity.
Biology and health sciences
Baleen whales
Animals
13034
https://en.wikipedia.org/wiki/Geyser
Geyser
A geyser (, ) is a spring with an intermittent discharge of water ejected turbulently and accompanied by steam. The formation of geysers is fairly rare, and is caused by particular hydrogeological conditions that exist only in a few places on Earth. Generally, geyser field sites are located near active volcanic areas, and the geyser effect is due to the proximity of magma. Surface water works its way down to an average depth of around where it contacts hot rocks. The pressurized water boils, and this causes the geyser effect of hot water and steam spraying out of the geyser's surface vent. A geyser's eruptive activity may change or cease due to ongoing mineral deposition within the geyser plumbing, exchange of functions with nearby hot springs, earthquake influences, and human intervention. Like many other natural phenomena, geysers are not unique to Earth. Jet-like eruptions, often referred to as cryogeysers, have been observed on several of the moons of the outer Solar System. Due to the low ambient pressures, these eruptions consist of vapour without liquid; they are made more easily visible by particles of dust and ice carried aloft by the gas. Water vapour jets have been observed near the south pole of Saturn's moon Enceladus, while nitrogen eruptions have been observed on Neptune's moon Triton. There are also signs of carbon dioxide eruptions from the southern polar ice cap of Mars. In the case of Enceladus, the plumes are believed to be driven by internal energy. In the cases of the venting on Mars and Triton, the activity may be a result of solar heating via a solid-state greenhouse effect. In all three cases, there is no evidence of the subsurface hydrological system which differentiates terrestrial geysers from other sorts of venting, such as fumaroles. Etymology The term 'geyser' in English dates back to the late 18th century and comes from Geysir, which is a geyser in Iceland. Its name means "one who gushes". Geology Form and function Geysers are nonpermanent geological features. Geysers are generally associated with areas of recent magmatism. As the water boils, the resulting pressure forces a superheated column of steam and water to the surface through the geyser's internal plumbing. The formation of geysers specifically requires the combination of three geologic conditions that are usually found in volcanic terrain: heat, water, and a subsurface hydraulic system with the right geometry. The heat needed for geyser formation comes from magma that needs to be close to the surface of the Earth. For the heated water to form a geyser, a plumbing system (made of fractures, fissures, porous spaces, and sometimes cavities) is required. This includes a reservoir to hold the water while it is being heated. Geysers tend to be coated with geyserite, or siliceous sinter. The water in geysers comes in contact with hot silica-containing rocks, such as rhyolite. The heated water dissolves the silica. As it gets closer to the surface, the water cools and the silica drops out of solution, leaving a deposit of amorphous opal. Gradually the opal anneals into quartz, forming geyserite. Geyserite often covers the microbial mats that grow in geysers. As the mats grow and the silica is deposited, the mats can form up to 50% of the volume of the geyserite. Eruptions Geyser activity, like all hot spring activity, is caused by surface water gradually seeping down through the ground until it meets geothermally heated rock. 
In non-eruptive hot springs, the heated water then rises back toward the surface by convection through porous and fractured rocks, while in geysers, the water instead is explosively forced upwards by the high steam pressure created when water boils below. Geysers also differ from non-eruptive hot springs in their subterranean structure: geysers have constrictions in their plumbing that create pressure build-up. As the geyser fills, the water at the top of the column cools off, but because of the narrowness of the channel, convective cooling of the water in the reservoir is impossible. The cooler water above presses down on the hotter water beneath, not unlike the lid of a pressure cooker, allowing the water in the reservoir to become superheated, i.e. to remain liquid at temperatures well above the standard-pressure boiling point. Ultimately, the temperatures near the bottom of the geyser rise to a point where boiling begins, forcing steam bubbles to rise to the top of the column. As they burst through the geyser's vent, some water overflows or splashes out, reducing the weight of the column and thus the pressure on the water below. With this release of pressure, the superheated water flashes into steam, boiling violently throughout the column. The resulting froth of expanding steam and hot water then sprays out of the geyser vent. Eventually the water remaining in the geyser cools back to below the boiling point and the eruption ends; heated groundwater begins seeping back into the reservoir, and the whole cycle begins again. The duration of eruptions and the time between successive eruptions vary greatly from geyser to geyser; Strokkur in Iceland erupts for a few seconds every few minutes, while Grand Geyser in the United States erupts for up to 10 minutes every 8–12 hours.
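To make the superheating effect concrete, the following is a rough, illustrative calculation rather than anything from the source: the 100 m column depth is an assumed value chosen only for illustration, and the Clausius–Clapeyron relation is used as an approximation to the boiling curve of water. The hydrostatic pressure of the overlying column raises the local boiling point far above 100 °C, which is why the water at depth can remain liquid until part of the column above it is ejected:

P(h) = P_0 + \rho g h \approx 1.013\times10^{5}\,\mathrm{Pa} + (1000\,\mathrm{kg\,m^{-3}})(9.8\,\mathrm{m\,s^{-2}})(100\,\mathrm{m}) \approx 1.08\,\mathrm{MPa}

T_b(P) \approx \left[\frac{1}{T_0} - \frac{R}{L_m}\ln\frac{P}{P_0}\right]^{-1} \approx \left[\frac{1}{373\,\mathrm{K}} - \frac{8.314}{40\,660}\ln(10.7)\right]^{-1} \approx 455\,\mathrm{K} \approx 182\,^{\circ}\mathrm{C}

Here P_0 is atmospheric pressure, T_0 = 373 K the boiling point at P_0, and L_m \approx 40.66 kJ/mol the molar latent heat of vaporization of water. Under these assumptions, water 100 m down the column stays liquid up to roughly 180 °C; once overflow at the vent removes part of the column, the pressure drops back toward atmospheric and any water hotter than 100 °C flashes to steam, as described above.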
General categorization There are two types of geysers: fountain geysers, which erupt from pools of water, typically in a series of intense, even violent, bursts; and cone geysers, which erupt from cones or mounds of siliceous sinter (including geyserite), usually in steady jets that last anywhere from a few seconds to several minutes. Old Faithful, perhaps the best-known geyser at Yellowstone National Park, is an example of a cone geyser. Grand Geyser, the tallest predictable geyser on Earth (although Geysir in Iceland is taller, it is not predictable), also at Yellowstone National Park, is an example of a fountain geyser. There are many volcanic areas in the world that have hot springs, mud pots and fumaroles, but very few have erupting geysers. The main reason for their rarity is that multiple intense transient forces must occur simultaneously for a geyser to exist. For example, even when other necessary conditions exist, if the rock structure is loose, eruptions will erode the channels and rapidly destroy any nascent geysers. Geysers are fragile, and if conditions change, they may go dormant or extinct. Many have been destroyed simply by people throwing debris into them, while others have ceased to erupt due to dewatering by geothermal power plants. However, Geysir in Iceland has had periods of activity and dormancy. During its long dormant periods, eruptions were sometimes artificially induced (often on special occasions) by the addition of surfactant soaps to the water. Biology Some geysers have specific colours because, despite the harsh conditions, life is often found in them (and also in other hot habitats) in the form of thermophilic prokaryotes. No known eukaryote can survive over . In the 1960s, when research on the biology of geysers first appeared, scientists were generally convinced that no life can survive above around , the upper limit for the survival of cyanobacteria, as the structure of key cellular proteins and deoxyribonucleic acid (DNA) would be destroyed. The optimal temperature for thermophilic bacteria was placed even lower, around . However, observations proved that life can exist at high temperatures and that some bacteria even prefer temperatures higher than the boiling point of water. Dozens of such bacteria are known. Thermophiles prefer temperatures from , while hyperthermophiles grow better at temperatures as high as . As they have heat-stable enzymes that retain their activity even at high temperatures, they have been used as a source of thermostable tools, which are important in medicine and biotechnology, for example in manufacturing antibiotics, plastics, detergents (by the use of heat-stable enzymes such as lipases, pullulanases and proteases), and fermentation products (for example, ethanol). Among these, the first discovered and the most important for biotechnology is Thermus aquaticus. Major geyser fields and their distribution Geysers are quite rare, requiring a combination of water, heat, and fortuitous plumbing. The combination exists in few places on Earth. Yellowstone National Park Yellowstone is the largest geyser locale, containing thousands of hot springs and approximately 300 to 500 geysers. It is home to half of the world's total number of geysers in its nine geyser basins. It is located mostly in Wyoming, USA, with small portions in Montana and Idaho. Yellowstone includes the world's tallest active geyser (Steamboat Geyser in Norris Geyser Basin). Valley of Geysers, Russia The Valley of Geysers (), located on the Kamchatka Peninsula of Russia, is the second-largest concentration of geysers in the world. The area was discovered and explored by Tatyana Ustinova in 1941. There are about 200 geysers in the area, along with many hot-water springs and perpetual spouters. The area was formed by vigorous volcanic activity. The peculiar style of eruption is an important feature of these geysers. Most of the geysers erupt at angles, and only very few have the geyser cones found at many of the world's other geyser fields. On 3 June 2007, a massive mudflow affected two-thirds of the valley. It was then reported that a thermal lake was forming above the valley. Four of the eight thermal areas in the valley were covered by the landslide or by the lake. Velikan Geyser, one of the field's largest, was not buried in the slide, but the slide shortened its period of eruption from 379 minutes before the slide to 339 minutes after (through 2010). El Tatio, Chile The name "El Tatio" comes from the Quechua word for oven. El Tatio is located in the high valleys of the Andes in Chile, surrounded by many active volcanoes, at around above mean sea level. The valley is home to approximately 80 geysers at present. It became the largest geyser field in the Southern Hemisphere after the destruction of many of the New Zealand geysers, and is the third-largest geyser field in the world. The salient feature of these geysers is that the height of their eruptions is very low, the tallest being only high, but with steam columns that can be over high. The average geyser eruption height at El Tatio is about . Taupō Volcanic Zone, New Zealand The Taupō Volcanic Zone is located on New Zealand's North Island.
It is long by and lies over a subduction zone in the Earth's crust. Mount Ruapehu marks its southwestern end, while the submarine Whakatāne seamount ( beyond Whakaari / White Island) is considered its northeastern limit. Many geysers in this zone were destroyed due to geothermal developments and a hydroelectric reservoir: only one geyser basin at Whakarewarewa remains. In the beginning of the 20th century, the largest geyser ever known, the Waimangu Geyser, existed in this zone. It began erupting in 1900 and erupted periodically for four years until a landslide changed the local water table. Eruptions of Waimangu would typically reach and some superbursts are known to have reached . Recent scientific work indicates that the Earth's crust below the zone may be as little as thick. Beneath this lies a film of magma wide and long. Iceland Due to the high rate of volcanic activity in Iceland, it is home to some of the most famous geysers in the world. There are around 20–29 active geysers in the country, as well as numerous formerly active geysers. Icelandic geysers are distributed in the zone stretching from south-west to north-east, along the boundary between the Eurasian Plate and the North American Plate. Most of the Icelandic geysers are comparatively short-lived. It is also characteristic that many geysers here are reactivated or newly created after earthquakes, becoming dormant or extinct after some years or some decades. Two most prominent geysers of Iceland are located in Haukadalur. The Great Geysir, which first erupted in the 14th century, gave rise to the word geyser. By 1896, Geysir was almost dormant before an earthquake that year caused eruptions to begin again, occurring several times a day; but in 1916, eruptions all but ceased. Throughout much of the 20th century, eruptions did happen from time to time, usually following earthquakes. Some man-made improvements were made to the spring and eruptions were forced with soap on special occasions. Earthquakes in June 2000 subsequently reawakened the giant for a time, but it is not currently erupting regularly. The nearby Strokkur geyser erupts every 5–8 minutes to a height of some . Extinct and dormant geyser fields There used to be two large geyser fields in Nevada—Beowawe and Steamboat Springs—but they were destroyed by the installation of nearby geothermal power plants. At the plants, geothermal drilling reduced the available heat and lowered the local water table to the point that geyser activity could no longer be sustained. Many of New Zealand's geysers have been destroyed by humans in the last century. Several New Zealand geysers have also become dormant or extinct by natural means. The main remaining field is Whakarewarewa at Rotorua. Two-thirds of the geysers at Orakei Korako were flooded by the construction of the hydroelectric Ohakuri dam in 1961. The Wairakei field was lost to a geothermal power plant in 1958. The Rotomahana field was destroyed by the 1886 eruption of Mount Tarawera. Misnamed geysers There are various other types of geysers which are different in nature compared to the normal steam-driven geysers. These geysers differ not only in their style of eruption but also in the cause that makes them erupt. Artificial geysers In a number of places where there is geothermal activity, wells have been drilled and fitted with impermeable casements that allow them to erupt like geysers. The vents of such geysers are artificial, but are tapped into natural hydrothermal systems. 
These so-called artificial geysers, technically known as erupting geothermal wells, are not true geysers. Little Old Faithful Geyser, in Calistoga, California, is an example. The geyser erupts from the casing of a well drilled in the late 19th century, which opened up a dead geyser. In the case of the Big Mine Run Geyser in Ashland, Pennsylvania, the heat powering the geyser (which erupts from an abandoned mine vent) comes not from geothermal power, but from the long-simmering Centralia mine fire. Perpetual spouter This is a natural hot spring that spouts water constantly without stopping for recharge. Some of these are incorrectly called geysers, but because they are not periodic in nature they are not considered true geysers. Commercialization Geysers are used for various activities such as electricity generation, heating and geotourism. Many geothermal reserves are found all around the world. The geyser fields in Iceland are some of the most commercially viable geyser locations in the world. Since the 1920s hot water directed from the geysers has been used to heat greenhouses and to grow food that otherwise could not have been cultivated in Iceland's inhospitable climate. Steam and hot water from the geysers has also been used for heating homes since 1943 in Iceland. In 1979 the U.S. Department of Energy (DOE) actively promoted development of geothermal energy in the "Geysers-Calistoga Known Geothermal Resource Area" (KGRA) near Calistoga, California through a variety of research programs and the Geothermal Loan Guarantee Program. The department is obligated by law to assess the potential environmental impacts of geothermal development. Extraterrestrial geyser-like features There are many bodies in the Solar System where eruptions which superficially resemble terrestrial geysers have been observed or are believed to occur. Despite being commonly referred to as geysers, they are driven by fundamentally different processes, consist of a wide range of volatiles, and can occur on vastly disparate scales; from the modestly sized Martian carbon dioxide jets to the immense plumes of Enceladus. Generally, there are two broad categories of feature commonly referred to as geysers: sublimation plumes, and cryovolcanic plumes (also referred to as cryogeysers). Sublimation plumes are jets of sublimated volatiles and dust from shallow sources under icy surfaces. Known examples include the CO2 jets on Mars, and the nitrogen eruptions on Neptune's moon Triton. On Mars carbon dioxide jets are believed to occur in the southern polar region of Mars during spring, as a layer of dry ice accumulated over winter is warmed by the sun. Although these jets have not yet been directly observed, they leave evidence visible from orbit in the form of dark spots and lighter fans atop the dry ice. These features consist primarily of sand and dust blown out by the outbursts, as well as spider-like patterns of channels created below the ice by the rapid flow of CO2 gas. There are a plethora of theories to explain the eruptions, including heating from sunlight, chemical reactions, or even biological activity. Triton was found to have active eruptions of nitrogen and dust by Voyager 2 when it flew past the moon in 1989. These plumes were up to 8km high, where winds would blow them up to 150km downwind, creating long, dark streaks across the otherwise bright south polar ice cap. 
There are various theories as to what drives the activity on Triton, such as solar heating through transparent ice, cryovolcanism, or basal heating of nitrogen ice sheets. Cryovolcanic plumes or cryogeysers generally refer to large-scale eruptions of predominantly water vapour from active cryovolcanic features on certain icy moons. Such plumes occur on Saturn's moon Enceladus and Jupiter's moon Europa. Plumes of water vapour, together with ice particles and smaller amounts of other components (such as carbon dioxide, nitrogen, ammonia, hydrocarbons and silicates), have been observed erupting from vents associated with the "tiger stripes" in the south polar region of Enceladus by the Cassini orbiter. These plumes are the source of the material in Saturn's E ring. The mechanism by which these eruptions are generated remains uncertain, as does the extent to which they are physically linked to Enceladus' subsurface ocean, but they are believed to be powered at least in part by tidal heating. Cassini flew through these plumes several times, allowing direct analysis of water from inside another Solar System body for the first time. In December 2013, the Hubble Space Telescope detected water vapour plumes potentially 200 km high above the south polar region of Europa. Re-examination of Galileo data also suggested that the spacecraft may have flown through a plume during a flyby in 1997. Water was also detected by the Keck Observatory in 2016, a finding announced in a 2019 Nature article that speculated the cause to be a cryovolcanic eruption. It is thought that Europa's lineae might be venting this water vapour into space in a similar manner to the "tiger stripes" of Enceladus.
Gaussian elimination
In mathematics, Gaussian elimination, also known as row reduction, is an algorithm for solving systems of linear equations. It consists of a sequence of row-wise operations performed on the corresponding matrix of coefficients. This method can also be used to compute the rank of a matrix, the determinant of a square matrix, and the inverse of an invertible matrix. The method is named after Carl Friedrich Gauss (1777–1855). To perform row reduction on a matrix, one uses a sequence of elementary row operations to modify the matrix until the lower left-hand corner of the matrix is filled with zeros, as much as possible. There are three types of elementary row operations: Swapping two rows, Multiplying a row by a nonzero number, Adding a multiple of one row to another row. Using these operations, a matrix can always be transformed into an upper triangular matrix (possibly bordered by rows or columns of zeros), and in fact one that is in row echelon form. Once all of the leading coefficients (the leftmost nonzero entry in each row) are 1, and every column containing a leading coefficient has zeros elsewhere, the matrix is said to be in reduced row echelon form. This final form is unique; in other words, it is independent of the sequence of row operations used. For example, in the following sequence of row operations (where two elementary operations on different rows are done at the first and third steps), the third and fourth matrices are the ones in row echelon form, and the final matrix is the unique reduced row echelon form. Using row operations to convert a matrix into reduced row echelon form is sometimes called Gauss–Jordan elimination. In this case, the term Gaussian elimination refers to the process until it has reached its upper triangular, or (unreduced) row echelon form. For computational reasons, when solving systems of linear equations, it is sometimes preferable to stop row operations before the matrix is completely reduced. Definitions and example of algorithm The process of row reduction makes use of elementary row operations, and can be divided into two parts. The first part (sometimes called forward elimination) reduces a given system to row echelon form, from which one can tell whether there are no solutions, a unique solution, or infinitely many solutions. The second part (sometimes called back substitution) continues to use row operations until the solution is found; in other words, it puts the matrix into reduced row echelon form. Another point of view, which turns out to be very useful for analyzing the algorithm, is that row reduction produces a matrix decomposition of the original matrix. The elementary row operations may be viewed as the multiplication on the left of the original matrix by elementary matrices. Alternatively, a sequence of elementary operations that reduces a single row may be viewed as multiplication by a Frobenius matrix. Then the first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row echelon matrix. Row operations There are three types of elementary row operations which may be performed on the rows of a matrix: Interchanging two rows. Multiplying a row by a non-zero scalar. Adding a scalar multiple of one row to another. If the matrix is associated with a system of linear equations, then these operations do not change the solution set.
Therefore, if one's goal is to solve a system of linear equations, then using these row operations could make the problem easier. Echelon form For each row in a matrix, if the row does not consist of only zeros, then the leftmost nonzero entry is called the leading coefficient (or pivot) of that row. So if two leading coefficients are in the same column, then a row operation of type 3 could be used to make one of those coefficients zero. Then by using the row swapping operation, one can always order the rows so that for every non-zero row, the leading coefficient is to the right of the leading coefficient of the row above. If this is the case, then the matrix is said to be in row echelon form. So the lower left part of the matrix contains only zeros, and all of the zero rows are below the non-zero rows. The word "echelon" is used here because one can roughly think of the rows being ranked by their size, with the largest being at the top and the smallest being at the bottom. For example, the following matrix is in row echelon form, and its leading coefficients are shown in red: It is in echelon form because the zero row is at the bottom, and the leading coefficient of the second row (in the third column) is to the right of the leading coefficient of the first row (in the second column). A matrix is said to be in reduced row echelon form if furthermore all of the leading coefficients are equal to 1 (which can be achieved by using the elementary row operation of type 2), and in every column containing a leading coefficient, all of the other entries in that column are zero (which can be achieved by using elementary row operations of type 3). Example of the algorithm Suppose the goal is to find and describe the set of solutions to the following system of linear equations: The table below shows the row reduction process applied simultaneously to the system of equations and its associated augmented matrix. In practice, one does not usually deal with the systems in terms of equations, but instead makes use of the augmented matrix, which is more suitable for computer manipulations. The row reduction procedure may be summarized as follows: eliminate from all equations below , and then eliminate from all equations below . This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for. (The original table, not reproduced here, works through the reduction in three columns, "System of equations", "Row operations", and "Augmented matrix", noting midway that the matrix is now in echelon form, also called triangular form.) The second column describes which row operations have just been performed. So for the first step, the is eliminated from by adding to . Next, is eliminated from by adding to . These row operations are labelled in the table as Once is also eliminated from the third row, the result is a system of linear equations in triangular form, and so the first part of the algorithm is complete. From a computational point of view, it is faster to solve the variables in reverse order, a process known as back-substitution. One sees the solution is , , and . So there is a unique solution to the original system of equations. Instead of stopping once the matrix is in echelon form, one could continue until the matrix is in reduced row echelon form, as is done in the table. A short code sketch of this elimination-and-back-substitution procedure is given below.
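The numbers of the worked example above are not reproduced in this text, but the procedure it illustrates, forward elimination followed by back-substitution, is short enough to sketch in code. The following Python fragment is only an illustrative sketch: the 3 × 3 system it solves is an assumed example, not the one from the table, and it assumes every pivot it meets is nonzero (no pivoting).

    def solve_by_elimination(A, b):
        """Solve A x = b by forward elimination and back-substitution."""
        n = len(A)
        A = [row[:] for row in A]            # work on copies
        b = b[:]
        # Forward elimination: zero out the entries below each pivot.
        for k in range(n):
            for i in range(k + 1, n):
                f = A[i][k] / A[k][k]        # assumes a nonzero pivot
                for j in range(k, n):
                    A[i][j] -= f * A[k][j]
                b[i] -= f * b[k]
        # Back-substitution: solve for the unknowns in reverse order.
        x = [0.0] * n
        for i in range(n - 1, -1, -1):
            s = sum(A[i][j] * x[j] for j in range(i + 1, n))
            x[i] = (b[i] - s) / A[i][i]
        return x

    # Assumed example system: 2x + y - z = 8, -3x - y + 2z = -11, -2x + y + 2z = -3
    A = [[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]]
    b = [8.0, -11.0, -3.0]
    print(solve_by_elimination(A, b))        # -> approximately [2.0, 3.0, -1.0]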
The process of row reducing until the matrix is reduced is sometimes referred to as Gauss–Jordan elimination, to distinguish it from stopping after reaching echelon form. History The method of Gaussian elimination appears – albeit without proof – in the Chinese mathematical text Chapter Eight: Rectangular Arrays of The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 AD, but parts of it were written as early as approximately 150 BC. It was commented on by Liu Hui in the 3rd century. According to Grcar, the solution of linear equations by elimination was invented independently in several cultures across Eurasia, starting in antiquity, and in Europe definite examples of the procedure were published by the late Renaissance (in the 1550s). It is quite possible that mathematicians at the time already considered the procedure elementary and in no need of explanation for professionals, so we may never learn its detailed history beyond the fact that it was by then practiced in at least several places in Europe. The method in Europe stems from the notes of Isaac Newton. In 1670, he wrote that all the algebra books known to him lacked a lesson for solving simultaneous equations, which Newton then supplied. Cambridge University eventually published the notes as Arithmetica Universalis in 1707, long after Newton had left academic life. The notes were widely imitated, which made (what is now called) Gaussian elimination a standard lesson in algebra textbooks by the end of the 18th century. Carl Friedrich Gauss in 1810 devised a notation for symmetric elimination that was adopted in the 19th century by professional hand computers to solve the normal equations of least-squares problems. The algorithm that is taught in high school was named for Gauss only in the 1950s as a result of confusion over the history of the subject. Some authors use the term Gaussian elimination to refer only to the procedure until the matrix is in echelon form, and use the term Gauss–Jordan elimination to refer to the procedure which ends in reduced echelon form. The name is used because it is a variation of Gaussian elimination as described by Wilhelm Jordan in 1888. However, the method also appears in an article by Clasen published in the same year. Jordan and Clasen probably discovered Gauss–Jordan elimination independently. Applications Historically, the first application of the row reduction method was to solve systems of linear equations. Below are some other important applications of the algorithm. Computing determinants To explain how Gaussian elimination allows the computation of the determinant of a square matrix, we have to recall how the elementary row operations change the determinant: swapping two rows multiplies the determinant by −1; multiplying a row by a nonzero scalar multiplies the determinant by the same scalar; adding to one row a scalar multiple of another does not change the determinant. If Gaussian elimination applied to a square matrix produces a row echelon matrix , let be the product of the scalars by which the determinant has been multiplied, using the above rules.
Then the determinant of is the quotient by of the product of the elements of the diagonal of : Computationally, for an matrix, this method needs only arithmetic operations, while using Leibniz formula for determinants requires operations (number of summands in the formula times the number of multiplications in each summand), and recursive Laplace expansion requires operations if the sub-determinants are memorized for being computed only once (number of operations in a linear combination times the number of sub-determinants to compute, which are determined by their columns). Even on the fastest computers, these two methods are impractical or almost impracticable for above 20. Finding the inverse of a matrix A variant of Gaussian elimination called Gauss–Jordan elimination can be used for finding the inverse of a matrix, if it exists. If is an square matrix, then one can use row reduction to compute its inverse matrix, if it exists. First, the identity matrix is augmented to the right of , forming an block matrix . Now through application of elementary row operations, find the reduced echelon form of this matrix. The matrix is invertible if and only if the left block can be reduced to the identity matrix ; in this case the right block of the final matrix is . If the algorithm is unable to reduce the left block to , then is not invertible. For example, consider the following matrix: To find the inverse of this matrix, one takes the following matrix augmented by the identity and row-reduces it as a 3 × 6 matrix: By performing row operations, one can check that the reduced row echelon form of this augmented matrix is One can think of each row operation as the left product by an elementary matrix. Denoting by the product of these elementary matrices, we showed, on the left, that , and therefore, . On the right, we kept a record of , which we know is the inverse desired. This procedure for finding the inverse works for square matrices of any size. Computing ranks and bases The Gaussian elimination algorithm can be applied to any matrix . In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a row echelon form like where the stars are arbitrary entries, and are nonzero entries. This echelon matrix contains a wealth of information about : the rank of is 5, since there are 5 nonzero rows in ; the vector space spanned by the columns of has a basis consisting of its columns 1, 3, 4, 7 and 9 (the columns with in ), and the stars show how the other columns of can be written as linear combinations of the basis columns. All of this applies also to the reduced row echelon form, which is a particular row echelon format. Computational efficiency The number of arithmetic operations required to perform row reduction is one way of measuring the algorithm's computational efficiency. For example, to solve a system of equations for unknowns by performing row operations on the matrix until it is in echelon form, and then solving for each unknown in reverse order, requires divisions, multiplications, and subtractions, for a total of approximately operations. Thus it has a arithmetic complexity (time complexity, where each arithmetic operation take a unit of time, independently of the size of the inputs) of . This complexity is a good measure of the time needed for the whole computation when the time for each arithmetic operation is approximately constant. This is the case when the coefficients are represented by floating-point numbers or when they belong to a finite field. 
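Returning to the determinant rule described at the start of this section, the following sketch (an illustration, not a reference implementation) reduces a square matrix to row echelon form using only row swaps and row additions, so the accumulated factor d is just a sign, and then divides the product of the diagonal entries by d as stated above.

    def determinant(M):
        """Determinant via row reduction, tracking the factor d from row swaps."""
        A = [row[:] for row in M]
        n = len(A)
        d = 1.0                       # product of scalars applied to the determinant
        for k in range(n):
            # Partial pivoting: choose the largest entry in the column.
            p = max(range(k, n), key=lambda i: abs(A[i][k]))
            if A[p][k] == 0:
                return 0.0            # no pivot: the determinant is zero
            if p != k:
                A[k], A[p] = A[p], A[k]
                d = -d                # a row swap multiplies the determinant by -1
            for i in range(k + 1, n):
                f = A[i][k] / A[k][k]
                for j in range(k, n):
                    A[i][j] -= f * A[k][j]   # row addition leaves the determinant unchanged
        prod = 1.0
        for i in range(n):
            prod *= A[i][i]
        return prod / d               # determinant = (product of diagonal) / d

    print(determinant([[1.0, 2.0], [3.0, 4.0]]))   # -> -2.0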
If the coefficients are integers or rational numbers exactly represented, the intermediate entries can grow exponentially large, so the bit complexity is exponential. However, Bareiss' algorithm is a variant of Gaussian elimination that avoids this exponential growth of the intermediate entries; with the same arithmetic complexity of , it has a bit complexity of , and has therefore a strongly-polynomial time complexity. Gaussian elimination and its variants can be used on computers for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see system of linear equations). Bareiss algorithm The first strongly-polynomial time algorithm for Gaussian elimination was published by Jack Edmonds in 1967. Independently, and almost simultaneously, Erwin Bareiss discovered another algorithm, based on the following remark, which applies to a division-free variant of Gaussian elimination. In standard Gaussian elimination, one subtracts from each row below the pivot row a multiple of by where and are the entries in the pivot column of and respectively. Bareiss variant consists, instead, of replacing with This produces a row echelon form that has the same zero entries as with the standard Gaussian elimination. Bareiss' main remark is that each matrix entry generated by this variant is the determinant of a submatrix of the original matrix. In particular, if one starts with integer entries, the divisions occurring in the algorithm are exact divisions resulting in integers. So, all intermediate entries and final entries are integers. Moreover, Hadamard inequality provides an upper bound on the absolute values of the intermediate and final entries, and thus a bit complexity of using soft O notation. Moreover, as an upper bound on the size of final entries is known, a complexity can be obtained with modular computation followed either by Chinese remaindering or Hensel lifting. As a corollary, the following problems can be solved in strongly polynomial time with the same bit complexity: Testing whether m given rational vectors are linearly independent Computing the determinant of a rational matrix Computing a solution of a rational equation system Ax = b Computing the inverse matrix of a nonsingular rational matrix Computing the rank of a rational matrix Numeric instability One possible problem is numerical instability, caused by the possibility of dividing by very small numbers. If, for example, the leading coefficient of one of the rows is very close to zero, then to row-reduce the matrix, one would need to divide by that number. This means that any error which existed for the number that was close to zero would be amplified. Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable, when using partial pivoting, even though there are examples of stable matrices for which it is unstable. Generalizations Gaussian elimination can be performed over any field, not just the real numbers. Buchberger's algorithm is a generalization of Gaussian elimination to systems of polynomial equations. This generalization depends heavily on the notion of a monomial order. 
The choice of an ordering on the variables is already implicit in Gaussian elimination, manifesting as the choice to work from left to right when selecting pivot positions. Computing the rank of a tensor of order greater than 2 is NP-hard. Therefore, if , there cannot be a polynomial time analog of Gaussian elimination for higher-order tensors (matrices are array representations of order-2 tensors). Pseudocode As explained above, Gaussian elimination transforms a given matrix into a matrix in row-echelon form. In the following pseudocode, A[i, j] denotes the entry of the matrix A in row i and column j, with the indices starting from 1. The transformation is performed in place, meaning that the original matrix is lost, being eventually replaced by its row-echelon form.

    h := 1 /* Initialization of the pivot row */
    k := 1 /* Initialization of the pivot column */
    while h ≤ m and k ≤ n
        /* Find the k-th pivot: */
        i_max := argmax (i = h ... m, abs(A[i, k]))
        if A[i_max, k] = 0
            /* No pivot in this column, pass to next column */
            k := k + 1
        else
            swap rows(h, i_max)
            /* Do for all rows below pivot: */
            for i = h + 1 ... m:
                f := A[i, k] / A[h, k]
                /* Fill with zeros the lower part of pivot column: */
                A[i, k] := 0
                /* Do for all remaining elements in current row: */
                for j = k + 1 ... n:
                    A[i, j] := A[i, j] - A[h, j] * f
            /* Increase pivot row and column */
            h := h + 1
            k := k + 1

This algorithm differs slightly from the one discussed earlier, by choosing a pivot with the largest absolute value. Such partial pivoting may be required if, at the pivot place, the entry of the matrix is zero. In any case, choosing the largest possible absolute value of the pivot improves the numerical stability of the algorithm when floating point is used for representing numbers. Upon completion of this procedure the matrix will be in row echelon form and the corresponding system may be solved by back substitution.
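For comparison, a minimal Python rendering of the pseudocode above (using 0-based indices instead of the pseudocode's 1-based ones); this is a sketch rather than a library-quality routine:

    def row_echelon(A):
        """Reduce an m x n matrix (list of lists of floats) to row echelon form, in place."""
        m, n = len(A), len(A[0])
        h, k = 0, 0                   # pivot row and pivot column
        while h < m and k < n:
            # Find the k-th pivot: the row with the largest |A[i][k]|.
            i_max = max(range(h, m), key=lambda i: abs(A[i][k]))
            if A[i_max][k] == 0:
                k += 1                # no pivot in this column, pass to the next column
            else:
                A[h], A[i_max] = A[i_max], A[h]
                for i in range(h + 1, m):
                    f = A[i][k] / A[h][k]
                    A[i][k] = 0.0     # fill the lower part of the pivot column with zeros
                    for j in range(k + 1, n):
                        A[i][j] -= A[h][j] * f
                h += 1
                k += 1
        return A

The fraction-free Bareiss variant described earlier can be sketched in the same style. The version below assumes an integer matrix whose leading principal minors are nonzero, so no pivoting is needed; every division is then exact, intermediate entries stay integers, and the final bottom-right entry is the determinant of the original matrix.

    def bareiss(M):
        """Fraction-free elimination (Bareiss); assumes nonzero leading principal minors."""
        A = [row[:] for row in M]
        n = len(A)
        prev = 1                      # pivot from the previous step
        for k in range(n - 1):
            for i in range(k + 1, n):
                for j in range(k + 1, n):
                    A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev  # exact division
                A[i][k] = 0
            prev = A[k][k]
        return A                      # A[n-1][n-1] equals det(M)

    print(bareiss([[2, 1, -1], [-3, -1, 2], [-2, 1, 2]]))   # last diagonal entry: det = -1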
Gypsum
Gypsum is a soft sulfate mineral composed of calcium sulfate dihydrate, with the chemical formula . It is widely mined and is used as a fertilizer and as the main constituent in many forms of plaster, drywall and blackboard or sidewalk chalk. Gypsum also crystallizes as translucent crystals of selenite. It forms as an evaporite mineral and as a hydration product of anhydrite. The Mohs scale of mineral hardness defines gypsum as hardness value 2 based on scratch hardness comparison. Fine-grained white or lightly tinted forms of gypsum known as alabaster have been used for sculpture by many cultures including Ancient Egypt, Mesopotamia, Ancient Rome, the Byzantine Empire, and the Nottingham alabasters of Medieval England. Etymology and history The word gypsum is derived from the Greek word (), "plaster". Because the quarries of the Montmartre district of Paris have long furnished burnt gypsum (calcined gypsum) used for various purposes, this dehydrated gypsum became known as plaster of Paris. Upon adding water, after a few dozen minutes, plaster of Paris becomes regular gypsum (dihydrate) again, causing the material to harden or "set" in ways that are useful for casting and construction. Gypsum was known in Old English as , "spear stone", referring to its crystalline projections. Thus, the word spar in mineralogy, by comparison to gypsum, refers to any non-ore mineral or crystal that forms in spearlike projections. In the mid-18th century, the German clergyman and agriculturalist Johann Friderich Mayer investigated and publicized gypsum's use as a fertilizer. Gypsum may act as a source of sulfur for plant growth, and in the early 19th century, it was regarded as an almost miraculous fertilizer. American farmers were so anxious to acquire it that a lively smuggling trade with Nova Scotia evolved, resulting in the so-called "Plaster War" of 1820. Physical properties Gypsum is moderately water-soluble (~2.0–2.5 g/L at 25 °C) and, in contrast to most other salts, it exhibits retrograde solubility, becoming less soluble at higher temperatures. When gypsum is heated in air it loses water and converts first to calcium sulfate hemihydrate (bassanite, often simply called "plaster") and, if heated further, to anhydrous calcium sulfate (anhydrite). As with anhydrite, the solubility of gypsum in saline solutions and in brines is also strongly dependent on sodium chloride (common table salt) concentration. The structure of gypsum consists of layers of calcium (Ca2+) and sulfate () ions tightly bound together. These layers are bonded by sheets of anion water molecules via weaker hydrogen bonding, which gives the crystal perfect cleavage along the sheets (in the {010} plane). Crystal varieties Gypsum occurs in nature as flattened and often twinned crystals, and transparent, cleavable masses called selenite. Selenite contains no significant selenium; rather, both substances were named for the ancient Greek word for the Moon. Selenite may also occur in a silky, fibrous form, in which case it is commonly called "satin spar". Finally, it may also be granular or quite compact. In hand-sized samples, it can be anywhere from transparent to opaque. A very fine-grained white or lightly tinted variety of gypsum, called alabaster, is prized for ornamental work of various sorts. In arid areas, gypsum can occur in a flower-like form, typically opaque, with embedded sand grains called desert rose. It also forms some of the largest crystals found in nature, up to long, in the form of selenite. 
Occurrence Gypsum is a common mineral, with thick and extensive evaporite beds in association with sedimentary rocks. Deposits are known to occur in strata from as far back as the Archaean eon. Gypsum is deposited from lake and sea water, as well as in hot springs, from volcanic vapors, and from sulfate solutions in veins. Hydrothermal anhydrite in veins is commonly hydrated to gypsum by groundwater in near-surface exposures. It is often associated with the minerals halite and sulfur. Gypsum is the most common sulfate mineral. Pure gypsum is white, but other substances found as impurities may give a wide range of colors to local deposits. Because gypsum dissolves over time in water, gypsum is rarely found in the form of sand. However, the unique conditions of the White Sands National Park in the US state of New Mexico have created a expanse of white gypsum sand, enough to supply the US construction industry with drywall for 1,000 years. Commercial exploitation of the area, strongly opposed by area residents, was permanently prevented in 1933 when President Herbert Hoover declared the gypsum dunes a protected national monument. Gypsum is also formed as a by-product of sulfide oxidation (for example, pyrite oxidation), when the sulfuric acid generated reacts with calcium carbonate. Its presence indicates oxidizing conditions. Under reducing conditions, the sulfates it contains can be reduced back to sulfide by sulfate-reducing bacteria. This can lead to accumulation of elemental sulfur in oil-bearing formations, such as salt domes, where it can be mined using the Frasch process. Electric power stations burning coal with flue gas desulfurization produce large quantities of gypsum as a byproduct from the scrubbers. Orbital pictures from the Mars Reconnaissance Orbiter (MRO) have indicated the existence of gypsum dunes in the northern polar region of Mars, which were later confirmed at ground level by the Mars Exploration Rover (MER) Opportunity. Mining Commercial quantities of gypsum are found in the cities of Araripina and Grajaú in Brazil; in Pakistan, Jamaica, Iran (the world's second largest producer), Thailand, Spain (the main producer in Europe), Germany, Italy, England, Ireland, Canada and the United States. Large open pit quarries are located in many places including Fort Dodge, Iowa, which sits on one of the largest deposits of gypsum in the world, and Plaster City, California, United States, and East Kutai, Kalimantan, Indonesia. Several small mines also exist in places such as Kalannie in Western Australia, where gypsum is sold to private buyers for additions of calcium and sulfur as well as reduction of aluminum toxicities in soil for agricultural purposes. Crystals of gypsum up to long have been found in the caves of the Naica Mine of Chihuahua, Mexico. The crystals thrived in the cave's extremely rare and stable natural environment. Temperatures stayed at , and the cave was filled with mineral-rich water that drove the crystals' growth. The largest of those crystals weighs and is around 500,000 years old.
This product is pure enough to replace natural gypsum in a wide variety of fields including drywalls, water treatment, and cement set retarder. Improvements in flue gas desulfurization have greatly reduced the amount of toxic elements present. Desalination Gypsum precipitates onto brackish water membranes, a phenomenon known as mineral salt scaling, such as during brackish water desalination of water with high concentrations of calcium and sulfate. Scaling decreases membrane life and productivity. This is one of the main obstacles in brackish water membrane desalination processes, such as reverse osmosis or nanofiltration. Other forms of scaling, such as calcite scaling, depending on the water source, can also be important considerations in distillation, as well as in heat exchangers, where either the salt solubility or concentration can change rapidly. A new study has suggested that the formation of gypsum starts as tiny crystals of a mineral called bassanite (2CaSO4·H2O). This process occurs via a three-stage pathway: homogeneous nucleation of nanocrystalline bassanite; self-assembly of bassanite into aggregates, and transformation of bassanite into gypsum. Refinery waste The production of phosphate fertilizers requires breaking down calcium-containing phosphate rock with acid, producing calcium sulfate waste known as phosphogypsum (PG). This form of gypsum is contaminated by impurities found in the rock, namely fluoride, silica, radioactive elements such as radium, and heavy metal elements such as cadmium. Similarly, production of titanium dioxide produces titanium gypsum (TG) due to neutralization of excess acid with lime. The product is contaminated with silica, fluorides, organic matters, and alkalis. Impurities in refinery gypsum waste have, in many cases, prevented them from being used as normal gypsum in fields such as construction. As a result, waste gypsum is stored in stacks indefinitely, with significant risk of leaching their contaminants into water and soil. To reduce the accumulation and ultimately clear out these stacks, research is underway to find more applications for such waste products. Occupational safety People can be exposed to gypsum in the workplace by breathing it in, skin contact, and eye contact. Calcium sulfate per se is nontoxic and is even approved as a food additive, but as powdered gypsum, it can irritate skin and mucous membranes. United States The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for gypsum exposure in the workplace as TWA 15 mg/m3 for total exposure and TWA 5 mg/m3 for respiratory exposure over an eight-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of TWA 10 mg/m3 for total exposure and TWA 5 mg/m3 for respiratory exposure over an eight-hour workday. Uses Gypsum is used in a wide variety of applications: Construction industry Gypsum board is primarily used as a finish for walls and ceilings, and is known in construction as plasterboard, "sheetrock", or drywall. Gypsum provides a degree of fire-resistance to these materials, and glass fibers are added to their composition to accentuate this effect. Gypsum has negligible heat conductivity, giving its plaster some insulative properties. Gypsum blocks are used like concrete blocks in construction. Gypsum mortar is an ancient mortar used in construction. A component of Portland cement used to prevent flash setting (too rapid hardening) of concrete. 
A wood substitute in the ancient world: For example, when wood became scarce due to deforestation on Bronze Age Crete, gypsum was employed in building construction at locations where wood was previously used. Agriculture Fertilizer: In the late 18th and early 19th centuries, Nova Scotia gypsum, often referred to as plaster, was a highly sought fertilizer for wheat fields in the United States. Gypsum provides two of the secondary plant macronutrients, calcium and sulfur. Unlike limestone, it generally does not affect soil pH. Reclamation of saline soils, regardless of pH. When gypsum is added to sodic (saline) and acidic soil, the highly soluble form of boron (sodium metaborate) is converted to the less soluble calcium metaborate. The exchangeable sodium percentage is also reduced by gypsum application. The Zuiderzee Works uses gypsum for the recovered land. Other soil conditioner uses: Gypsum reduces aluminium and boron toxicity in acidic soils. It also improves soil structure, water absorption, and aeration. Soil water potential monitoring: a gypsum block can be inserted into the soil, and its electrical resistance can be measured to derive soil moisture. Modeling, sculpture and art Plaster for casting moulds and modeling. As alabaster, a material for sculpture, it was used especially in the ancient world before steel was developed, when its relative softness made it much easier to carve. During the Middle Ages and Renaissance, it was preferred even to marble. In the medieval period, scribes and illuminators used it as an ingredient in gesso, which was applied to illuminated letters and gilded with gold in illuminated manuscripts. Food and drink A tofu (soy bean curd) coagulant, making it ultimately a significant source of dietary calcium. Adding hardness to water used for brewing. Used in baking as a dough conditioner, reducing stickiness, and as a baked goods source of dietary calcium. The primary component of mineral yeast food. Used in mushroom cultivation to stop grains from clumping together. Medicine and cosmetics Plaster for surgical splints. Impression plasters in dentistry. Other An alternative to iron oxide in some thermite mixes. Tests have shown that gypsum can be used to remove pollutants such as lead or arsenic from contaminated waters. Gallery
Geometric mean
In mathematics, the geometric mean is a mean or average which indicates a central tendency of a finite collection of positive real numbers by using the product of their values (as opposed to the arithmetic mean which uses their sum). The geometric mean of numbers is the th root of their product, i.e., for a collection of numbers , the geometric mean is defined as When the collection of numbers and their geometric mean are plotted in logarithmic scale, the geometric mean is transformed into an arithmetic mean, so the geometric mean can equivalently be calculated by taking the natural logarithm of each number, finding the arithmetic mean of the logarithms, and then returning the result to linear scale using the exponential function , The geometric mean of two numbers is the square root of their product, for example with numbers and the geometric mean is The geometric mean of the three numbers is the cube root of their product, for example with numbers , , and , the geometric mean is The geometric mean is useful whenever the quantities to be averaged combine multiplicatively, such as population growth rates or interest rates of a financial investment. Suppose for example a person invests $1000 and achieves annual returns of +10%, −12%, +90%, −30% and +25%, giving a final value of $1609. The average percentage growth is the geometric mean of the annual growth ratios (1.10, 0.88, 1.90, 0.70, 1.25), namely 1.0998, an annual average growth of 9.98%. The arithmetic mean of these annual returns – 16.6% per annum – is not a meaningful average because growth rates do not combine additively. The geometric mean can be understood in terms of geometry. The geometric mean of two numbers, and , is the length of one side of a square whose area is equal to the area of a rectangle with sides of lengths and . Similarly, the geometric mean of three numbers, , , and , is the length of one edge of a cube whose volume is the same as that of a cuboid with sides whose lengths are equal to the three given numbers. The geometric mean is one of the three classical Pythagorean means, together with the arithmetic mean and the harmonic mean. For all positive data sets containing at least one pair of unequal values, the harmonic mean is always the least of the three means, while the arithmetic mean is always the greatest of the three and the geometric mean is always in between (see Inequality of arithmetic and geometric means.) Formulation The geometric mean of a data set is given by: That is, the nth root of the product of the elements. For example, for , the product is , and the geometric mean is the fourth root of 24, approximately 2.213. Formulation using logarithms The geometric mean can also be expressed as the exponential of the arithmetic mean of logarithms. By using logarithmic identities to transform the formula, the multiplications can be expressed as a sum and the power as a multiplication: When since This is sometimes called the log-average (not to be confused with the logarithmic average). It is simply the arithmetic mean of the logarithm-transformed values of (i.e., the arithmetic mean on the log scale), using the exponentiation to return to the original scale, i.e., it is the generalised f-mean with . A logarithm of any base can be used in place of the natural logarithm. 
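As a concrete illustration of the two formulations above, the short sketch below computes the geometric mean both as the n-th root of the product and as the exponential of the mean logarithm. The data set {1, 2, 3, 4} is an assumed reconstruction of the worked example (its product is 24 and its fourth root is about 2.213).

    import math
    import statistics

    def gmean_product(xs):
        return math.prod(xs) ** (1.0 / len(xs))                  # n-th root of the product

    def gmean_logs(xs):
        return math.exp(sum(math.log(x) for x in xs) / len(xs))  # exp of the mean log

    data = [1, 2, 3, 4]                      # assumed reconstruction of the example
    print(gmean_product(data))               # -> 2.2133638...
    print(gmean_logs(data))                  # -> the same value
    print(statistics.geometric_mean(data))   # standard-library equivalent (Python 3.8+)

The logarithmic form is also the one to prefer when the raw product would overflow or underflow a floating-point number, a point taken up again below.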
For example, the geometric mean of , , , and can be calculated using logarithms base 2: Related to the above, it can be seen that for a given sample of points , the geometric mean is the minimizer of , whereas the arithmetic mean is the minimizer of . Thus, the geometric mean provides a summary of the samples whose exponent best matches the exponents of the samples (in the least squares sense). In computer implementations, naïvely multiplying many numbers together can cause arithmetic overflow or underflow. Calculating the geometric mean using logarithms is one way to avoid this problem. Related concepts Iterative means The geometric mean of a data set is less than the data set's arithmetic mean unless all members of the data set are equal, in which case the geometric and arithmetic means are equal. This allows the definition of the arithmetic-geometric mean, an intersection of the two which always lies in between. The geometric mean is also the arithmetic-harmonic mean in the sense that if two sequences () and () are defined: and where is the harmonic mean of the previous values of the two sequences, then and will converge to the geometric mean of and . The sequences converge to a common limit, and the geometric mean is preserved: Replacing the arithmetic and harmonic mean by a pair of generalized means of opposite, finite exponents yields the same result. Comparison to arithmetic mean The geometric mean of a non-empty data set of positive numbers is always at most their arithmetic mean. Equality is only obtained when all numbers in the data set are equal; otherwise, the geometric mean is smaller. For example, the geometric mean of 2 and 3 is 2.45, while their arithmetic mean is 2.5. In particular, this means that when a set of non-identical numbers is subjected to a mean-preserving spread — that is, the elements of the set are "spread apart" more from each other while leaving the arithmetic mean unchanged — their geometric mean decreases. Geometric mean of a continuous function If is a positive continuous real-valued function, its geometric mean over this interval is For instance, taking the identity function over the unit interval shows that the geometric mean of the positive numbers between 0 and 1 is equal to . Applications Average proportional growth rate The geometric mean is more appropriate than the arithmetic mean for describing proportional growth, both exponential growth (constant proportional growth) and varying growth; in business the geometric mean of growth rates is known as the compound annual growth rate (CAGR). The geometric mean of growth over periods yields the equivalent constant growth rate that would yield the same final amount. As an example, suppose an orange tree yields 100 oranges one year and then 180, 210 and 300 the following years, for growth rates of 80%, 16.7% and 42.9% respectively. Using the arithmetic mean calculates a (linear) average growth of 46.5% (calculated by ). However, when applied to the 100 orange starting yield, 46.5% annual growth results in 314 oranges after three years of growth, rather than the observed 300. The linear average overstates the rate of growth. Instead, using the geometric mean, the average yearly growth is approximately 44.2% (calculated by ). Starting from a 100 orange yield with annual growth of 44.2% gives the expected 300 orange yield after three years. In order to determine the average growth rate, it is not necessary to take the product of the measured growth rates at every step. 
Let the quantity be given as the sequence , where is the number of steps from the initial to final state. The growth rate between successive measurements and is . The geometric mean of these growth rates is then just: Normalized values The fundamental property of the geometric mean, which does not hold for any other mean, is that for two sequences and of equal length, . This makes the geometric mean the only correct mean when averaging normalized results; that is, results that are presented as ratios to reference values. This is the case when presenting computer performance with respect to a reference computer, or when computing a single average index from several heterogeneous sources (for example, life expectancy, education years, and infant mortality). In this scenario, using the arithmetic or harmonic mean would change the ranking of the results depending on what is used as a reference. For example, take the following comparison of execution time of computer programs: Table 1 The arithmetic and geometric means "agree" that computer C is the fastest. However, by presenting appropriately normalized values and using the arithmetic mean, we can show either of the other two computers to be the fastest. Normalizing by A's result gives A as the fastest computer according to the arithmetic mean: Table 2 while normalizing by B's result gives B as the fastest computer according to the arithmetic mean but A as the fastest according to the harmonic mean: Table 3 and normalizing by C's result gives C as the fastest computer according to the arithmetic mean but A as the fastest according to the harmonic mean: Table 4 In all cases, the ranking given by the geometric mean stays the same as the one obtained with unnormalized values. However, this reasoning has been questioned. Giving consistent results is not always equal to giving the correct results. In general, it is more rigorous to assign weights to each of the programs, calculate the average weighted execution time (using the arithmetic mean), and then normalize that result to one of the computers. The three tables above just give a different weight to each of the programs, explaining the inconsistent results of the arithmetic and harmonic means (Table 4 gives equal weight to both programs, the Table 2 gives a weight of 1/1000 to the second program, and the Table 3 gives a weight of 1/100 to the second program and 1/10 to the first one). The use of the geometric mean for aggregating performance numbers should be avoided if possible, because multiplying execution times has no physical meaning, in contrast to adding times as in the arithmetic mean. Metrics that are inversely proportional to time (speedup, IPC) should be averaged using the harmonic mean. The geometric mean can be derived from the generalized mean as its limit as goes to zero. Similarly, this is possible for the weighted geometric mean. Financial The geometric mean has from time to time been used to calculate financial indices (the averaging is over the components of the index). For example, in the past the FT 30 index used a geometric mean. It is also used in the CPI calculation and recently introduced "RPIJ" measure of inflation in the United Kingdom and in the European Union. This has the effect of understating movements in the index compared to using the arithmetic mean. 
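Both the growth-rate property and the normalization property described above are easy to check numerically. In the sketch below, the orange-tree yields (100, 180, 210, 300) are taken from the text; the execution times in the second part are made-up stand-ins, since the contents of the article's comparison tables are not reproduced here.

    import math

    def gmean(xs):
        return math.exp(sum(math.log(x) for x in xs) / len(xs))

    # Average proportional growth: only the first and last values matter.
    yields = [100, 180, 210, 300]
    ratios = [b / a for a, b in zip(yields, yields[1:])]
    print(gmean(ratios))                         # -> about 1.442, i.e. 44.2% per year
    print((yields[-1] / yields[0]) ** (1 / 3))   # -> the same value, from endpoints only

    # Normalized results: the ranking by geometric mean does not depend on the reference.
    times = {"A": [2.0, 800.0], "B": [10.0, 100.0], "C": [20.0, 20.0]}   # made-up numbers
    for ref, ref_times in times.items():
        norm = {m: gmean([t / r for t, r in zip(ts, ref_times)]) for m, ts in times.items()}
        print(ref, sorted(norm, key=norm.get))   # -> the same order for every reference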
Applications in the social sciences Although the geometric mean has been relatively rare in computing social statistics, starting from 2010 the United Nations Human Development Index did switch to this mode of calculation, on the grounds that it better reflected the non-substitutable nature of the statistics being compiled and compared: The geometric mean decreases the level of substitutability between dimensions [being compared] and at the same time ensures that a 1 percent decline in say life expectancy at birth has the same impact on the HDI as a 1 percent decline in education or income. Thus, as a basis for comparisons of achievements, this method is also more respectful of the intrinsic differences across the dimensions than a simple average. Not all values used to compute the HDI (Human Development Index) are normalized; some of them instead have the form . This makes the choice of the geometric mean less obvious than one would expect from the "Properties" section above. The equally distributed welfare equivalent income associated with an Atkinson Index with an inequality aversion parameter of 1.0 is simply the geometric mean of incomes. For values other than one, the equivalent value is an Lp norm divided by the number of elements, with p equal to one minus the inequality aversion parameter. Geometry In the case of a right triangle, its altitude is the length of a line extending perpendicularly from the hypotenuse to its 90° vertex. Imagining that this line splits the hypotenuse into two segments, the geometric mean of these segment lengths is the length of the altitude. This property is known as the geometric mean theorem. In an ellipse, the semi-minor axis is the geometric mean of the maximum and minimum distances of the ellipse from a focus; it is also the geometric mean of the semi-major axis and the semi-latus rectum. The semi-major axis of an ellipse is the geometric mean of the distance from the center to either focus and the distance from the center to either directrix. Another way to think about it is as follows: Consider a circle with radius . Now take two diametrically opposite points on the circle and apply pressure from both ends to deform it into an ellipse with semi-major and semi-minor axes of lengths and . Since the area of the circle and the ellipse stays the same, we have: The radius of the circle is the geometric mean of the semi-major and the semi-minor axes of the ellipse formed by deforming the circle. Distance to the horizon of a sphere (ignoring the effect of atmospheric refraction when atmosphere is present) is equal to the geometric mean of the distance to the closest point of the sphere and the distance to the farthest point of the sphere. The geometric mean is used in both in the approximation of squaring the circle by S.A. Ramanujan and in the construction of the heptadecagon with "mean proportionals". Aspect ratios The geometric mean has been used in choosing a compromise aspect ratio in film and video: given two aspect ratios, the geometric mean of them provides a compromise between them, distorting or cropping both in some sense equally. Concretely, two equal area rectangles (with the same center and parallel sides) of different aspect ratios intersect in a rectangle whose aspect ratio is the geometric mean, and their hull (smallest rectangle which contains both of them) likewise has the aspect ratio of their geometric mean. In the choice of 16:9 aspect ratio by the SMPTE, balancing 2.35 and 4:3, the geometric mean is , and thus ... was chosen. 
This was discovered empirically by Kerns Powers, who cut out rectangles with equal areas and shaped them to match each of the popular aspect ratios. When overlapped with their center points aligned, he found that all of those aspect ratio rectangles fit within an outer rectangle with an aspect ratio of 1.77:1 and all of them also covered a smaller common inner rectangle with the same aspect ratio 1.77:1. The value found by Powers is exactly the geometric mean of the extreme aspect ratios, 4:3(1.33:1) and CinemaScope(2.35:1), which is coincidentally close to (). The intermediate ratios have no effect on the result, only the two extreme ratios. Applying the same geometric mean technique to 16:9 and 4:3 approximately yields the 14:9 (...) aspect ratio, which is likewise used as a compromise between these ratios. In this case 14:9 is exactly the arithmetic mean of and , since 14 is the average of 16 and 12, while the precise geometric mean is but the two different means, arithmetic and geometric, are approximately equal because both numbers are sufficiently close to each other (a difference of less than 2%). Paper formats The geometric mean is also used to calculate B and C series paper formats. The format has an area which is the geometric mean of the areas of and . For example, the area of a B1 paper is , because it is the geometric mean of the areas of an A0 () and an A1 () paper The same principle applies with the C series, whose area is the geometric mean of the A and B series. For example, the C4 format has an area which is the geometric mean of the areas of A4 and B4. An advantage that comes from this relationship is that an A4 paper fits inside a C4 envelope, and both fit inside a B4 envelope. Other applications Spectral flatness: in signal processing, spectral flatness, a measure of how flat or spiky a spectrum is, is defined as the ratio of the geometric mean of the power spectrum to its arithmetic mean. Anti-reflective coatings: In optical coatings, where reflection needs to be minimised between two media of refractive indices n0 and n2, the optimum refractive index n1 of the anti-reflective coating is given by the geometric mean: . Subtractive color mixing: The spectral reflectance curve for paint mixtures (of equal tinting strength, opacity and dilution) is approximately the geometric mean of the paints' individual reflectance curves computed at each wavelength of their spectra. Image processing: The geometric mean filter is used as a noise filter in image processing. Labor compensation: The geometric mean of a subsistence wage and market value of the labor using capital of employer was suggested as the natural wage by Johann von Thünen in 1875.
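The aspect-ratio and paper-format compromises described above come down to one-line computations; a quick numerical check, using the figures quoted in the text:

    import math

    print(math.sqrt((4 / 3) * 2.35))      # -> about 1.770, the 16:9-style SMPTE compromise
    print(math.sqrt((16 / 9) * (4 / 3)))  # -> about 1.539, close to 14:9 (about 1.556)

    # B-series paper: each area is the geometric mean of two consecutive A-series areas.
    a0_area, a1_area = 1.0, 0.5           # A0 and A1 areas in square metres
    print(math.sqrt(a0_area * a1_area))   # -> about 0.707 m^2, the B1 area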
Gatling gun
The Gatling gun is a rapid-firing multiple-barrel firearm invented in 1861 by Richard Jordan Gatling. It is an early machine gun and a forerunner of the modern electric motor-driven rotary cannon. The Gatling gun's operation centered on a cyclic multi-barrel design which facilitated cooling and synchronized the firing-reloading sequence. As the handwheel is cranked, the barrels rotate, and each barrel sequentially loads a single cartridge from a top-mounted magazine, fires off the shot when it reaches a set position (usually at 4 o'clock), then ejects the spent casing out of the left side at the bottom, after which the barrel is empty and allowed to cool until rotated back to the top position and gravity-fed another new round. This configuration eliminated the need for a single reciprocating bolt design and allowed higher rates of fire to be achieved without the barrels overheating quickly. One of the best-known early rapid-fire firearms, the Gatling gun saw occasional use by the U.S. forces during the American Civil War, which was the first time it was employed in combat. It was later used in numerous military conflicts, including the Boshin War, the Anglo-Zulu War, and the assault on San Juan Hill during the Spanish–American War. It was also used by the Pennsylvania militia in episodes of the Great Railroad Strike of 1877, specifically in Pittsburgh. Gatling guns were also mounted aboard ships. Design The Gatling gun is operated by a hand-crank mechanism, with six barrels revolving around a central shaft (although some models had as many as ten). Each barrel fires once per revolution at about the same position. The barrels, a carrier, and a lock cylinder were separate and all mounted on a solid plate revolving around a central shaft, mounted on an oblong fixed frame. Turning the crank rotated the shaft. The carrier was grooved and the lock cylinder was drilled with holes corresponding to the barrels. The casing was partitioned, and through this opening, the barrel shaft was journaled. In front of the casing was a cam with spiral surfaces. The cam imparted a reciprocating motion to the locks when the gun rotated. Also in the casing was a cocking ring with projections to cock and fire the gun. Each barrel had a single lock, working in the lock cylinder on a line with the barrel. The lock cylinder was encased and joined to the frame. Early models had a fibrous matting stuffed in among the barrels, which could be soaked with water to cool the barrels down. Later models eliminated the matting jacketing as being unnecessary. Cartridges, held in a hopper, dropped individually into the grooves of the carrier. The lock was simultaneously forced by the cam to move forward and load the cartridge, and when the cam was at its highest point, the cocking ring freed the lock and fired the cartridge. After the cartridge was fired the continuing action of the cam drew back the lock bringing with it the spent casing which then dropped to the ground. The grouped barrel concept had been explored by inventors since the 18th century, but poor engineering and the lack of a unitary cartridge made previous designs unsuccessful. The initial Gatling gun design used self-contained, reloadable steel cylinders with a chamber holding a ball and black-powder charge, and a percussion cap on one end. As the barrels rotated, these steel cylinders dropped into place, were fired, and were then ejected from the gun. 
The innovative features of the Gatling gun were its independent firing mechanism for each barrel and the simultaneous action of the locks, barrels, carrier, and breech. The ammunition that Gatling eventually implemented was a paper cartridge charged with black powder and primed with a percussion cap because self-contained brass cartridges were not yet fully developed and available. The shells were gravity-fed into the breech through a hopper or simple box "magazine" with an unsprung gravity follower on top of the gun. Each barrel had its own firing mechanism. Despite self-contained brass cartridges replacing the paper cartridge in the 1860s, it wasn't until the Model 1881 that Gatling switched to the 'Bruce'-style feed system (U.S. Patents 247,158 and 343,532) that accepted two rows of .45-70 cartridges. While one row was being fed into the gun, the other could be reloaded, thus allowing sustained fire. The final gun required four operators. By 1886, the gun was capable of firing more than 400 rounds per minute. The smallest-caliber gun also had a Broadwell drum feed in place of the curved box of the other guns. The drum, named after L. W. Broadwell, an agent for Gatling's company, comprised twenty stacks of rounds arranged around a central axis, like the spokes of a wheel, each holding twenty cartridges with the bullet noses oriented toward the central axis. This invention was patented in U. S. 110,338. As each stack emptied, the drum was manually rotated to bring a new stack into use until all 400 rounds had been fired. A more common variant had 240 rounds in twenty stands of fifteen. By 1893, the Gatling was adapted to take the new .30 Army smokeless cartridge. The new M1893 guns featured six barrels, later increased to ten barrels, and were capable of a maximum (initial) rate of fire of 800–900 rounds per minute, though 600 rpm was recommended for continuous fire. Dr. Gatling later used examples of the M1893 powered by electric motor and belt to drive the crank. Tests demonstrated the electric Gatling could fire bursts of up to 1,500 rpm. The M1893, with minor revisions, became the M1895, and 94 guns were produced for the U.S. Army by Colt. Four M1895 Gatlings under Lt. John H. Parker saw considerable combat during the Santiago campaign in Cuba in 1898. The M1895 was designed to accept only the Bruce feeder. All previous models were unpainted, but the M1895 was painted olive drab green, with some parts left blued. The Model 1900 was very similar to the model 1895, but with only a few components finished in O.D. green. The U.S. Army purchased several M1900s. All Gatling Models 1895–1903 could be mounted on an armored field carriage. In 1903, the Army converted its M1900 guns into .30 Army to fit the new .30-03 cartridge (standardized for the M1903 Springfield rifle) as the M1903. The later M1903-'06 was an M1903 converted to .30-06. This conversion was principally carried out at the Army's Springfield Armory arsenal repair shops. All models of Gatling guns were declared obsolete by the U.S. military in 1911, after 45 years of service. The original Gatling gun was a field weapon that used multiple rotating barrels turned by a hand crank, and firing loose (no links or belt) metal cartridge ammunition using a gravity feed system from a hopper. The Gatling gun's innovation lay in the use of multiple barrels to limit overheating, a rotating mechanism, and a gravity-feed reloading system, which allowed unskilled operators to achieve a relatively high rate of fire of 200 rounds per minute. 
Although the first Gatling gun was capable of firing continuously, it required a person to crank it; therefore it was not a true automatic weapon. The Maxim gun, invented and patented in 1883, was the first true fully automatic weapon, making use of the fired projectile's recoil force to reload the weapon. Nonetheless, the Gatling gun represented a huge leap in firearm technology. Before the Gatling gun, the only weapons available to military forces capable of firing many projectiles in a short period of time were mass-firing volley weapons, like the Belgian and French mitrailleuse of the 1860s and 1870s, and field cannons firing canister shot, much like an upsized shotgun. The latter was widely used during and after the Napoleonic Wars. Although the maximum rate of fire was increased by firing multiple projectiles simultaneously, these weapons still needed to be reloaded after each discharge, which for multi-barrel systems like the mitrailleuse was cumbersome and time-consuming. This negated much of the advantage of their high rate of fire per discharge, making them much less powerful on the battlefield. In comparison, the Gatling gun offered a rapid and continuous rate of fire without having to be manually reloaded by opening the breech. Early multi-barrel guns were approximately the size and weight of artillery pieces and were often perceived as a replacement for cannons firing grapeshot or canister shot. Compared with earlier weapons such as the mitrailleuse, which required manual reloading, the Gatling gun was more reliable and easier to operate and had a lower, but continuous rate of fire. The large wheels required to move these guns around required a high firing position, which increased the vulnerability of their crews. Sustained firing of black powder cartridges generated a cloud of smoke, making concealment impossible until smokeless powder became available in the late 19th century. When operators were firing Gatling guns against troops of industrialized nations, they were at risk, being vulnerable to artillery they could not reach and snipers they could not see. History The Gatling gun was designed by the American inventor Richard J. Gatling in 1861 and patented on November 4, 1862. Gatling wrote that he created it to reduce the size of armies and so reduce the number of deaths by combat and disease. United States and South America The US Army adopted Gatling guns in several calibers, including .42 caliber, .45-70, .50 caliber, 1 inch, and (M1893 and later) .30 Army, with conversions of M1900 weapons to .30-03 and .30-06. The .45-70 weapon was also mounted on some US Navy ships of the 1880s and 1890s. British manufacturer James George Accles, previously employed by Colt 1867–1886, developed a modified Gatling gun circa 1888 known as the Accles Machine Gun. Circa 1895 the American Ordnance Company acquired the rights to manufacture and distribute this weapon in the Americas. It was trialed by the US Navy in December 1895, and was said to be the only weapon to complete the trial out of five competing weapons, but was apparently not adopted by US forces. The Gatling gun was first used in warfare during the American Civil War. Twelve of the guns were purchased personally by Union commanders and used in the trenches during the Siege of Petersburg, Virginia (June 1864—April 1865). Eight other Gatling guns were fitted on gunboats. The gun was not accepted by the American Army until 1866 when a sales representative of the manufacturing company demonstrated it in combat. 
On July 17, 1863, Gatling guns were purportedly used to overawe New York anti-draft rioters. Two were brought by a Pennsylvania National Guard unit from Philadelphia to use against strikers in Pittsburgh. Gatling guns were famously not used at the Battle of the Little Bighorn, also known as "Custer's Last Stand", when Gen. George Armstrong Custer chose not to bring Gatling guns with his main force. In April 1867, a Gatling gun was purchased for the Argentine Army by minister Domingo F. Sarmiento under instructions from president Bartolomé Mitre. Captain Luis Germán Astete of the Peruvian Navy took with him dozens of Gatling guns from the United States to Peru in December 1879 during the War of the Pacific between Peru and Chile. Gatling guns were used by the Peruvian Navy and Army, especially in the Battle of Tacna (May 1880) and the Battle of San Juan (January 1881) against the invading Chilean Army. In 1888 the SS Ozama smuggled a number of Gatling guns into Haiti. In 1907 Gatling guns, largely manned by American mercenaries, were used by Nicaragua in the Battle of Namasique. Gatling guns were kept in store by coal companies and used during the Battle of Blair Mountain; on September 1 a group of miners looted one of these guns and assaulted a spot called Craddock Fork. Opposing forces fought back with a machine gun, but after three hours of heavy fire, their weapon jammed. The miners surged forward and briefly broke the defensive line, but were repulsed by another machine gun nest located further up the ridge.

Africa and Asia

The Gatling gun was used most successfully to expand European colonial empires by defeating indigenous warriors mounting massed attacks, including the Zulu, the Bedouin, and the Mahdists. Imperial Russia purchased 400 Gatling guns and used them against Turkmen cavalry and other nomads of central Asia. The British Army first deployed the Gatling gun in 1873–74 during the Anglo-Ashanti wars, and extensively during the last actions of the 1879 Anglo-Zulu War. The Royal Navy used Gatling guns during the 1882 Anglo-Egyptian War. Gatling guns were used by Egyptian forces both at sea and on land, and saw combat in Sudan and Abyssinia. Isma'il Pasha ordered 120 Colt 1865 six-barrel Gatling guns after being convinced by Shahine Pasha, who had witnessed Gatling gun trials at Shoeburyness in 1866. In 1872 a few "camel" guns were purchased; these were smaller and used a tripod instead of the carriage. During the Siege of Khartoum an Egyptian Gatling gun aided by a telescope was able to target Sudanese artillery crews from a distance of 2,000 yards. Gatling guns were imported by some states in Nigeria. They were used during the Kalabari Civil War of 1879–83; the Abbi House bought one from King Jaja of Opobo, and it may have been used in canoe warfare. The Ijesha used a Gatling gun against the Ibadan during the early 1880s. In 1882 the Bonny used a Gatling gun during an attack on New Calabar. By 1880 Siam had imported an unknown number of Gatlings. By 1885 the kingdom had a Gatling gun regiment of 600 men; those weapons were possibly used in the Haw Wars. They were also seen among Prince Bigit's escort in 1886. The Korean Empire possessed a number of Gatlings. Six had been imported in 1884; by 1891 it had a battery of fourteen guns, and in 1894 the army's two American-drilled regiments had as many as 40 Gatlings and practiced with them regularly, supposedly because the noise pleased Emperor Gojong.
Some of them were deployed to defend the approaches to the capital during the Donghak Rebellion, but there is no evidence they saw combat.

British North America

Lieutenant Arthur L. Howard of the Connecticut National Guard had an interest in the company manufacturing Gatling guns and took a personally owned Gatling gun to the District of Saskatchewan, Canada, in 1885 for use with the Canadian military against Métis and First Nations rebels during Louis Riel's North-West Rebellion.

Spanish–American War

Partly because of infighting within Army ordnance, Gatling guns were still in use by the U.S. Army during the Spanish–American War. A four-gun battery of Model 1895 ten-barrel Gatling guns in .30 Army, made by Colt's Arms Company, was formed into a separate detachment led by Lt. John "Gatling Gun" Parker. The detachment proved very effective, supporting the advance of American forces at the Battle of San Juan Hill. Three of the Gatlings with swivel mountings were used with great success against the Spanish defenders. During the American charge up San Juan and Kettle hills, the three guns fired a total of 18,000 .30 Army rounds in minutes (an average of over 700 rounds per minute per gun of continuous fire) against Spanish troop positions along the crest of both hills, causing significant casualties. Despite this successful deployment, the Gatling's weight and cumbersome artillery carriage hindered its ability to keep up with infantry forces over difficult ground, particularly in Cuba, where roads were often little more than jungle footpaths. By this time, the U.S. Marines had been issued the modern tripod-mounted M1895 Colt–Browning machine gun using the 6mm Lee Navy round, which they employed to defeat the Spanish infantry at the Battle of Cuzco Wells.

Philippine–American War

Gatling guns were used by the U.S. Army during the Philippine–American War. One such instance was the Battle of San Jacinto, fought on November 11, 1899, in San Jacinto in the Philippines, between Philippine Republican Army soldiers and American troops. The Gatling's weight and artillery carriage hindered its ability to keep up with American troops over uneven terrain, particularly in the Philippines, where outside the cities there were heavily foliaged forests and steep mountain paths.

Further development

After the Gatling gun was replaced in service by newer recoil- or gas-operated weapons, the approach of using multiple externally powered rotating barrels fell into disuse for many decades. Some examples were developed during the interwar years, but they existed only as prototypes or were rarely used. The concept resurfaced after World War II with the development of the Minigun and the M61 Vulcan. Other versions of the Gatling gun were built from the late 20th century to the present, the largest of these being the 30mm GAU-8 Avenger autocannon used on the Fairchild Republic A-10 Thunderbolt II.

Users

Argentina, Austria-Hungary, Brazil, British Empire, Bolivia, Kingdom of Bonny, Chile, Colombia, Khedivate of Egypt, France, Haiti, Ijesha Kingdom, Kingdom of Italy, Empire of Japan, Kalabari Kingdom, Korean Empire, Liberation Army of the South, Kingdom of Montenegro, Morocco, Nicaragua, Ottoman Empire, Peru, Qing Empire, Radical Civic Union, Kingdom of Romania, Russian Empire, Siam, Tokugawa Shogunate, Beylik of Tunis (Longstaff, F. V., The Book of the Machine Gun, 1917), and the United States.
Galileo project
Galileo was an American robotic space program that studied the planet Jupiter and its moons, as well as several other Solar System bodies. Named after the Italian astronomer Galileo Galilei, the Galileo spacecraft consisted of an orbiter and an atmospheric entry probe. It was delivered into Earth orbit on October 18, 1989, by the Space Shuttle Atlantis on the STS-34 mission. It arrived at Jupiter on December 7, 1995, after gravity-assist flybys of Venus and Earth, becoming the first spacecraft to orbit the planet. The spacecraft then launched the first probe to directly measure Jupiter's atmosphere. Despite suffering major antenna problems, Galileo achieved the first asteroid flyby, of 951 Gaspra, and discovered the first asteroid moon, Dactyl, around 243 Ida. In 1994, Galileo observed Comet Shoemaker–Levy 9's collision with Jupiter. Jupiter's atmospheric composition and ammonia clouds were recorded, as were the volcanism and plasma interactions on Io with Jupiter's atmosphere. The data Galileo collected supported the theory of a liquid ocean under the icy surface of Europa, and there were indications of similar liquid-saltwater layers under the surfaces of Ganymede and Callisto. Ganymede was shown to possess a magnetic field, and the spacecraft found new evidence for exospheres around Europa, Ganymede, and Callisto. Galileo also discovered that Jupiter's faint ring system consists of dust from impact events on the four small inner moons. The extent and structure of Jupiter's magnetosphere were also mapped. The primary mission concluded on December 7, 1997, but the Galileo orbiter commenced an extended mission known as the Galileo Europa Mission (GEM), which ran until December 31, 1999. By the time GEM ended, most of the spacecraft was operating well beyond its original design specifications, having absorbed three times the radiation exposure that it had been built to withstand. Many of the instruments were no longer operating at peak performance, but were still functional, so a second extension, the Galileo Millennium Mission (GMM), was authorized. On September 20, 2003, after 14 years in space and 8 years in the Jovian system, the Galileo mission was terminated by sending the orbiter into Jupiter's atmosphere at a speed of over to eliminate the possibility of contaminating the moons with bacteria.

Background

Jupiter is the largest planet in the Solar System, with more than twice the mass of all the other planets combined. Consideration of sending a probe to Jupiter began as early as 1959, when the National Aeronautics and Space Administration (NASA) Jet Propulsion Laboratory (JPL) developed four mission concepts: deep space flights would fly through interplanetary space; planetary flyby missions would fly past planets close enough to collect scientific data and could visit multiple planets on a single mission; orbiter missions would place a spacecraft in orbit around a planet for prolonged and detailed study; and atmospheric entry and lander missions would explore a planet's atmosphere and surface. Two missions to Jupiter, Pioneer 10 and Pioneer 11, were approved in 1969, with NASA's Ames Research Center given responsibility for planning the missions. Pioneer 10 was launched in March 1972 and passed within of Jupiter in December 1973. It was followed by Pioneer 11, which was launched in April 1973, and passed within of Jupiter in December 1974, before heading on to an encounter with Saturn.
They were followed by the more advanced Voyager 1 and Voyager 2 spacecraft, which were launched on 5 September and 20 August 1977 respectively, and reached Jupiter in March and July 1979. Planning Initiation Following the approval of the Voyager missions, NASA's Scientific Advisory Group for Outer Solar System Missions considered the requirements for Jupiter orbiters and atmospheric probes. It noted that the technology to build a heat shield for an atmospheric probe did not yet exist, and indeed facilities to test one under the conditions found on Jupiter would not be available until 1980. There was also concern about the effects of radiation on spacecraft components, which would be better understood after Pioneer 10 and Pioneer 11 had conducted their flybys. Pioneer 10s flyby in December 1973 indicated that the effects were not as severe as had been feared. NASA management designated JPL as the lead center for the Jupiter Orbiter Probe (JOP) Project. John R. Casani, who had headed the Mariner and Voyager projects, became the first project manager. The JOP would be the fifth spacecraft to visit Jupiter, but the first to orbit it, and the probe the first to enter its atmosphere. Ames and JPL decided to use a Mariner spacecraft for the Jupiter orbiter like the ones used for Voyager rather than a Pioneer spacecraft. Pioneer was stabilized by spinning the spacecraft at 60 rpm, which gave a 360-degree view of the surroundings, and did not require an attitude control system. By contrast, Mariner had an attitude control system with three gyroscopes and two sets of six nitrogen jet thrusters. Attitude was determined with reference to the Sun and Canopus, which were monitored with two primary and four secondary star tracker sensors. There was also an inertial reference unit and an accelerometer. The attitude control system allowed the spacecraft to take high-resolution images, but the functionality came at the cost of increased weight: a Mariner weighed compared to just for a Pioneer. The increase in weight had implications. The Voyager spacecraft had been launched by Titan IIIE rockets with a Centaur upper stage, but Titan was retired afterwards. In the late 1970s, NASA was focused on the development of the reusable Space Shuttle, which was expected to make expendable rockets obsolete. In late 1975, NASA decreed that all future planetary missions would be launched by the Space Shuttle. The JOP would be the first to do so. The Space Shuttle was supposed to have the services of a space tug to launch payloads requiring something more than a low Earth orbit, but this was never approved. The United States Air Force (USAF) instead developed the solid-fueled Interim Upper Stage (IUS), later renamed the Inertial Upper Stage (with the same acronym), for the purpose. The IUS was constructed in a modular fashion, with two stages, a large one with of propellant, and a smaller one with . This was sufficient for most satellites. It could also be configured with two large stages to launch multiple satellites. A configuration with three stages, two large and one small, would be enough for a planetary mission, so NASA contracted with Boeing for the development of a three-stage IUS. A two-stage IUS was not powerful enough to launch a payload to Jupiter without resorting to using a series of gravity-assist maneuvers around planets to garner additional speed. 
Most engineers regarded this solution as inelegant and planetary scientists at JPL disliked it because it meant that the mission would take months or even years longer to reach Jupiter. Longer travel times meant that the spacecraft's components would age and possibly fail, and the onboard power supply and propellant would be depleted. Some of the gravity assist options also involved flying closer to the Sun, which would induce thermal stresses that also might cause failures. It was estimated that the JOP would cost $634 million (equivalent to $ billion in ), and it had to compete for fiscal year 1978 funding with the Space Shuttle and the Hubble Space Telescope. A successful lobbying campaign secured funding for both JOP and Hubble over the objections of Senator William Proxmire, the chairman of the Independent Agencies Appropriations Subcommittee. The United States Congress approved funding for the Jupiter Orbiter Probe on July 19, 1977, and JOP officially commenced on October 1, 1977, the start of the fiscal year. Project manager Casani solicited suggestions for a more inspirational name for the project from people associated with it. The most votes went to "Galileo", after Galileo Galilei, the first person to view Jupiter through a telescope, and the discoverer of what are now known as the Galilean moons in 1610. It was noted at the time that the name was also that of a spacecraft in the Star Trek television show. In February 1978, Casani officially announced the choice of the name "Galileo". Preparation To enhance reliability and reduce costs, the project engineers decided to switch from a pressurized atmospheric probe to a vented one, so the pressure inside the probe would be the same as that outside, thus extending its lifetime in Jupiter's atmosphere, but this added to its weight. Another was added in structural changes to improve reliability. This required additional fuel in the IUS, but the three-stage IUS was itself overweight with respect to its design specifications, by about . Lifting Galileo and the three-stage IUS required a special lightweight version of the Space Shuttle external tank, the Space Shuttle orbiter stripped of all non-essential equipment, and the Space Shuttle main engines (SSME) running at full power level—109 percent of their rated power level. Running at this power level necessitated the development of a more elaborate engine cooling system. Concerns were raised over whether the engines could be run at 109 percent by the launch date, so a gravity-assist maneuver using Mars was substituted for a direct flight. Plans called for the to launch Galileo on the STS-23 mission, tentatively scheduled for sometime between January 2 and 12, 1982, this being the launch window when Earth, Mars and Jupiter were aligned to permit Mars to be used for the gravity-assist maneuver. By 1980, delays in the Space Shuttle program pushed the launch date for Galileo back to 1984. While a Mars slingshot was still possible in 1984, it would no longer be sufficient. NASA decided to launch Galileo on two separate missions, launching the orbiter in February 1984 with the probe following a month later. The orbiter would be in orbit around Jupiter when the probe arrived, allowing the orbiter to perform its role as a relay. 
This configuration required a second Space Shuttle mission and a second carrier spacecraft to be built for the probe to take it to Jupiter, and was estimated to cost an additional $50 million (equivalent to $ million in ), but NASA hoped to be able to recoup some of this through competitive bidding. The problem was that while the atmospheric probe was light enough to launch with the two-stage IUS, the Jupiter orbiter was too heavy to do so, even with a gravity assist from Mars, so the three-stage IUS was still required. By late 1980, the price tag for the IUS had risen to $506 million (equivalent to $ billion in ). The USAF could absorb this cost overrun on the development of the two-stage IUS (and indeed anticipated that it might cost far more), but NASA was faced with a quote of $179 million (equivalent to $ million in ) for the development of the three-stage version, which was $100 million (equivalent to $ million in ) more than it had budgeted for. At a press conference on January 15, 1981, Robert A. Frosch, the NASA Administrator, announced that NASA was withdrawing support for the three-stage IUS, and going with a Centaur G Prime upper stage because "no other alternative upper stage is available on a reasonable schedule or with comparable costs." Centaur provided many advantages over the IUS. The main one was that it was far more powerful. The probe and orbiter could be recombined, and the probe could be delivered directly to Jupiter in two years' flight time. The second was that, despite this, it was gentler than the IUS, because it had lower thrust. This reduced the chance of damage to the payload. Thirdly, unlike solid-fuel rockets which burned to completion once ignited, a Centaur could be switched off and on again. This gave it flexibility, which increased the chances of a successful mission, and permitted options like asteroid flybys. Centaur was proven and reliable, whereas the IUS had not yet flown. The only concern was about safety; solid-fuel rockets were considered safer than liquid-fuel ones, especially ones containing liquid hydrogen. NASA engineers estimated that additional safety features might take up to five years to develop and cost up to $100 million (equivalent to $ million in ). In February 1981, JPL learned that the Office of Management and Budget (OMB) was planning major cuts to NASA's budget, and was considering cancelling Galileo. The USAF intervened to save Galileo from cancellation. JPL had considerable experience with autonomous spacecraft that could make their own decisions. This was a necessity for deep space probes, since a signal from Earth takes from 35 to 52 minutes to reach Jupiter, depending on the relative position of the planets in their orbits. The USAF was interested in providing this capability for its satellites, so that they would be able to determine their attitude using onboard systems rather than relying on ground stations, which were not "hardened" against nuclear weapons, and could take independent evasive action against anti-satellite weapons. It was also interested in the manner in which JPL was designing Galileo to withstand the intense radiation of the magnetosphere of Jupiter, as this could be used to harden satellites against the electromagnetic pulse of nuclear explosions. On February 6, 1981 Strom Thurmond, the President pro tem of the Senate, wrote directly to David Stockman, the director of the OMB, arguing that Galileo was vital to the nation's defense. 
In December 1984, Casani proposed adding a flyby of asteroid 29 Amphitrite to the Galileo mission. In plotting a course to Jupiter, the engineers wanted to avoid asteroids. Little was known about them at the time, and it was suspected that they could be surrounded by dust particles. Flying through a dust cloud could damage the spacecraft's optics and possibly other parts of the spacecraft as well. To be safe, JPL wanted to avoid asteroids by at least . Most of the asteroids in the vicinity of the flight path like 1219 Britta and 1972 Yi Xing were only a few kilometers in diameter and promised little scientific value when observed from a safe distance, but 29 Amphitrite was one of the largest, and a flyby at even could have great value. The flyby would delay the spacecraft's arrival in Jupiter orbit from August 29 to December 10, 1988, and the expenditure of propellant would reduce the number of orbits of Jupiter from eleven to ten. This was expected to add $20 to $25 million (equivalent to $ to $ million in ) to the cost of the Galileo project. The 29 Amphitrite flyby was approved by NASA Administrator James M. Beggs on December 6, 1984. During testing, contamination was discovered in the system of metal slip rings and brushes used to transmit electrical signals around the spacecraft, and they were returned to be refabricated. The problem was traced back to a chlorofluorocarbon used to clean parts after soldering. It had been absorbed, and was then released in a vacuum environment. It mixed with debris generated as the brushes wore down, and caused intermittent problems with electrical signal transmission. Problems were also detected in the performance of memory devices in an electromagnetic radiation environment. The components were replaced, but then a read disturb problem arose, in which reads from one memory location disturbed the contents of adjacent locations. This was found to have been caused by the changes made to make the components less sensitive to electromagnetic radiation. Each component had to be removed, retested, and replaced. All of the spacecraft components and spare parts received a minimum of 2,000 hours of testing. The spacecraft was expected to last for at least five years—long enough to reach Jupiter and perform its mission. On December 19, 1985, it departed JPL in Pasadena, California, on the first leg of its journey, a road trip to the Kennedy Space Center in Florida. The Galileo mission was scheduled for STS-61-G on May 20, 1986, using . Spacecraft JPL built the Galileo spacecraft and managed the Galileo program for NASA, but West Germany's Messerschmitt-Bölkow-Blohm supplied the propulsion module, and Ames managed the atmospheric probe, which was built by the Hughes Aircraft Company. At launch, the orbiter and probe together had a mass of and stood tall. There were twelve experiments on the orbiter and seven on the atmospheric probe. The orbiter was powered by a pair of general-purpose heat source radioisotope thermoelectric generators (GPHS-RTGs) fueled by plutonium-238 that generated 570 watts at launch. The atmospheric probe had a lithium–sulfur battery rated at 730 watt-hours. Probe instruments included sensors for measuring atmospheric temperature and pressure. There was a mass spectrometer and a helium-abundance detector to study atmospheric composition, and a whistler detector for measurements of lightning activity and Jupiter's radiation belt. 
There were magnetometer sensors, a plasma-wave detector, a high-energy particle detector, a cosmic and Jovian dust detector, and a heavy ion counter. There was a near-infrared mapping spectrometer for multispectral images for atmospheric and moon surface chemical analysis, and an ultraviolet spectrometer to study gases. Reconsideration On January 28, 1986, lifted off on the STS-51-L mission. A failure of the solid rocket booster 73 seconds into flight tore the spacecraft apart, resulting in the deaths of all seven crew members. The Space Shuttle Challenger disaster was America's worst space disaster up to that time. The immediate impact on the Galileo project was that the May launch date could not be met because the Space Shuttles were grounded while the cause of the disaster was investigated. When they did fly again, Galileo would have to compete with high-priority Department of Defense launches, the tracking and data relay satellite system, and the Hubble Space Telescope. By April 1986, it was expected that the Space Shuttles would not fly again before July 1987 at the earliest, and Galileo could not be launched before December 1987. The Rogers Commission into the Challenger disaster handed down its report on June 6, 1986. It was critical of NASA's safety protocols and risk management. In particular, it noted the hazards of a Centaur-G stage. On June 19, 1986, NASA Administrator James C. Fletcher canceled the Shuttle-Centaur project. This was only partly due to the NASA management's increased aversion to risk in the wake of the Challenger disaster; NASA management also considered the money and manpower required to get the Space Shuttle flying again, and decided that there were insufficient resources to resolve lingering issues with Shuttle-Centaur as well. The changes to the Space Shuttle proved more extensive than anticipated, and in April 1987, JPL was informed that Galileo could not be launched before October 1989. The Galileo spacecraft was shipped back to JPL. Without Centaur, it looked like there was no means of getting Galileo to Jupiter. For a time, Los Angeles Times science reporter Usha Lee McFarling noted, "it looked like Galileos only trip would be to the Smithsonian Institution." The cost of keeping it ready to fly in space was reckoned at $40 to $50 million per year (equivalent to $ to $ million in ), and the estimated cost of the whole project had blown out to $1.4 billion (equivalent to $ billion in ). At JPL, the Galileo Mission Design Manager and Navigation Team Chief, Robert Mitchell, assembled a team that consisted of Dennis Byrnes, Louis D'Amario, Roger Diehl and himself, to see if they could find a trajectory that would get Galileo to Jupiter using only a two-stage IUS. Roger Diehl came up with the idea of using a series of gravity assists to provide the additional velocity required to reach Jupiter. This would require Galileo to fly past Venus, and then past Earth twice. This was referred to as the Venus-Earth-Earth Gravity Assist (VEEGA) trajectory. The reason no one had considered the VEEGA trajectory before was that the second encounter with Earth would not give the spacecraft any extra energy. Diehl realised that this was not necessary; the second encounter would merely change its direction to put it on a course for Jupiter. 
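Diehl's point can be illustrated with the standard two-dimensional picture of a gravity assist: in the planet's frame of reference the flyby only rotates the spacecraft's velocity vector (its speed relative to the planet is unchanged), but once the planet's own orbital velocity is added back, the encounter can either raise the heliocentric speed or mainly redirect it, as with the second Earth flyby. The velocities and turn angles below are illustrative assumptions, not mission values.

import math

def speed_after_flyby(v_sc, v_planet, turn_deg):
    """2-D gravity-assist sketch: in the planet's frame the flyby only rotates
    the spacecraft's velocity by the turn angle (planet-relative speed is
    conserved); adding the planet's velocity back gives the new heliocentric speed."""
    rel = (v_sc[0] - v_planet[0], v_sc[1] - v_planet[1])
    th = math.radians(turn_deg)
    rot = (rel[0] * math.cos(th) - rel[1] * math.sin(th),
           rel[0] * math.sin(th) + rel[1] * math.cos(th))
    out = (v_planet[0] + rot[0], v_planet[1] + rot[1])
    return math.hypot(out[0], out[1])

# Illustrative numbers only (km/s), not mission values: Earth moving at
# ~29.8 km/s along +x, spacecraft crossing its path at ~25.5 km/s.
v_sc, v_earth = (25.0, 5.0), (29.8, 0.0)
print(speed_after_flyby(v_sc, v_earth, -60.0))  # passing one side: speed rises to ~32 km/s
print(speed_after_flyby(v_sc, v_earth, +60.0))  # the other side: ~23 km/s, mostly a change of direction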
In addition to increasing the flight time, the VEEGA trajectory had another drawback from the point of view of NASA Deep Space Network (DSN): Galileo would arrive at Jupiter when it was at the maximum range from Earth, and maximum range meant minimum signal strength. It would have a declination of 23 degrees south instead of 18 degrees north, so the tracking station would be the Canberra Deep Space Communication Complex in Australia, with its two 34-meter and one 70-meter antennae. A northerly declination could have been supported by two sites, at Goldstone and Madrid. The Canberra antennae were supplemented by the 64-meter antenna at the Parkes Observatory. Initially it was thought that the VEEGA trajectory demanded a November launch, but D'Amario and Byrnes calculated that a mid-course correction between Venus and Earth would permit an October launch as well. Taking such a roundabout route meant that Galileo would require sixty months to reach Jupiter instead of just thirty, but it would get there. Consideration was given to using the USAF's Titan IV launch system with its Centaur G Prime upper stage. This was retained as a backup for a time, but in November 1988 the USAF informed NASA that it could not provide a Titan IV in time for the May 1991 launch opportunity, owing to the backlog of high priority Department of Defense missions. However, the USAF supplied IUS-19, which had originally been earmarked for a Department of Defense mission, for use by the Galileo mission. Nuclear concerns As the launch date of Galileo neared, anti-nuclear groups, concerned over what they perceived as an unacceptable risk to the public's safety from the plutonium in Galileos GPHS-RTG modules, sought a court injunction prohibiting Galileo launch. RTGs were necessary for deep space probes because they had to fly distances from the Sun that made the use of solar energy impractical. They had been used for years in planetary exploration without mishap: the Department of Defense's Lincoln Experimental Satellites 8/9 had 7 percent more plutonium on board than Galileo, and the two Voyager spacecraft each carried 80 percent of Galileo load of plutonium. By 1989, plutonium had been used in 22 spacecraft. Activists remembered the crash of the Soviet Union's nuclear-powered Kosmos 954 satellite in Canada in 1978, and the Challenger disaster, while it did not involve nuclear fuel, raised public awareness about spacecraft failures. No RTGs had ever done a non-orbital swing past the Earth at close range and high speed, as Galileo VEEGA trajectory required it to do. This created the possibility of a mission failure in which Galileo struck Earth's atmosphere and dispersed plutonium. Planetary scientist Carl Sagan, a strong supporter of the Galileo mission, wrote that "there is nothing absurd about either side of this argument." Before the Challenger disaster, JPL had conducted shock tests on the RTGs that indicated that they could withstand a pressure of without a failure, which would have been sufficient to withstand an explosion on the launch pad. The possibility of adding additional shielding was considered but rejected, mainly because it would add an unacceptable amount of extra weight. After the Challenger disaster, NASA commissioned a study on the possible effects if such an event occurred with Galileo on board. Angus McRonald, a JPL engineer, concluded that what would happen would depend on the altitude at which the Space Shuttle broke up. 
If the Galileo/IUS combination fell free from the orbiter at , the RTGs would fall to Earth without melting, and drop into the Atlantic Ocean about from the Florida coast. On the other hand, if the orbiter broke up at an altitude of it would be traveling at and the RTG cases and GPHS modules would melt before falling into the Atlantic off the Florida coast. NASA concluded that the chance of a disaster was 1 in 2,500, although anti-nuclear groups thought it might be as high as 1 in 430. NASA assessed the risk to an individual at 1 in 100 million, about two orders of magnitude less than the danger of being killed by lightning. The prospect of an inadvertent re-entry into the atmosphere during the VEEGA maneuvers was reckoned at less than 1 in 2 million, but an accident might have released a maximum of . This could result in up to 9 fatalities from cancer per 10 million exposed people.

Launch

STS-34 was the mission designated to launch Galileo, scheduled for October 12, 1989, in the Space Shuttle Atlantis. The spacecraft was delivered to the Kennedy Space Center by a high-speed truck convoy that departed JPL in the middle of the night. There were fears that the trucks might be hijacked by anti-nuclear activists or by terrorists seeking the plutonium, so the route was kept secret from the drivers beforehand, and they drove through the night and the following day, stopping only for food and fuel. Last-minute efforts by three environmental groups (the Christic Institute, the Florida Coalition for Peace and Justice and the Foundation on Economic Trends) to halt the launch were rejected by the District of Columbia Circuit on technical grounds rather than the merits of the case, but in a concurring opinion, Chief Judge Patricia Wald wrote that while the legal challenge was not frivolous, there was no evidence for the plaintiffs' claim that NASA had acted improperly in compiling the mission's environmental assessment. On October 16, eight protesters were arrested for trespassing at the Kennedy Space Center; three were jailed and the remaining five released. Federal judge Oliver Gasch ruled on October 21 that the launch was in the public interest, since canceling it would cost the public $164 million and forgo the increased knowledge of the Solar System that the mission promised. The launch was twice delayed: first by a faulty main engine controller that forced a postponement to October 17, and then by inclement weather, which necessitated a postponement to the following day, but this was not a concern since the launch window extended until November 21. Atlantis finally lifted off at 16:53:40 UTC on October 18, and went into a orbit. Galileo was successfully deployed at 00:15 UTC on October 19. Following the IUS burn, the Galileo spacecraft adopted its configuration for solo flight, and separated from the IUS at 01:06:53 UTC on October 19. The launch was perfect, and Galileo was soon headed towards Venus at over . Atlantis returned to Earth safely on October 23.

Venus encounter

The encounter with Venus on February 9 was in view of the DSN's Canberra and Madrid Deep Space Communications Complexes. Galileo's closest approach to Venus came at 05:58:48 UTC on February 10, 1990, at a range of . Due to the Doppler effect, the spacecraft's velocity relative to Earth could be computed by measuring the change in carrier frequency of the spacecraft's transmission compared to the nominal frequency.
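The velocity measurement rests on the ordinary (non-relativistic) Doppler relation: the fractional shift of the received carrier frequency equals the line-of-sight velocity divided by the speed of light. The sketch below uses a made-up frequency shift purely for illustration; only the S-band carrier region is taken from the text.

C = 299_792_458.0  # speed of light, m/s

def line_of_sight_velocity(f_nominal_hz: float, f_received_hz: float) -> float:
    """Non-relativistic Doppler: v = c * (f_received - f_nominal) / f_nominal.
    Positive means the spacecraft is approaching the receiver."""
    return C * (f_received_hz - f_nominal_hz) / f_nominal_hz

# Illustrative values only: an S-band carrier near 2.3 GHz shifted by +38 Hz
# corresponds to roughly 5 m/s of approach speed along the line of sight.
print(line_of_sight_velocity(2.3e9, 2.3e9 + 38.0))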
Doppler data collected by the DSN allowed JPL to verify that the gravity-assist maneuver had been successful, and the spacecraft had obtained the expected increase in speed. Unfortunately, three hours into the flyby, the tracking station at Goldstone had to be shut down due to high winds, and Doppler data was lost. Because Venus was much closer to the Sun than the spacecraft had been designed to operate, great care was taken to avoid thermal damage. In particular, the X-band high gain antenna (HGA) was not deployed, but was kept folded up like an umbrella and pointed away from the Sun to keep it shaded and cool. This meant that the two small S-band low-gain antennae (LGAs) had to be used instead. They had a maximum bandwidth of 1,200 bits per second (bit/s) compared to the 134,000 bit/s expected from the HGA. As the spacecraft moved further from Earth, reception necessitated the use of the DSN's 70-meter dishes, to the detriment of other users, who had lower priority than Galileo. Even so, the downlink telemetry rate fell to 40 bit/s within a few days of the Venus flyby, and by March it was down to just 10 bit/s. Venus had been the focus of many automated flybys, probes, balloons and landers, most recently the 1989 Magellan spacecraft, and Galileo had not been designed with Venus in mind. Nonetheless, there were useful observations that it could make, as it carried some instruments that had never flown on spacecraft to Venus, such as the near-infrared mapping spectrometer (NIMS). Telescopic observations of Venus had revealed that there were certain parts of the infrared spectrum that the greenhouse gases in the Venusian atmosphere did not block, making them transparent on these wavelengths. This permitted the NIMS to both view the clouds and obtain maps of the equatorial- and mid-latitudes of the night side of Venus with three to six times the resolution of Earth-based telescopes. The ultraviolet spectrometer (UVS) was also deployed to observe the Venusian clouds and their motions. Another set of observations was conducted using Galileo's energetic-particles detector (EPD) when Galileo moved through the bow shock caused by Venus's interaction with the solar wind. Earth's magnetic field causes the bow shock to occur at around from its center, but Venus's weak magnetic field causes it to occur nearly on the surface, so the solar wind interacts with the atmosphere. A search for lightning on Venus was conducted using the plasma-wave detector, which noted nine bursts likely to have been caused by lightning, but efforts to capture an image of lightning with the solid-state imaging system (SSI) were unsuccessful. Earth encounters Flybys Galileo made two course corrections on April 9 to 12 and May 11 to 12, 1990, to alter its velocity by . The spacecraft flew by Earth twice; the first time at a range of at 20:34:34 UTC on December 8, 1990. This was higher than predicted, and the time of the closest approach was within a second of the prediction. It was the first time that a deep space probe had returned to Earth from interplanetary space. A second flyby of Earth was at at 15:09:25 UTC on December 8, 1992. This time the spacecraft passed within a kilometer of its aiming point over the South Atlantic. This was so accurate that a scheduled course correction was cancelled, thereby saving of propellant. Earth's bow shock and the solar wind The Earth encounters provided an opportunity for a series of experiments. A study of Earth's bow shock was conducted as Galileo passed by Earth's day side. 
The solar wind travels at and is deflected by Earth's magnetic field, creating a magnetic tail on Earth's dark side over a thousand times the radius of the planet. Observations were made by Galileo when it passed through the magnetic tail on Earth's dark side at a distance of from the planet. The magnetosphere was quite active at the time, and Galileo detected magnetic storms and whistlers caused by lightning strikes. The NIMS was employed to look for mesospheric clouds, which were thought to be caused by methane released by industrial processes. The water vapor in the clouds breaks down the ozone in the upper atmosphere. Normally the clouds are only seen in September or October, but Galileo was able to detect them in December, an indication of possible damage to Earth's ozone layer.

Remote detection of life on Earth

Carl Sagan, pondering the question of whether life on Earth could be easily detected from space, devised a set of experiments in the late 1980s using Galileo's remote sensing instruments during the mission's first Earth flyby in December 1990. After data acquisition and processing, Sagan published a paper in Nature in 1993 detailing the results of the experiment. Galileo had indeed found what are now referred to as the "Sagan criteria for life". These included strong absorption of light at the red end of the visible spectrum (especially over continents) by chlorophyll in photosynthesizing plants; absorption bands of molecular oxygen as a result of plant activity; infrared bands caused by the approximately 1 micromole per mole of methane (a gas which must be replenished by volcanic or biological activity) in the atmosphere; and modulated narrowband radio wave transmissions uncharacteristic of any known natural source. Galileo's experiments were thus the first-ever scientific controls in the newborn science of astrobiological remote sensing.

Lunar observations

En route to Galileo's second gravity-assist flyby of Earth, the spacecraft flew over the lunar north pole on December 8, 1992, at an altitude of . The north pole had been photographed before, by Mariner 10 in 1973, but Galileo's cameras, with their per pixel imagery, provided new information about a region that still held some scientific mysteries. The infrared spectrometer surveyed the surface minerals and revealed that the region was more mineralogically diverse than expected. There was evidence that the Moon had been volcanically active earlier than originally thought, and the spectrometer clearly distinguished different lava flows on the Mare Serenitatis. Areas where titanium-rich material had been blasted from vents, like the one sampled by Apollo 17, showed up clearly.

Galileo Optical Experiment

During the second Earth flyby, another experiment was performed. Optical communications in space were assessed by detecting light pulses from powerful lasers with Galileo's CCD. The experiment, dubbed the Galileo Optical Experiment or GOPEX, used two separate sites to beam laser pulses to the spacecraft, one at Table Mountain Observatory in California and the other at the Starfire Optical Range in New Mexico. The Table Mountain site used an Nd:YAG laser operating at a frequency-doubled wavelength of 532 nm, with a repetition rate of 15 to 30 hertz and a pulse power full width at half maximum (FWHM) in the tens of megawatts range, coupled to a Cassegrain reflector telescope for transmission to Galileo. The Starfire range site used a similar setup with a larger transmitting telescope.
Long-exposure (~0.1 to 0.8 s) images taken through Galileo's 560 nm centered green filter clearly showed the laser pulses, even at distances of up to . Adverse weather conditions, restrictions placed on laser transmissions by the U.S. Space Defense Operations Center (SPADOC), and a pointing error caused by the scan platform on the spacecraft not being able to change direction and speed as quickly as expected (which prevented laser detection on all frames with less than 400 ms exposure times) reduced the number of successful detections of the laser transmission to 48 of the 159 frames taken. Nonetheless, the experiment was considered a resounding success, and the data acquired were used to design laser downlinks that could send large volumes of data very quickly from spacecraft to Earth. The scheme was studied in 2004 for a data link to a future Mars-orbiting spacecraft. On December 5, 2023, NASA's Deep Space Optical Communications experiment on the Psyche spacecraft used infrared lasers for two-way communication between Earth and the spacecraft.

High-gain antenna problem

Once Galileo headed beyond Earth, it was no longer risky to employ the high-gain antenna, so on April 11, 1991, Galileo was ordered to unfurl it. This was done using two small dual drive actuator (DDA) motors to drive a worm gear, and was expected to take 165 seconds, or 330 seconds if one actuator failed. The antenna had 18 graphite-epoxy ribs; when the driver motor started and put pressure on the ribs, they were supposed to pop out of the cup their tips were held in, and the antenna would unfold like an umbrella. When it reached the fully deployed configuration, redundant microswitches would shut down the motors. Otherwise they would run for eight minutes before being automatically shut down to prevent them from overheating. Through telemetry from Galileo, investigators determined that the electric motors had stalled at 56 seconds. The spacecraft's spin rate had decreased due to an increase in its moment of inertia, and its wobble had increased, indicative of an asymmetric unfolding. Only 15 ribs had popped out, leaving the antenna looking like a lop-sided, half-open umbrella. It was not possible to re-fold the antenna and try the opening sequence again; although the motors were capable of running in reverse, the antenna was not designed for this, and human assistance had been required when it was done on Earth to ensure that the wire mesh did not snag. The first thing the Galileo team tried was to rotate the spacecraft away from the Sun and back again, on the assumption that the problem was friction holding the pins in their sockets; if so, heating and cooling the ribs might cause them to pop out. This was done seven times, but with no result. They then tried swinging LGA-2 (which faced in the opposite direction to the HGA and LGA-1) 145 degrees to a hard stop, thereby shaking the spacecraft. This was done six times with no effect. Finally, they tried shaking the antenna by pulsing the DDA motors at 1.25 and 1.875 hertz. This increased the torque by up to 40 percent. The motors were pulsed 13,000 times over a three-week period in December 1992 and January 1993, but only managed to move the ballscrew one and a half revolutions beyond the stall point. Investigators concluded that during the 4.5 years that Galileo spent in storage after the Challenger disaster, the lubricants between the tips of the ribs and the cup had eroded.
They were then worn down further by vibration during the three cross-country journeys by truck between California and Florida. The failed ribs were those closest to the flat-bed trailers carrying Galileo on these trips. The use of land transport was partly to save costs (air transport would have cost an additional $65,000 or so per trip), but also to reduce the amount of handling required in loading and unloading the aircraft, which was considered a major risk of damage. The spacecraft was also subjected to severe vibration in a vacuum environment by the IUS. Experiments on Earth with the test HGA showed that having a set of stuck ribs all on one side reduced the DDA torque produced by up to 40 percent. The antenna lubricants were applied only once, nearly a decade before launch. Furthermore, the HGA was not subjected to the usual rigorous testing, because there was no backup unit that could be installed in Galileo in case of damage. The flight-ready HGA was never given a thermal evaluation test, and was unfurled only a half dozen or so times before the mission. Testing might not have revealed the problem in any case; the Lewis Research Center was never able to replicate the problem on Earth, and it was assumed to be the combination of loss of lubricant during transportation, vibration during launch by the IUS, and a prolonged period of time in the vacuum of space where bare metal in contact could undergo cold welding. Whatever the cause, the HGA was rendered useless. The two LGAs were still capable of transmitting information back to Earth, but because an LGA broadcast its signal over a cone with a 120-degree half-angle (allowing it to communicate even when not pointed at Earth), its bandwidth was far lower than the HGA's would have been; the HGA transmitted over a half-angle of one-sixth of a degree. The HGA was to have transmitted at 134 kilobits per second, whereas LGA-1 was only intended to transmit at about 8 to 16 bits per second. LGA-1 transmitted with a power of about 15 to 20 watts, which, by the time it reached Earth and had been collected by one of the large-aperture 70-meter DSN antennas, amounted to a total received power of about 10⁻²⁰ watts. The change to the mission plan required a series of software changes to be uploaded. Image data collected was buffered in Galileo's Command and Data Subsystem (CDS) memory. This represented 192 kilobytes of the 384-kilobyte CDS storage, and had been added late, out of concern that the 6504 complementary metal–oxide–semiconductor (CMOS) memory devices might not be reliable during a mission. As it happened, they gave no trouble, but the CDS memory could store up to 31 minutes of data from the Radio Relay Hardware (RRH) channels. To conserve bandwidth, data-compression software was implemented. Image compression used an integer approximation of the discrete cosine transform, while other data were compressed with a variant of the Lempel–Ziv–Welch algorithm (a minimal sketch of which appears below). Using compression, the arraying of several Deep Space Network antennas, and sensitivity upgrades to the receivers used to listen to Galileo's signal, data throughput was increased to a maximum of 160 bits per second; further data compression could raise the effective bandwidth to 1,000 bits per second. The data collected on Jupiter and its moons were stored in the spacecraft's onboard tape recorder, and transmitted back to Earth during the long apoapsis portion of the probe's orbit using the low-gain antenna.
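Lempel–Ziv–Welch compressors build a dictionary of previously seen byte strings on the fly and transmit dictionary indices instead of raw bytes, which works well on repetitive telemetry. The following is a textbook sketch of the core encoding loop, not Galileo's actual flight software.

def lzw_compress(data: bytes) -> list[int]:
    """Textbook LZW: emit the dictionary index of the longest known prefix,
    then extend the dictionary with that prefix plus the next byte."""
    dictionary = {bytes([i]): i for i in range(256)}  # all single bytes pre-seeded
    next_code = 256
    w = b""
    codes = []
    for b in data:
        wb = w + bytes([b])
        if wb in dictionary:
            w = wb                       # keep growing the current match
        else:
            codes.append(dictionary[w])  # emit the longest match found so far
            dictionary[wb] = next_code   # learn the new string
            next_code += 1
            w = bytes([b])
    if w:
        codes.append(dictionary[w])
    return codes

# Repetitive data compresses well: 30 input bytes shrink to far fewer codes.
print(len(lzw_compress(b"ABABABABABABABABABABABABABABAB")))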
At the same time, measurements were made of Jupiter's magnetosphere and transmitted back to Earth. The reduction in available bandwidth reduced the total amount of data transmitted throughout the mission, but William J. O'Neil, Galileo project manager from 1992 to 1997, expressed confidence that 70 percent of Galileo science goals could still be met. The decision to use magnetic tape for storage was a conservative one, taken in the late 1970s when the use of tape was common. Conservatism was not restricted to engineers; a 1980 suggestion that the results of Galileo could be distributed electronically instead of on paper was regarded as ridiculous by geologists, on the grounds that storage would be prohibitively expensive; some of them thought that taking measurements on a computer involved putting a wooden ruler up to the screen. Asteroid encounters 951 Gaspra Two months after entering the asteroid belt, Galileo performed the first asteroid encounter by a spacecraft. Galileo passed 951 Gaspra, an S-type asteroid, at a distance of at 22:37 UTC on October 29, 1991, at a relative speed of about . Fifty-seven images of Gaspra were taken with the SSI, covering about 80 percent of the asteroid. Without the HGA, the bit rate was only about 40 bit/s, so an image took up to 60 hours to transmit back to Earth. The Galileo project was able to secure 80 hours of Canberra's 70-meter dish time between 7 and 14 November 1991, but most of images taken, including low-resolution images of more of the surface, were not transmitted to Earth until November 1992. The imagery revealed a cratered and irregular body, measuring about . Its shape was not remarkable for an asteroid of its size. Measurements were taken using the NIMS to indicate the asteroid's composition and physical properties. While Gaspra has plenty of small craters—over 600 of them ranging in size from —it lacks large ones, hinting at a relatively recent origin, although it is possible that some of the depressions were eroded craters. Several relatively flat planar areas were found, suggesting that Gaspra was formed from another body by a collision. Measurements of the solar wind in the vicinity of the asteroid showed it changing direction a few hundred kilometers from Gaspra, which hinted that Gaspra might have a magnetic field, but this was not certain. 243 Ida and Dactyl Following the second Earth encounter, Galileo performed close observations of another asteroid, 243 Ida. A slight trajectory correction was made to enable this on August 26, 1993. With four hours to go before the encounter with Ida, Galileo spontaneously abandoned the observation configuration and resumed its cruise configuration. Engineers were able to correct the problem and have the instruments ready by 16:52:04 UTC on August 28, 1993, when Galileo flew past Ida at a range of . High-resolution images were taken to create a color mosaic of one side of the asteroid, with the highest resolution image taken at a range of . Measurements were taken using SSI and NIMS. Transmission was still limited to the 40 bit/s data rate available during the Gaspra flyby. At that rate, it took thirty hours to send each of the five frames. In September, the line of sight between Galileo and Earth was close to the Sun, so there was only time to send one mosaic before it was blocked by the Sun on September 29, 1993; the rest of the mosaics were transmitted in February and March, after Earth had come around the Sun. 
Galileo's tape recorder was used to store the images, but tape space was also required for the primary Jupiter mission. A technique was developed whereby only image fragments of two or three lines out of every 330 were initially sent. A determination could then be made as to whether the image was of 243 Ida or of empty space. Ultimately, only about 16 percent of the SSI data recorded could be sent back to Earth. When astronomer Ann Harch examined the images on February 17, 1994, she found that Ida had a small moon measuring around in diameter, which appeared in 47 images. A competition was held among Galileo project members to select a name for the moon, which was ultimately dubbed Dactyl after the legendary Dactyls, mythical beings who lived on Mount Ida, the geographical feature on Crete the asteroid was named for. Craters on Dactyl were named after individual dactyloi. Regions on 243 Ida were named after cities where Johann Palisa, the discoverer of 243 Ida, made his observations, while ridges on 243 Ida were named in honor of deceased Galileo team members. Dactyl was the first asteroid moon to be discovered. Moons of asteroids had been assumed to be rare, but the discovery of Dactyl hinted that they might in fact be quite common. From subsequent analysis of this data, Dactyl appeared to be an S-type asteroid, and spectrally different from 243 Ida, although Ida is also an S-type asteroid. It was hypothesized that both may have been produced by the breakup of a Koronis parent body.

Voyage to Jupiter

Comet Shoemaker–Levy 9

Galileo's prime mission was a two-year study of the Jovian system, but on March 26, 1993, while it was en route, astronomers Carolyn S. Shoemaker, Eugene M. Shoemaker and David H. Levy discovered fragments of a comet orbiting Jupiter, the remains of a comet that had passed within Jupiter's Roche limit and had been torn apart by tidal forces. It was named Comet Shoemaker–Levy 9. Calculations indicated that it would crash into the planet sometime between July 16 and 24, 1994. Although Galileo was still away, Jupiter was 66 pixels wide in its camera, and it was perfectly positioned to observe this event. Terrestrial telescopes had to wait to see the impact sites as they rotated into view, because the impacts would occur on Jupiter's night side. Instead of burning up in Jupiter's atmosphere as expected, the first of the 21 comet fragments struck the planet at around and exploded with a fireball high, easily discernible to Earth-based telescopes even though it was on the night side of the planet. The impacts left a series of dark scars on the planet, some two or three times as large as the Earth, that persisted for weeks. When Galileo observed an impact in ultraviolet light, the fireballs lasted for about ten seconds, but in the infrared they persisted for 90 seconds or more. When a fragment hit the planet, it increased Jupiter's overall brightness by about 20 percent. The NIMS observed one fragment create a fireball in diameter that burned with a temperature of , which was hotter than the surface of the Sun.

Probe deployment

The Galileo probe separated from the orbiter at 03:07 UTC on July 13, 1995, five months before its rendezvous with the planet on December 7. At this point, the spacecraft was from Jupiter, but from Earth, and telemetry from the spacecraft, transmitted at the speed of light, took 37 minutes to reach JPL. A tiny frequency change in the radio signal indicated that the separation had been accomplished.
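The quoted 37-minute light time is simply the Earth-spacecraft distance divided by the speed of light, so it can be turned back into a rough distance as a sanity check; the conversion below is a back-of-envelope sketch, not a figure from the source.

C = 299_792_458.0          # speed of light, m/s
AU = 1.495978707e11        # one astronomical unit, m

def light_minutes_to_au(minutes: float) -> float:
    """One-way signal travel time converted to distance in astronomical units."""
    return minutes * 60.0 * C / AU

# 37 light-minutes works out to roughly 4.4 AU between Galileo and Earth.
print(round(light_minutes_to_au(37.0), 1))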
The Galileo orbiter was still on a collision course with Jupiter. Previously, course corrections had been made using the twelve thrusters, but with the probe on its way, the Galileo orbiter could now fire its Messerschmitt-Bölkow-Blohm main engine which had been covered by the probe until then. At 07:38 UTC on July 27, it was fired for the first time to place the Galileo orbiter on course to enter orbit around Jupiter, whence it would act as a communications relay for the Galileo probe. The Galileo probe's project manager, Marcie Smith at the Ames Research Center, was confident that the LGAs could be used as relays. The burn lasted for five minutes and eight seconds, and changed the velocity of the Galileo orbiter by . Dust storms In August 1995, the Galileo orbiter encountered a severe dust storm from Jupiter that took several months to traverse. Normally the spacecraft's dust detector picked up a dust particle every three days; now it detected up to 20,000 particles a day. Interplanetary dust storms had previously been encountered by the Ulysses probe, which had passed by Jupiter three years before on its mission to study the Sun's polar regions, but those encountered by Galileo were more intense. The dust particles were 5 to 10 nm in size, about the same as those in cigarette smoke, and had speeds ranging from depending on their size. The existence of the dust storms had come as a complete surprise to scientists when Ulysses encountered them. While data from both Ulysses and Galileo hinted that they originated somewhere in the Jovian system, it was a mystery how they had been created and how they had escaped from Jupiter's strong gravitational and electromagnetic fields. Tape recorder anomaly The failure of Galileo high-gain antenna meant that data storage to the tape recorder for later compression and playback was crucial in order to obtain any substantial information from the flybys of Jupiter and its moons. The four-track, 114-megabyte digital tape recorder was manufactured by Odetics Corporation. On October 11, it was stuck in rewind mode for 15 hours before engineers learned what had happened and were able to send commands to shut it off. Although the recorder itself was still in working order, the malfunction had possibly damaged a length of tape at the end of the reel. This section of tape was declared "off limits" to any future data recording, and was covered with 25 more turns of tape to secure the section and reduce any further stresses, which could tear it. Because it happened only weeks before Galileo entered orbit around Jupiter, the anomaly prompted engineers to sacrifice data acquisition of almost all of the Io and Europa observations during the orbit insertion phase in order to focus on recording data sent from the atmospheric probe during its descent. Jupiter Arrival The Galileo orbiter's magnetometers reported that the spacecraft had encountered the bow shock of Jupiter's magnetosphere on November 16, 1995, when it was from Jupiter. The bow shock moved to and fro in response to solar wind gusts, and was therefore crossed multiple times between 16 and 26 November, by which time Galileo was from Jupiter. On December 7, 1995, the orbiter arrived in the Jovian system. That day it made a flyby of Europa at 11:09 UTC, and then an flyby of Io at 15:46 UTC, using Io's gravity to reduce its speed, and thereby conserve propellant for use later in the mission. At 19:54 it made its closest approach to Jupiter. 
The orbiter's electronics had been heavily shielded against radiation, but the radiation surpassed expectations, and nearly exceeded the spacecraft's design limits. One of the navigational systems failed, but the backup took over. Most robotic spacecraft respond to failures by entering safe mode and awaiting further instructions from Earth, but this was not possible for Galileo during the arrival sequence due to the great distance and consequent long turnaround time. Atmospheric probe The descent probe awoke in response to an alarm at 16:00 UTC and began powering up its instruments. It passed through the rings of Jupiter and encountered a previously undiscovered radiation belt ten times as strong as Earth's Van Allen radiation belt above Jupiter's cloud tops. It had been predicted that the probe would pass through three layers of clouds; an upper one consisting of ammonia-ice particles at a pressure of ; a middle one of ammonium hydrosulfide ice particles at a pressure of ; and one of water vapor at . The atmosphere through which the probe descended was much denser and hotter than expected. Jupiter was also found to have only half the amount of helium expected and the data did not support the three-layered cloud structure theory: only one significant cloud layer was measured by the probe, at a pressure of around but with many indications of smaller areas of increased particle densities along the whole length of its trajectory. The descent probe entered Jupiter's atmosphere, defined for the purpose as being above the pressure level, without any braking at 22:04 UTC on December 7, 1995. At this point it was moving at relative to Jupiter. This was by far the most difficult atmospheric entry yet attempted by any spacecraft; the probe had to withstand a peak deceleration of . The rapid flight through the atmosphere produced a plasma with a temperature of about , and the probe's carbon phenolic heat shield lost more than half of its mass, , during the descent. As the probe passed through Jupiter's cloud tops, it started transmitting data to the orbiter, above. The data was not immediately relayed to Earth, but a single bit was transmitted from the orbiter as a notification that the signal from the probe was being received and recorded, which would then take days to be transmitted using the LGA. The atmospheric probe deployed its parachute fifty-three seconds later than anticipated, resulting in a small loss of upper-atmospheric readings. This was attributed to wiring problems with an accelerometer that determined when to begin the parachute deployment sequence. The probe then dropped its heat shield, which fell into Jupiter's interior. The parachute reduced the probe's speed to . The signal from the probe was no longer detected by the orbiter after 61.4 minutes, at an elevation of below the cloud tops and a pressure of . It was believed that the probe continued to fall at terminal velocity, as the temperature increased to and the pressure to , destroying it. The probe detected less lightning, less water, but stronger winds than expected. Scientists had expected to find wind speeds of up to , but winds of up to were detected. The implication was that the winds are not produced by heat generated by sunlight (as Jupiter gets less sunlight than Earth) or the condensation of water vapor (the main causes on Earth), but are due to an internal heat source. 
It was already well known that the atmosphere of Jupiter was mainly composed of hydrogen, but the clouds of ammonia and ammonium sulfide were much thinner than expected, and clouds of water vapor were not detected. This was the first observation of ammonia clouds in another planet's atmosphere. The atmosphere creates ammonia-ice particles from material coming up from lower depths. The atmosphere was more turbulent than expected. Wind speeds in the outermost layers were , in agreement with previous measurements from afar, but those wind speeds increased dramatically at pressure levels of , then remaining consistently high at around . The abundance of nitrogen, carbon and sulfur was three times that of the Sun, raising the possibility that they had been acquired from other bodies in the Solar system, but the low abundance of water cast doubt on theories that Earth's water had been acquired from comets. There was far less lightning activity than expected, only about a tenth of the level of activity on Earth, but this was consistent with the lack of water vapor. More surprising was the high abundance of noble gases (argon, krypton and xenon), with abundances up to three times that found in the Sun. For Jupiter to trap these gases, it would have had to be much colder than today, around , which suggested that either Jupiter had once been much further from the Sun, or that the interstellar debris that the Solar system had formed from was much colder than had been thought. Orbiter With the probe data collected, the Galileo orbiter's next task was to slow down in order to avoid heading off into the outer solar system. A burn sequence commencing at 00:27 UTC on December 8 and lasting 49 minutes reduced the spacecraft's speed by and it entered a parking orbit with an orbital period of 198 days. The Galileo orbiter thus became the first artificial satellite of Jupiter. Most of its initial orbit was occupied transmitting the data from the probe back to Earth. When the orbiter reached its apojove on March 26, 1996, the main engine was fired again to increase the orbit from four times the radius of Jupiter to ten times. By this time the orbiter had received half the radiation allowed for in the mission plan, and the higher orbit was to conserve the instruments for as long as possible by limiting the radiation exposure. The spacecraft traveled around Jupiter in elongated ellipses, each orbit lasting about two months. The differing distances from Jupiter afforded by these orbits allowed Galileo to sample different parts of the planet's extensive magnetosphere. The orbits were designed for close-up flybys of Jupiter's largest moons. A naming scheme was devised for the orbits: a code with the first letter of the moon being encountered on that orbit (or "J" if none was encountered) plus the orbit number. Mission extension After the primary mission concluded on December 7, 1997, most of the mission staff departed, including O'Neil, but about a fifth of them remained. The Galileo orbiter commenced an extended mission known as the Galileo Europa Mission (GEM), which ran until December 31, 1999. This was a low-cost mission, with a budget of $30 million (equivalent to $ million in ). The reason for calling it as the "Europa" mission rather than the "Extended" mission was political; although it was wasteful to scrap a spacecraft that was still functional and capable of performing a continuing mission, Congress took a dim view of requests for more money for projects that had already been fully funded. 
This was avoided through rebranding. The smaller GEM team did not have the resources to deal with problems, but when they arose it was able to temporarily recall former team members for intensive efforts to solve them. The spacecraft performed several flybys of Europa, Callisto and Io. On each one the spacecraft collected only two days' worth of data instead of the seven it had collected during the prime mission. The radiation environment near Io, which Galileo approached to within on November 26, 1999, on orbit I25, was very unhealthy for Galileo systems, and so these flybys were saved for the extended mission when loss of the spacecraft would be more acceptable. By the time GEM ended, most of the spacecraft was operating well beyond its original design specifications, having absorbed more than 600 kilorads in between 1995 and 2002, three times the radiation exposure that it had been built to withstand. Many of the instruments were no longer operating at peak performance, but were still functional, so a second extension, the Galileo Millennium Mission (GMM) was authorized. This was intended to run until March 2001, but it was subsequently extended until January 2003. GMM included return visits to Europa, Io, Ganymede and Callisto, and for the first time to Amalthea. The total cost of the original Galileo mission was about (equivalent to $ billion in ). Of this, (equivalent to $ million in ) was spent on spacecraft development. Another $110 million (equivalent to $ million in ) was contributed by international agencies. Io The innermost of the four Galilean moons, Io is roughly the same size as Earth's moon, with a mean radius of . It is in orbital resonance with Ganymede and Europa, and tidally locked with Jupiter, so just as the Earth's Moon always has the same side facing Earth, Io always has the same side facing Jupiter. It has a faster orbit though, with a rotation period of 1.769 days. As a result, the rotational and tidal forces on Io are 220 times as great as those on Earth's moon. These frictional forces are sufficient to melt rock, creating volcanoes and lava flows. Although only a third of the size of Earth, Io generates twice as much heat. While geological events occur on Earth over periods of thousands or even millions of years, cataclysmic events are common on Io. Visible changes occurred between orbits of Galileo. The colorful surface is a mixture of red, white and yellow sulfur compounds. Galileo flew past Io, but in the interest of protecting the tape recorder, O'Neil decided to forego collecting images. To use the SSI camera meant operating the tape recorder at high speed, with sudden stops and starts, whereas the fields and particles instruments only required the tape recorder to run continuously at slow speeds, and it was believed that it could handle this. This was a crushing blow to scientists, some of whom had waited years for the opportunity. No other Io encounters were scheduled during the prime mission because it was feared that the high radiation levels close to Jupiter would damage the spacecraft. However, valuable information was still obtained; Doppler data used to measure Io's gravitational field revealed that Io had a core of molten iron and iron sulfide. Another opportunity to observe Io arose during the Galileo Europa Mission (GEM), when Galileo flew past Io on orbits I24 and I25, and it would revisit Io during the Galileo Millennium Mission (GMM) on orbits I27, I31, I32 and I33. 
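The orbital resonance and the 1.769-day period mentioned above imply an approximate 1:2:4 chain of periods for Io, Europa and Ganymede. The sketch below is idealized: the real Laplace resonance is defined on the moons' mean motions, and the actual periods deviate from exact doubling by a few percent.
# Idealized 1:2:4 period chain for Io, Europa and Ganymede.
io_period_days = 1.769                     # quoted above
europa_period_days = 2 * io_period_days    # about 3.5 days, i.e. roughly 85 hours
ganymede_period_days = 4 * io_period_days  # about 7.1 days (the actual period is ~7.15 days)
print(f"Europa ~{europa_period_days * 24:.0f} h, Ganymede ~{ganymede_period_days:.1f} d")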
As Galileo approached Io on I24 at 11:09 UTC on October 11, 1999, it entered safe mode. Apparently, high-energy electrons had altered a bit on a memory chip. When it entered safe mode, the spacecraft turned off all non-essential functions. Normally it took seven to ten days to diagnose and recover from a safe mode incident; this time the Galileo Project team at JPL had nineteen hours before the encounter with Io. After a frantic effort, they managed to diagnose a problem that had never been seen before, and restore the spacecraft systems with just two hours to spare. Not all of the planned activities could be carried out, but Galileo obtained a series of high-resolution color images of the Pillan Patera, and Zamama, Prometheus, and Pele volcanic eruption centers. When Galileo next approached Io on I25 at 03:40 UTC on November 26, 1999, JPL were eating their Thanksgiving dinner at the Galileo Mission Control Center when, with the encounter with Io just four hours away, the spacecraft again entered safe mode. This time the problem was traced to a software patch implemented to bring Galileo out of safe mode during I24. Fortunately, the spacecraft had not shut down as much as it had on I24, and the team at JPL were able to bring it back online. During I24 they had done so with two hours to spare; this time, they had just three minutes. Nonetheless, the flyby was successful, with Galileo NIMS and SSI camera capturing an erupting volcano that generated a long plume of lava that was sufficiently large and hot to have also been detected by the NASA Infrared Telescope Facility atop Mauna Kea in Hawaii. While such events were more common and spectacular on Io than on Earth, it was extremely fortuitous to have captured it; planetary scientist Alfred McEwen estimated the odds at 1 in 500. The safe-mode incidents on I24 and I25 left some gaps in the data, which I27 targeted. This time Galileo passed over the surface of Io. At this time, the spacecraft was nearly at the maximum distance from Earth, and there was a solar conjunction, a period when the Sun blocked the line of sight between Earth and Jupiter. As a consequence, three quarters of the observations had to be taken over a period of three hours. NIMS images revealed fourteen active volcanoes in a region thought to contain just four. Images of Loki Patera showed that in the four and half months between I24 and I27, some had been covered in fresh lava. A series of observations of extreme ultraviolet (EUV) had to be cancelled due to yet another safe-mode event. Radiation exposure caused a transient bus reset, a computer hardware error resulting in a safe mode event. A software patch implemented after the Europa encounter on orbit E19 guarded against this when the spacecraft was within 15 Jupiter radii of the planet, but this time it occurred at 29 Jupiter radii. The safe mode event also caused a loss of tape playback time, but the project managers decided to carry over some Io data into orbit G28, and play it back then. This limited the amount of tape space available for that Ganymede encounter, but the Io data was considered to be more valuable. The discovery of Io's iron core raised the possibility that it had a magnetic field. The I24, I25 and I27 encounters had involved passes over Io's equator, which made it difficult to determine whether Io had its own magnetic field or one induced by Jupiter. Accordingly, on orbit I31, Galileo passed within of the surface of the north pole of Io, and on orbit I32 it flew over the south pole. 
After examining the magnetometer results, planetary scientist Margaret G. Kivelson announced that Io had no intrinsic magnetic field, which meant that its molten iron core did not have the same convective properties as that of Earth. On I31 Galileo sped through an area that had been in the plume of the Tvashtar Paterae volcano, and it was hoped that the plume could be sampled. This time, Tvashtar was quiet, but the spacecraft flew through the plume of another, previously unknown, volcano away. What had been assumed to be hot ash from the volcanic eruption turned out to be sulfur dioxide snowflakes, each consisting of 15 to 20 molecules clustered together. Galileo's final return to Io on orbit I33 was marred by another safe mode incident, and much of the hoped-for data was lost. Europa Although the smallest of the four Galilean moons, with a radius of , Europa is the sixth-largest moon in the solar system. Observations from Earth indicated that it was covered in ice. Like Io, Europa is tidally locked with Jupiter. It is in orbital resonance with Io and Ganymede, with its 85-hour orbit being twice that of Io, but half that of Ganymede. Conjunctions with Io always occur on the opposite side of Jupiter to those with Ganymede. Europa is therefore subject to tidal effects. There is no evidence of volcanism like on Io, but Galileo revealed that the surface ice was covered in cracks. Some observations of Europa were made during orbits G1 and G2. On C3, Galileo conducted a "nontargeted" encounter of Europa—meaning a secondary flyby at a distance of up to —on November 6, 1996. During E4 from December 15 to 22, 1996, Galileo flew within of Europa, but data transmission was hindered by a solar occultation that blocked transmission for ten days. Galileo returned to Europa on E6 in January 1997, this time at a height of , to analyze oval-shaped features in the infrared and ultraviolet spectra. Occultations by Europa, Io and Jupiter provided data on their atmospheric profiles, and measurements were made of Europa's gravitational field. On E11 from November 2 to 9, 1997, data was collected on the magnetosphere. Due to the problems with the HGA, only about two percent of the anticipated number of images of Europa were obtained by the primary mission. On the GEM, the first eight orbits (E12 through E19) were all dedicated to Europa, and Galileo paid it a final visit on E26 during the GMM. Images of Europa also showed few impact craters. It seemed unlikely that it had escaped the meteor and comet impacts that scarred Ganymede and Callisto, so this indicated that Europa has an active geology that renews the surface and obliterates craters. Astronomer Clark Chapman argued that, assuming a crater occurs on Europa once every million years, and given that only about twenty have been spotted on Europa, the implication is that the surface must only be about 10 million years old. With more data on hand, in 2003 a team led by Kevin Zahnle at NASA's Ames Research Center arrived at a figure of 30 to 70 million years. Tidal flexing of up to per day was the most likely culprit. But not all scientists were convinced; Michael Carr, a planetologist from the US Geological Survey, argued that, on the contrary, Europa's surface age was closer to a billion years. He compared the craters on Ganymede with those on Earth's moon, and concluded that the satellites of Jupiter were not subject to the same amount of cratering.
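The crater-count argument above can be reduced to a one-line estimate: observed craters divided by an assumed production rate. The sketch below is deliberately crude and uses illustrative round numbers; published figures such as Chapman's roughly 10 million years and Zahnle's 30 to 70 million years also fold in crater sizes, the surveyed area and the large uncertainty in the cratering rate, so the toy result only agrees to within a factor of a few.
# Toy crater-count age: number of craters divided by an assumed production rate.
observed_craters = 20        # roughly the number spotted on Europa at the time
rate_per_myr = 1.0           # assumed: one new crater per million years
age_myr = observed_craters / rate_per_myr
print(f"implied surface age of order {age_myr:.0f} million years")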
Evidence of surface renewal hinted at the possibility of a viscous layer below the surface of warm ice or liquid water. NIMS observations by Galileo indicated that the surface of Europa appeared to contain magnesium- and sodium-based salts. A likely source was brine below the ice crust. Further evidence was provided by the magnetometer, which reported that the magnetic field was induced by Jupiter. This could be explained by the existence of a spherical shell of conductive material like salt water. Since the surface temperature on Europa was , any water breaching the surface ice would instantly freeze over. Heat required to keep water in a liquid state could not come from the Sun, which at that distance had only 4 percent of the intensity it had on Earth, but ice is a good insulator, and the heat could be provided by the tidal flexing. Galileo also yielded evidence that the crust of Europa had slipped over time, moving south on the hemisphere facing Jupiter, and north on the far side. There was acrimonious debate among scientists over the thickness of the ice crust, and those who presented results indicating that it might be thinner than the proposed by the accredited scientists on the Galileo Imaging Team faced intimidation, scorn, and reduced career opportunities. The Galileo Imaging Team was led by Michael J. Belton from the Kitt Peak National Observatory. Scientists who planned imaging sequences had the exclusive right to the initial interpretation of the Galileo data, most which was performed by their research students. The scientific community did not want a repetition of the 1979 Morabito incident, when Linda A. Morabito, an engineer at JPL working on Voyager 1, discovered the first active extraterrestrial volcano on Io. The Imaging Team controlled the manner in which discoveries were presented to the scientific community and the public through press conferences, conference papers and publications. Observations by the Hubble Space Telescope in 1995 reported that Europa had a thin oxygen atmosphere. This was confirmed by Galileo in six experiments on orbits E4 and E6 during occultations when Europa was between Galileo and the Earth. This allowed Canberra and Goldstone to investigate the ionosphere of Europa by measuring the degree to which the radio beam was diffracted by charged particles. This indicated the presence of water ions, which were most likely water molecules that had been dislodged from the surface ice and then ionized by the Sun or the Jovian magnetosphere. The presence of an ionosphere was sufficient to deduce the existence of a thin atmosphere on Europa. On December 11, 2013, NASA reported, based on results from the Galileo mission, the detection of "clay-like minerals" (specifically, phyllosilicates), often associated with organic materials, on the icy crust of Europa. The presence of the minerals may have been the result of a collision with an asteroid or comet. Ganymede The largest of the Galilean moons with a radius of , Ganymede is larger than Earth's moon, the dwarf planet Pluto or the planet Mercury. It is the largest of the moons in the Solar system that are characterized by large amounts of water ice, which also includes Saturn's moon Titan, and Neptune's moon Triton. Ganymede has three times as much water for its mass as Earth has. When Galileo entered Jovian orbit, it did so at an orbital inclination to the Jovian equator, and therefore in the orbital plane of the four Galilean moons. 
To transfer orbit while conserving propellant, two slingshot maneuvers were performed. On G1, the gravity of Ganymede was used to reduce the spacecraft's orbital period from 210 to 72 days to allow for more encounters and to take Galileo out of the more intense regions of radiation. On G2, the gravity assist was employed to put it into a coplanar orbit to permit subsequent encounters with Io, Europa and Callisto. Although the primary purpose of G1 and G2 was navigational, the opportunity to make some observations was not missed. The plasma-wave experiment and the magnetometer detected a magnetic field with a strength of about , more than strong enough to create a separate magnetosphere within that of Jupiter. This was the first time that a magnetic field had ever been detected on a moon contained within the magnetosphere of its host planet. This discovery led naturally to questions about its origin. The evidence pointed to an iron or iron sulfide core and mantle below the surface, encased in ice. Margaret Kivelson, the scientist in charge of the magnetometer experiment, contended that the induced magnetic field required an iron core, and speculated that an electrically conductive layer was required, possibly a brine ocean below the surface. Galileo returned to Ganymede on orbits G7 and G9 in April and May 1997, and on G28 and G29 in May and December 2000 on the GMM. Images of the surface revealed two types of terrain: highly cratered dark regions and grooved terrain (sulcus). Images of the Arbela Sulcus taken on G28 made Ganymede look more like Europa, but tidal flexing could not provide sufficient heat to keep water in liquid form on Ganymede, although it may have made a contribution. One possibility was radioactivity, which might provide sufficient heat for liquid water to exist below the surface. Another possibility was volcanism. Slushy water or ice reaching the surface would quickly freeze over, creating areas of a relatively smooth surface. Callisto Callisto is the outermost of the Galilean moons, and the most pockmarked, indeed the most heavily cratered of any body in the Solar system. So many craters must have taken billions of years to accumulate, which gave scientists the idea that its surface was as much as four billion years old, and provided a record of meteor activity in the Solar system. Galileo visited Callisto on orbits C3, C9 and C10 during the prime mission, and then on C20, C21, C22 and C23 during the GEM. When the cameras observed Callisto close up, there was a puzzling absence of small craters. The surface features appeared to have been eroded, indicating that they had been subject to active geological processes. Galileo's flyby of Callisto on C3 marked the first time that the Deep Space Network operated a link between its antennae in Canberra and Goldstone that allowed them to operate as a gigantic array, thereby enabling a higher bit rate. With the assistance of the antenna at Parkes, this raised the effective bandwidth to as much as 1,000 bits per second. Data accumulated on C3 indicated that Callisto had a homogeneous composition, with heavy and light elements intermixed. Callisto was estimated to be composed of 60 percent silicate, iron and iron sulfide rock and 40 percent water ice. This was overturned by further radio Doppler observations on C9 and C10, which indicated that rock had settled towards the core, and therefore that Callisto indeed has a differentiated internal structure, although not as much so as the other Galilean moons.
Observations made with Galileo's magnetometer indicated that Callisto had no magnetic field of its own, and therefore lacked an iron core like Ganymede's, but that it did have an induced field from Jupiter's magnetosphere. Because ice is too poor a conductor to generate this effect, it pointed to the possibility that Callisto, like Europa and Ganymede, might have a subsurface ocean of brine. Galileo made its closest encounter with Callisto on C30, when it made a pass over the surface, during which it photographed the Asgard, Valhalla and Bran craters. This was used for slingshot maneuvers to set up the final encounters with Io on I31 and I32. Amalthea Although Galileo's main mission was to explore the Galilean moons, it also captured images of four of the inner moons, Thebe, Adrastea, Amalthea, and Metis. Such images were only possible from a spacecraft; to Earth-based telescopes they were merely specks of light. Two years of Jupiter's intense radiation took a toll on the spacecraft's systems, and its fuel supply was running low in the early 2000s. Galileo's cameras were deactivated on January 17, 2002, after they had sustained irreparable radiation damage. NASA engineers were able to recover the damaged tape-recorder electronics, and Galileo continued to return scientific data until it was deorbited in 2003, performing one last scientific experiment: a measurement of Amalthea's mass as the spacecraft swung by it. This was tricky to arrange; to be useful, Galileo had to fly within of Amalthea, but not so close as to crash into it. This was complicated by Amalthea's irregular potato-like shape. It was tidally locked, pointing its long axis towards Jupiter. A successful flyby meant knowing which direction the moon was pointing in relation to Galileo at all times. Galileo flew by Amalthea on November 5, 2002, during its 34th orbit, allowing a measurement of the moon's mass as it passed within of its surface. The results startled the scientific team; they revealed that Amalthea had a mass of , and with a volume of , it therefore had a density of 857 ± 99 kilograms per cubic meter, less than that of water. A final discovery occurred during the last two orbits of the mission. When the spacecraft passed the orbit of Amalthea, the star scanner detected unexpected flashes of light that were reflections from seven to nine moonlets. None of the individual moonlets was reliably sighted twice, so no orbits were determined. It is believed that they were most likely debris ejected from Amalthea that formed a tenuous, and perhaps temporary, ring around Jupiter. Star scanner Galileo's star scanner was a small optical telescope that provided an absolute attitude reference, but it made several scientific discoveries serendipitously. In the prime mission, it was found that the star scanner was able to detect high-energy particles as a noise signal. This data was eventually calibrated to show that the particles were predominantly > electrons trapped in the Jovian magnetic belts, and the calibrated data set was released to the Planetary Data System. A second discovery occurred in 2000. The star scanner was observing a set of stars that included the second magnitude star Delta Velorum. At one point, this star dimmed for 8 hours below the star scanner's detection threshold. Subsequent analysis of Galileo data and work by amateur and professional astronomers showed that Delta Velorum is the brightest known eclipsing binary, brighter at maximum than Algol. It has a primary period of 45 days and the dimming is just visible with the naked eye.
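As a quick check of the Amalthea density quoted above, density is simply mass divided by volume. The mass and volume values in the sketch below (roughly 2.08 × 10^18 kg and 2.43 × 10^6 km³) are not given in the text and are included only for illustration; they reproduce the reported figure of about 857 kilograms per cubic meter.
# Back-of-the-envelope density for Amalthea from illustrative mass and volume values.
mass_kg = 2.08e18                  # illustrative value, not quoted in the text
volume_km3 = 2.43e6                # illustrative value, not quoted in the text
density = mass_kg / (volume_km3 * 1e9)   # 1 km^3 = 1e9 m^3
print(f"~{density:.0f} kg/m^3 (liquid water is about 1000 kg/m^3)")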
Radiation-related anomalies Jupiter's uniquely harsh radiation environment caused over 20 anomalies over the course of Galileo mission, in addition to the incidents expanded upon below. Despite having exceeded its radiation design limit by at least a factor of three, the spacecraft survived all these anomalies. Work-arounds were found eventually for all of these problems, and Galileo was never rendered entirely non-functional by Jupiter's radiation. The radiation limits for Galileo computers were based on data returned from Pioneer 10 and Pioneer 11, since much of the design work was underway before the two Voyagers arrived at Jupiter in 1979. A typical effect of the radiation was that several of the science instruments suffered increased noise while within about of Jupiter. The SSI camera began producing totally white images when the spacecraft was hit by the exceptional Bastille Day coronal mass ejection in 2000, and did so again on subsequent close approaches to Jupiter. The quartz crystal used as the frequency reference for the radio suffered permanent frequency shifts with each Jupiter approach. A spin detector failed, and the spacecraft gyro output was biased by the radiation environment. The most severe effects of the radiation were current leakages somewhere in the spacecraft's power bus, most likely across brushes at a spin bearing connecting rotor and stator sections of the orbiter. These current leakages triggered a reset of the onboard computer and caused it to go into safe mode. The resets occurred when the spacecraft was either close to Jupiter or in the region of space magnetically downstream of Jupiter. A change to the software was made in April 1999 that allowed the onboard computer to detect these resets and autonomously recover, so as to avoid safe mode. Tape recorder problems Routine maintenance of the tape recorder involved winding the tape halfway down its length and back again to prevent it sticking. In November 2002, after the completion of the mission's only encounter with Jupiter's moon Amalthea, problems with playback of the tape recorder again plagued Galileo. About 10 minutes after the closest approach of the Amalthea flyby, Galileo stopped collecting data, shut down all of its instruments, and went into safe mode, apparently as a result of exposure to Jupiter's intense radiation environment. Though most of the Amalthea data was already written to tape, it was found that the recorder refused to respond to commands telling it to play back data. After weeks of troubleshooting of an identical flight spare of the recorder on the ground, it was determined that the cause of the malfunction was a reduction of light output in three infrared Optek OP133 light-emitting diodes (LEDs) located in the drive electronics of the recorder's motor encoder wheel. The gallium arsenide LEDs had been particularly sensitive to proton-irradiation-induced atomic lattice displacement defects, which greatly decreased their effective light output and caused the drive motor's electronics to falsely believe the motor encoder wheel was incorrectly positioned. Galileo flight team then began a series of "annealing" sessions, where current was passed through the LEDs for hours at a time to heat them to a point where some of the crystalline lattice defects would be shifted back into place, thus increasing the LED's light output. After about 100 hours of annealing and playback cycles, the recorder was able to operate for up to an hour at a time. 
After many subsequent playback and cooling cycles, the complete transmission back to Earth of all recorded Amalthea flyby data was successful. End of mission and deorbit When the exploration of Mars was being considered in the early 1960s, Carl Sagan and Sidney Coleman produced a paper concerning contamination of the red planet. In order that scientists could determine whether native life forms existed before the planet became contaminated by micro-organisms from Earth, they proposed that space missions should aim at a 99.9 percent chance that contamination should not occur. This figure was adopted by the Committee on Space Research (COSPAR) of the International Council of Scientific Unions in 1964, and was subsequently applied to all planetary probes. The danger was highlighted in 1969 when the Apollo 12 astronauts returned components of the Surveyor 3 spacecraft that had landed on the Moon three years before, and it was found that microbes were still viable even after three years in that harsh climate. An alternative was the Prime Directive, a philosophy of non-interference with alien life forms enunciated by the original Star Trek television series that prioritized the interests of the life forms over those of scientists. Given the (admittedly slim) prospect of life on Europa, scientists Richard Greenberg and Randall Tufts proposed that a new standard be set of no greater chance of contamination than that which might occur naturally by meteorites. Galileo had not been sterilized prior to launch and could conceivably have carried bacteria from Earth. Therefore, a plan was formulated to send the probe directly into Jupiter, in an intentional crash to eliminate the possibility of an impact with Jupiter's moons, particularly Europa, and prevent a forward contamination. On April 14, 2003, the Galileo orbiter reached its greatest orbital distance from Jupiter for the entire mission since orbital insertion, , before plunging back towards the gas giant for its final impact. At the completion of J35, its final orbit around the Jovian system, Galileo struck Jupiter in darkness just south of the equator on September 21, 2003, at 18:57 UTC. Its impact speed was approximately . Major findings The composition of Jupiter differs from that of the Sun, indicating that Jupiter has evolved since the formation of the Solar System. Galileo made the first observation of ammonia clouds in another planet's atmosphere. The atmosphere creates ammonia ice particles from material coming up from lower depths. Io was confirmed to have extensive volcanic activity that is 100 times greater than that found on Earth. The heat and frequency of eruptions are reminiscent of early Earth. Complex plasma interactions in Io's atmosphere create immense electrical currents which couple to Jupiter's atmosphere. Several lines of evidence from Galileo support the theory that liquid oceans exist under Europa's icy surface. Ganymede possesses its own, substantial magnetic field – the first satellite known to have one. Galileo magnetic data provided evidence that Europa, Ganymede and Callisto have a liquid salt water layer under the visible surface. Evidence exists that Europa, Ganymede, and Callisto all have a thin atmospheric layer known as a "surface-bound exosphere". Jupiter's ring system is formed by dust kicked up as interplanetary meteoroids smash into the planet's four small inner moons. The outermost ring is actually two rings, one embedded with the other. There is probably a separate ring along Amalthea's orbit as well. 
The Galileo spacecraft identified the global structure and dynamics of a giant planet's magnetosphere. Follow-on missions There was a spare Galileo spacecraft that was considered by the NASA–ESA Outer Planets Study Team in 1983 for a mission to Saturn, but it was passed over in favor of a newer design, which became Cassini–Huygens. While Galileo was operating, Ulysses passed by Jupiter in 1992 on its mission to study the Sun's polar regions, and Cassini–Huygens coasted by the planet in 2000 and 2001 en route to Saturn. New Horizons passed close by Jupiter in 2007 for a gravity assist en route to Pluto, and it too collected data on the planet. Juno The next mission to orbit Jupiter was NASA's Juno spacecraft, which was launched on August 5, 2011, and entered Jovian orbit on July 4, 2016. Although intended for a two-year mission, it is still active in 2024 and expected to continue until September 2025. Juno provided the first views of Jupiter's north pole and new insights into Jupiter's aurorae, magnetic field, and atmosphere. Information gathered about Jovian lightning prompted revision of earlier theories, and analysis of the frequency of interplanetary dust impacts (primarily on the backs of the solar panels), as Juno passed between Earth and the asteroid belt, indicated that this dust comes from Mars, rather than from comets or asteroids, as was previously thought. Jupiter Icy Moons Explorer The European Space Agency is planning to return to the Jovian system with the Jupiter Icy Moons Explorer (JUICE). This was launched from Europe's Spaceport in French Guiana on April 14, 2023, and is expected to reach Jupiter in July 2031. Europa Clipper Even before Galileo concluded, NASA considered the Europa Orbiter, but it was canceled in 2002. A lower-cost version was then studied, which led to Europa Clipper being approved in 2015. This mission launched from Kennedy Space Center on October 14, 2024 and is expected to reach Jupiter in April 2030. Europa Lander A lander, simply called Europa Lander was assessed by the Jet Propulsion Laboratory. , this mission remains a concept, although some funds were released for instrument development and maturation.
Technology
Unmanned spacecraft
null
13088
https://en.wikipedia.org/wiki/Granite
Granite
Granite ( ) is a coarse-grained (phaneritic) intrusive igneous rock composed mostly of quartz, alkali feldspar, and plagioclase. It forms from magma with a high content of silica and alkali metal oxides that slowly cools and solidifies underground. It is common in the continental crust of Earth, where it is found in igneous intrusions. These range in size from dikes only a few centimeters across to batholiths exposed over hundreds of square kilometers. Granite is typical of a larger family of granitic rocks, or granitoids, that are composed mostly of coarse-grained quartz and feldspars in varying proportions. These rocks are classified by the relative percentages of quartz, alkali feldspar, and plagioclase (the QAPF classification), with true granite representing granitic rocks rich in quartz and alkali feldspar. Most granitic rocks also contain mica or amphibole minerals, though a few (known as leucogranites) contain almost no dark minerals. Granite is nearly always massive (lacking any internal structures), hard (falling between 6 and 7 on the Mohs hardness scale), and tough. These properties have made granite a widespread construction stone throughout human history. Description The word "granite" comes from the Latin granum, a grain, in reference to the coarse-grained structure of such a completely crystalline rock. Granites can be predominantly white, pink, or gray in color, depending on their mineralogy. Granitic rocks mainly consist of feldspar, quartz, mica, and amphibole minerals, which form an interlocking, somewhat equigranular matrix of feldspar and quartz with scattered darker biotite mica and amphibole (often hornblende) peppering the lighter color minerals. Occasionally some individual crystals (phenocrysts) are larger than the groundmass, in which case the texture is known as porphyritic. A granitic rock with a porphyritic texture is known as a granite porphyry. Granitoid is a general, descriptive field term for lighter-colored, coarse-grained igneous rocks. Petrographic examination is required for identification of specific types of granitoids. The alkali feldspar in granites is typically orthoclase or microcline and is often perthitic. The plagioclase is typically sodium-rich oligoclase. Phenocrysts are usually alkali feldspar. Granitic rocks are classified according to the QAPF diagram for coarse grained plutonic rocks and are named according to the percentage of quartz, alkali feldspar (orthoclase, sanidine, or microcline) and plagioclase feldspar on the A-Q-P half of the diagram. True granite (according to modern petrologic convention) contains between 20% and 60% quartz by volume, with 35% to 90% of the total feldspar consisting of alkali feldspar. Granitic rocks poorer in quartz are classified as syenites or monzonites, while granitic rocks dominated by plagioclase are classified as granodiorites or tonalites. Granitic rocks with over 90% alkali feldspar are classified as alkali feldspar granites. Granitic rock with more than 60% quartz, which is uncommon, is classified simply as quartz-rich granitoid or, if composed almost entirely of quartz, as quartzolite. True granites are further classified by the percentage of their total feldspar that is alkali feldspar. Granites whose feldspar is 65% to 90% alkali feldspar are syenogranites, while the feldspar in monzogranite is 35% to 65% alkali feldspar. A granite containing both muscovite and biotite micas is called a binary or two-mica granite. 
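The modal thresholds described above lend themselves to a simple decision procedure. The following Python sketch only encodes the quartz and alkali-feldspar cut-offs quoted in this section, ignores the feldspathoid half of the QAPF diagram, and uses an invented function name; it is an illustration, not a petrological tool.
# Crude QAPF-style name from modal quartz, alkali feldspar and plagioclase percentages.
def classify_granitoid(quartz, alkali_feldspar, plagioclase):
    total = quartz + alkali_feldspar + plagioclase
    q = 100 * quartz / total                                   # quartz share of Q+A+P
    afs = 100 * alkali_feldspar / (alkali_feldspar + plagioclase)  # alkali share of total feldspar
    if q > 60:
        return "quartz-rich granitoid (quartzolite if almost entirely quartz)"
    if q < 20:
        return "syenite or monzonite field (too little quartz for granite)"
    if afs > 90:
        return "alkali feldspar granite"
    if afs >= 65:
        return "syenogranite"
    if afs >= 35:
        return "monzogranite"
    return "granodiorite or tonalite (plagioclase-dominated)"

print(classify_granitoid(quartz=30, alkali_feldspar=50, plagioclase=20))   # syenogranite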
Two-mica granites are typically high in potassium and low in plagioclase, and are usually S-type granites or A-type granites, as described below. Another aspect of granite classification is the ratios of metals that potentially form feldspars. Most granites have a composition such that almost all their aluminum and alkali metals (sodium and potassium) are combined as feldspar. This is the case when K2O + Na2O + CaO > Al2O3 > K2O + Na2O. Such granites are described as normal or metaluminous. Granites in which there is not enough aluminum to combine with all the alkali oxides as feldspar (Al2O3 < K2O + Na2O) are described as peralkaline, and they contain unusual sodium amphiboles such as riebeckite. Granites in which there is an excess of aluminum beyond what can be taken up in feldspars (Al2O3 > CaO + K2O + Na2O) are described as peraluminous, and they contain aluminum-rich minerals such as muscovite. Physical properties The average density of granite is between , its compressive strength usually lies above 200 MPa (29,000 psi), and its viscosity near STP is 3–6·1020 Pa·s. The melting temperature of dry granite at ambient pressure is ; it is strongly reduced in the presence of water, down to 650 °C at a few hundred megapascals of pressure. Granite has poor primary permeability overall, but strong secondary permeability through cracks and fractures if they are present. Chemical composition A worldwide average of the chemical composition of granite, by mass percent, based on 2485 analyses: The medium-grained equivalent of granite is microgranite. The extrusive igneous rock equivalent of granite is rhyolite. Occurrence Granitic rock is widely distributed throughout the continental crust. Much of it was intruded during the Precambrian age; it is the most abundant basement rock that underlies the relatively thin sedimentary veneer of the continents. Outcrops of granite tend to form tors, domes or bornhardts, and rounded massifs. Granites sometimes occur in circular depressions surrounded by a range of hills, formed by the metamorphic aureole or hornfels. Granite often occurs as relatively small, less than 100 km2 stock masses (stocks) and in batholiths that are often associated with orogenic mountain ranges. Small dikes of granitic composition called aplites are often associated with the margins of granitic intrusions. In some locations, very coarse-grained pegmatite masses occur with granite. Origin Granite forms from silica-rich (felsic) magmas. Felsic magmas are thought to form by addition of heat or water vapor to rock of the lower crust, rather than by decompression of mantle rock, as is the case with basaltic magmas. It has also been suggested that some granites found at convergent boundaries between tectonic plates, where oceanic crust subducts below continental crust, were formed from sediments subducted with the oceanic plate. The melted sediments would have produced magma intermediate in its silica content, which became further enriched in silica as it rose through the overlying crust. Early fractional crystallisation serves to reduce a melt in magnesium and chromium, and enrich the melt in iron, sodium, potassium, aluminum, and silicon. Further fractionation reduces the content of iron, calcium, and titanium. This is reflected in the high content of alkali feldspar and quartz in granite. The presence of granitic rock in island arcs shows that fractional crystallization alone can convert a basaltic magma to a granitic magma, but the quantities produced are small. 
For example, granitic rock makes up just 4% of the exposures in the South Sandwich Islands. In continental arc settings, granitic rocks are the most common plutonic rocks, and batholiths composed of these rock types extend the entire length of the arc. There are no indication of magma chambers where basaltic magmas differentiate into granites, or of cumulates produced by mafic crystals settling out of the magma. Other processes must produce these great volumes of felsic magma. One such process is injection of basaltic magma into the lower crust, followed by differentiation, which leaves any cumulates in the mantle. Another is heating of the lower crust by underplating basaltic magma, which produces felsic magma directly from crustal rock. The two processes produce different kinds of granites, which may be reflected in the division between S-type (produced by underplating) and I-type (produced by injection and differentiation) granites, discussed below. Alphabet classification system The composition and origin of any magma that differentiates into granite leave certain petrological evidence as to what the granite's parental rock was. The final texture and composition of a granite are generally distinctive as to its parental rock. For instance, a granite that is derived from partial melting of metasedimentary rocks may have more alkali feldspar, whereas a granite derived from partial melting of metaigneous rocks may be richer in plagioclase. It is on this basis that the modern "alphabet" classification schemes are based. The letter-based Chappell & White classification system was proposed initially to divide granites into I-type (igneous source) granite and S-type (sedimentary sources). Both types are produced by partial melting of crustal rocks, either metaigneous rocks or metasedimentary rocks. I-type granites are characterized by a high content of sodium and calcium, and by a strontium isotope ratio, 87Sr/86Sr, of less than 0.708. 87Sr is produced by radioactive decay of 87Rb, and since rubidium is concentrated in the crust relative to the mantle, a low ratio suggests origin in the mantle. The elevated sodium and calcium favor crystallization of hornblende rather than biotite. I-type granites are known for their porphyry copper deposits. I-type granites are orogenic (associated with mountain building) and usually metaluminous. S-type granites are sodium-poor and aluminum-rich. As a result, they contain micas such as biotite and muscovite instead of hornblende. Their strontium isotope ratio is typically greater than 0.708, suggesting a crustal origin. They also commonly contain xenoliths of metamorphosed sedimentary rock, and host tin ores. Their magmas are water-rich, and they readily solidify as the water outgasses from the magma at lower pressure, so they less commonly make it to the surface than magmas of I-type granites, which are thus more common as volcanic rock (rhyolite). They are also orogenic but range from metaluminous to strongly peraluminous. Although both I- and S-type granites are orogenic, I-type granites are more common close to the convergent boundary than S-type. This is attributed to thicker crust further from the boundary, which results in more crustal melting. 
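The strontium-isotope criterion described above can be stated as a simple rule of thumb. The sketch below uses only the 0.708 cut-off quoted in the text; real I-type versus S-type assignments also weigh mineralogy, alumina saturation and field relationships, so this is an illustration rather than a classification scheme.
# Rough I-type vs S-type hint from the initial 87Sr/86Sr ratio.
def sr_isotope_hint(initial_sr87_sr86):
    if initial_sr87_sr86 < 0.708:
        return "I-type-like: low ratio, consistent with a metaigneous, mantle-influenced source"
    return "S-type-like: high ratio, consistent with a metasedimentary crustal source"

print(sr_isotope_hint(0.704))
print(sr_isotope_hint(0.712))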
A-type granites show a peculiar mineralogy and geochemistry, with particularly high silicon and potassium at the expense of calcium and magnesium, and a high content of high field strength cations (cations with a small radius and high electrical charge, such as zirconium, niobium, tantalum, and rare earth elements). They are not orogenic, forming instead over hot spots and zones of continental rifting, and are metaluminous to mildly peralkaline and iron-rich. These granites are produced by partial melting of refractory lithologies such as granulites in the lower continental crust at high thermal gradients. This leads to significant extraction of hydrous felsic melts from granulite-facies restites. A-type granites occur in the Koettlitz Glacier Alkaline Province in the Royal Society Range, Antarctica. The rhyolites of the Yellowstone Caldera are examples of volcanic equivalents of A-type granite. M-type granite was later proposed to cover those granites that were clearly sourced from crystallized mafic magmas, generally sourced from the mantle. Although the fractional crystallisation of basaltic melts can yield small amounts of granites, which are sometimes found in island arcs, such granites must occur together with large amounts of basaltic rocks. H-type granites were suggested for hybrid granites, which were hypothesized to form by mixing between mafic and felsic magmas from different sources, such as M-type and S-type. However, the large difference in rheology between mafic and felsic magmas makes this process problematic in nature. Granitization Granitization is an old, and largely discounted, hypothesis that granite is formed in place through extreme metasomatism. The idea behind granitization was that fluids would supposedly bring in elements such as potassium, and remove others, such as calcium, to transform a metamorphic rock into granite. This was supposed to occur across a migrating front. However, experimental work had established by the 1960s that granites were of igneous origin. The mineralogical and chemical features of granite can be explained only by crystal-liquid phase relations, showing that there must have been at least enough melting to mobilize the magma. However, at sufficiently deep crustal levels, the distinction between metamorphism and crustal melting itself becomes vague. Conditions for crystallization of liquid magma are close enough to those of high-grade metamorphism that the rocks often bear a close resemblance. Under these conditions, granitic melts can be produced in place through the partial melting of metamorphic rocks by extracting melt-mobile elements such as potassium and silicon into the melts but leaving others such as calcium and iron in granulite residues. This may be the origin of migmatites. A migmatite consists of dark, refractory rock (the melanosome) that is permeated by sheets and channels of light granitic rock (the leucosome). The leucosome is interpreted as partial melt of a parent rock that has begun to separate from the remaining solid residue (the melanosome). If enough partial melt is produced, it will separate from the source rock, become more highly evolved through fractional crystallization during its ascent toward the surface, and become the magmatic parent of granitic rock. The residue of the source rock becomes a granulite. The partial melting of solid rocks requires high temperatures and the addition of water or other volatiles which lower the solidus temperature (temperature at which partial melting commences) of these rocks.
It was long debated whether crustal thickening in orogens (mountain belts along convergent boundaries) was sufficient to produce granite melts by radiogenic heating, but recent work suggests that this is not a viable mechanism. In-situ granitization requires heating by the asthenospheric mantle or by underplating with mantle-derived magmas. Ascent and emplacement Granite magmas have a density of 2.4 Mg/m3, much less than the 2.8 Mg/m3 of high-grade metamorphic rock. This gives them tremendous buoyancy, so that ascent of the magma is inevitable once enough magma has accumulated. However, the question of precisely how such large quantities of magma are able to shove aside country rock to make room for themselves (the room problem) is still a matter of research. Two main mechanisms are thought to be important: Stokes diapir Fracture propagation Of these two mechanisms, Stokes diapirism has been favoured for many years in the absence of a reasonable alternative. The basic idea is that magma will rise through the crust as a single mass through buoyancy. As it rises, it heats the wall rocks, causing them to behave as a power-law fluid and thus flow around the intrusion allowing it to pass without major heat loss. This is entirely feasible in the warm, ductile lower crust where rocks are easily deformed, but runs into problems in the upper crust which is far colder and more brittle. Rocks there do not deform so easily: for magma to rise as a diapir it would expend far too much energy in heating wall rocks, thus cooling and solidifying before reaching higher levels within the crust. Fracture propagation is the mechanism preferred by many geologists as it largely eliminates the major problems of moving a huge mass of magma through cold brittle crust. Magma rises instead in small channels along self-propagating dykes which form along new or pre-existing fracture or fault systems and networks of active shear zones. As these narrow conduits open, the first magma to enter solidifies and provides a form of insulation for later magma. These mechanisms can operate in tandem. For example, diapirs may continue to rise through the brittle upper crust through stoping, where the granite cracks the roof rocks, removing blocks of the overlying crust which then sink to the bottom of the diapir while the magma rises to take their place. This can occur as piecemeal stopping (stoping of small blocks of chamber roof), as cauldron subsidence (collapse of large blocks of chamber roof), or as roof foundering (complete collapse of the roof of a shallow magma chamber accompanied by a caldera eruption.) There is evidence for cauldron subsidence at the Mt. Ascutney intrusion in eastern Vermont. Evidence for piecemeal stoping is found in intrusions that are rimmed with igneous breccia containing fragments of country rock. Assimilation is another mechanism of ascent, where the granite melts its way up into the crust and removes overlying material in this way. This is limited by the amount of thermal energy available, which must be replenished by crystallization of higher-melting minerals in the magma. Thus, the magma is melting crustal rock at its roof while simultaneously crystallizing at its base. This results in steady contamination with crustal material as the magma rises. This may not be evident in the major and minor element chemistry, since the minerals most likely to crystallize at the base of the chamber are the same ones that would crystallize anyway, but crustal assimilation is detectable in isotope ratios. 
Heat loss to the country rock means that ascent by assimilation is limited to a distance similar to the height of the magma chamber. Weathering Physical weathering occurs on a large scale in the form of exfoliation joints, which are the result of granite's expanding and fracturing as pressure is relieved when overlying material is removed by erosion or other processes. Chemical weathering of granite occurs when dilute carbonic acid, and other acids present in rain and soil waters, alter feldspar in a process called hydrolysis. In this reaction, potassium feldspar forms kaolinite, with potassium ions, bicarbonate, and silica in solution as byproducts (a generalised form of the reaction is sketched below). An end product of granite weathering is grus, which is often made up of coarse-grained fragments of disintegrated granite. Climatic variations also influence the weathering rate of granites. For about two thousand years, the relief engravings on Cleopatra's Needle obelisk had survived the arid conditions of its origin before its transfer to London. Within two hundred years, the red granite has drastically deteriorated in the damp and polluted air there. Soil development on granite reflects the rock's high quartz content and dearth of available bases, with the base-poor status predisposing the soil to acidification and podzolization in cool humid climates, as the weather-resistant quartz yields much sand. Feldspars also weather slowly in cool climes, allowing sand to dominate the fine-earth fraction. In warm humid regions, the weathering of feldspar as described above is accelerated so as to allow a much higher proportion of clay, with the Cecil soil series being a prime example of the consequent Ultisol great soil group. Natural radiation Granite is a natural source of radiation, like most natural stones. Potassium-40 is a radioactive isotope of weak emission, and a constituent of alkali feldspar, which in turn is a common component of granitic rocks, more abundant in alkali feldspar granite and syenites. Some granites contain around 10 to 20 parts per million (ppm) of uranium. By contrast, more mafic rocks, such as tonalite, gabbro and diorite, have 1 to 5 ppm uranium, and limestones and sedimentary rocks usually have equally low amounts. Many large granite plutons are sources for palaeochannel-hosted or roll front uranium ore deposits, where the uranium washes into the sediments from the granite uplands and associated, often highly radioactive pegmatites. Cellars and basements built into soils over granite can become a trap for radon gas, which is formed by the decay of uranium. Radon gas poses significant health concerns and is the second-leading cause of lung cancer in the US, after smoking. Thorium occurs in all granites. Conway granite has been noted for its relatively high thorium concentration of 56±6 ppm. There is some concern that some granite sold as countertops or building material may be hazardous to health. Dan Steck of St. John's University has stated that approximately 5% of all granite is of concern, with the caveat that only a tiny percentage of the tens of thousands of granite slab types have been tested. Resources from national geological survey organizations are accessible online to assist in assessing the risk factors in granite country and design rules relating, in particular, to preventing accumulation of radon gas in enclosed basements and dwellings.
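The feldspar hydrolysis referred to in the weathering discussion above is commonly written in the following generalised balanced form; this is a standard textbook version, not the exact equation from the original article.

```latex
% potassium feldspar + carbonic acid + water -> kaolinite + silicic acid + dissolved ions
2\,\mathrm{KAlSi_3O_8} + 2\,\mathrm{H_2CO_3} + 9\,\mathrm{H_2O}
  \longrightarrow \mathrm{Al_2Si_2O_5(OH)_4} + 4\,\mathrm{H_4SiO_4} + 2\,\mathrm{K^+} + 2\,\mathrm{HCO_3^-}
```

Potassium, aluminium, silicon, carbon, hydrogen, oxygen, and charge all balance on both sides, with the potassium and bicarbonate ions and silicic acid carried away in solution, as described above.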
A study of granite countertops was done (initiated and paid for by the Marble Institute of America) in November 2008 by National Health and Engineering Inc. of USA. In this test, all of the 39 full-size granite slabs that were measured for the study showed radiation levels well below the European Union safety standards (section 4.1.1.1 of the National Health and Engineering study) and radon emission levels well below the average outdoor radon concentrations in the US. Industry and uses Granite and related marble industries are considered one of the oldest industries in the world, existing as far back as Ancient Egypt. Major modern exporters of granite include China, India, Italy, Brazil, Canada, Germany, Sweden, Spain and the United States. Antiquity The Red Pyramid of Egypt (), named for the light crimson hue of its exposed limestone surfaces, is the third largest of Egyptian pyramids. Pyramid of Menkaure, likely dating 2510 BC, was constructed of limestone and granite blocks. The Great Pyramid of Giza (c. 2580 BC) contains a granite sarcophagus fashioned of "Red Aswan Granite". The mostly ruined Black Pyramid dating from the reign of Amenemhat III once had a polished granite pyramidion or capstone, which is now on display in the main hall of the Egyptian Museum in Cairo (see Dahshur). Other uses in Ancient Egypt include columns, door lintels, sills, jambs, and wall and floor veneer. How the Egyptians worked the solid granite is still a matter of debate. Tool marks described by the Egyptologist Anna Serotta indicate the use of flint tools on finer work with harder stones, e.g. when producing the hieroglyphic inscriptions. Patrick Hunt has postulated that the Egyptians used emery, which has greater hardness. The Seokguram Grotto in Korea is a Buddhist shrine and part of the Bulguksa temple complex. Completed in 774 AD, it is an artificial grotto constructed entirely of granite. The main Buddha of the grotto is a highly regarded piece of Buddhist art, and along with the temple complex to which it belongs, Seokguram was added to the UNESCO World Heritage List in 1995. Rajaraja Chola I of the Chola Dynasty in South India built the world's first temple entirely of granite in the 11th century AD in Tanjore, India. The Brihadeeswarar Temple dedicated to Lord Shiva was built in 1010. The massive Gopuram (ornate, upper section of shrine) is believed to have a mass of around 81 tonnes. It was the tallest temple in south India. Imperial Roman granite was quarried mainly in Egypt, and also in Turkey, and on the islands of Elba and Giglio. Granite became "an integral part of the Roman language of monumental architecture". The quarrying ceased around the third century AD. Beginning in Late Antiquity the granite was reused, which since at least the early 16th century became known as spolia. Through the process of case-hardening, granite becomes harder with age. The technology required to make tempered metal chisels was largely forgotten during the Middle Ages. As a result, Medieval stoneworkers were forced to use saws or emery to shorten ancient columns or hack them into discs. Giorgio Vasari noted in the 16th century that granite in quarries was "far softer and easier to work than after it has lain exposed" while ancient columns, because of their "hardness and solidity have nothing to fear from fire or sword, and time itself, that drives everything to ruin, not only has not destroyed them but has not even altered their colour." 
Modern Sculpture and memorials In some areas, granite is used for gravestones and memorials. Granite is a hard stone and requires skill to carve by hand. Until the early 18th century, in the Western world, granite could be carved only by hand tools with generally poor results. A key breakthrough was the invention of steam-powered cutting and dressing tools by Alexander MacDonald of Aberdeen, inspired by seeing ancient Egyptian granite carvings. In 1832, the first polished tombstone of Aberdeen granite to be erected in an English cemetery was installed at Kensal Green Cemetery. It caused a sensation in the London monumental trade and for some years all polished granite ordered came from MacDonald's. As a result of the work of sculptor William Leslie, and later Sidney Field, granite memorials became a major status symbol in Victorian Britain. The royal sarcophagus at Frogmore was probably the pinnacle of its work, and at 30 tons one of the largest. It was not until the 1880s that rival machinery and works could compete with the MacDonald works. Modern methods of carving include using computer-controlled rotary bits and sandblasting over a rubber stencil. Leaving the letters, numbers, and emblems exposed and the remainder of the stone covered with rubber, the blaster can create virtually any kind of artwork or epitaph. The stone known as "black granite" is usually gabbro, which has a completely different chemical composition. Buildings Granite has been extensively used as a dimension stone and as flooring tiles in public and commercial buildings and monuments. Aberdeen in Scotland, which is constructed principally from local granite, is known as "The Granite City". Because of its abundance in New England, granite was commonly used to build foundations for homes there. The Granite Railway, America's first railroad, was built to haul granite from the quarries in Quincy, Massachusetts, to the Neponset River in the 1820s. Engineering Engineers have traditionally used polished granite surface plates to establish a plane of reference, since they are relatively impervious, inflexible, and maintain good dimensional stability. Sandblasted concrete with a heavy aggregate content has an appearance similar to rough granite, and is often used as a substitute when use of real granite is impractical. Granite tables are used extensively as bases or even as the entire structural body of optical instruments, CMMs, and very high precision CNC machines because of granite's rigidity, high dimensional stability, and excellent vibration characteristics. A most unusual use of granite was as the material of the tracks of the Haytor Granite Tramway, Devon, England, in 1820. Granite block is usually processed into slabs, which can be cut and shaped by a cutting center. In military engineering, Finland planted granite boulders along its Mannerheim Line to block invasion by Russian tanks in the Winter War of 1939–40. Paving Granite is used as a pavement material. This is because it is extremely durable, permeable and requires little maintenance. For example, in Sydney, Australia black granite stone is used for the paving and kerbs throughout the Central Business District. Curling stones Curling stones are traditionally fashioned of Ailsa Craig granite. The first stones were made in the 1750s, the original source being Ailsa Craig in Scotland. Because of the rarity of this granite, the best stones can cost as much as US$1,500. Between 60 and 70 percent of the stones used today are made from Ailsa Craig granite. 
Although the island is now a wildlife reserve, it is still quarried under license for Ailsa granite by Kays of Scotland for curling stones. Rock climbing Granite is one of the rocks most prized by climbers, for its steepness, soundness, crack systems, and friction. Well-known venues for granite climbing include the Yosemite Valley, the Bugaboos, the Mont Blanc massif (and peaks such as the Aiguille du Dru, the Mourne Mountains, the Adamello-Presanella Alps, the Aiguille du Midi and the Grandes Jorasses), the Bregaglia, Corsica, parts of the Karakoram (especially the Trango Towers), the Fitzroy Massif and the Paine Massif in Patagonia, Baffin Island, Ogawayama, the Cornish coast, the Cairngorms, Sugarloaf Mountain in Rio de Janeiro, Brazil, and the Stawamus Chief, British Columbia, Canada. Gallery
Physical sciences
Petrology
13115
https://en.wikipedia.org/wiki/Gametophyte
Gametophyte
A gametophyte () is one of the two alternating multicellular phases in the life cycles of plants and algae. It is a haploid multicellular organism that develops from a haploid spore that has one set of chromosomes. The gametophyte is the sexual phase in the life cycle of plants and algae. It develops sex organs that produce gametes, haploid sex cells that participate in fertilization to form a diploid zygote which has a double set of chromosomes. Cell division of the zygote results in a new diploid multicellular organism, the second stage in the life cycle known as the sporophyte. The sporophyte can produce haploid spores by meiosis that on germination produce a new generation of gametophytes. Algae In some multicellular green algae (Ulva lactuca is one example), red algae and brown algae, sporophytes and gametophytes may be externally indistinguishable (isomorphic). In Ulva, the gametes are isogamous, all of one size, shape and general morphology. Land plants In land plants, anisogamy is universal. As in animals, female and male gametes are called, respectively, eggs and sperm. In extant land plants, either the sporophyte or the gametophyte may be reduced (heteromorphic). No extant gametophytes have stomata, but they have been found on fossil species like the early Devonian Aglaophyton from the Rhynie chert. Other fossil gametophytes found in the Rhynie chert shows they were much more developed than present forms, resembling the sporophyte in having a well-developed conducting strand, a cortex, an epidermis and a cuticle with stomata, but were much smaller. Bryophytes In bryophytes (mosses, liverworts, and hornworts), the gametophyte is the most visible stage of the life cycle. The bryophyte gametophyte is longer lived, nutritionally independent, and the sporophytes are attached to the gametophytes and dependent on them. When a moss spore germinates it grows to produce a filament of cells (called the protonema). The mature gametophyte of mosses develops into leafy shoots that produce sex organs (gametangia) that produce gametes. Eggs develop in archegonia and sperm in antheridia. In some bryophyte groups such as many liverworts of the order Marchantiales, the gametes are produced on specialized structures called gametophores (or gametangiophores). Vascular plants All vascular plants are sporophyte dominant, and a trend toward smaller and more sporophyte-dependent female gametophytes is evident as land plants evolved reproduction by seeds. Those vascular plants, such as clubmosses and many ferns, that produce only one type of spore are said to be homosporous. They have exosporic gametophytes — that is, the gametophyte is free-living and develops outside of the spore wall. Exosporic gametophytes can either be bisexual, capable of producing both sperm and eggs in the same thallus (monoicous), or specialized into separate male and female organisms (dioicous). In heterosporous vascular plants (plants that produce both microspores and megaspores), the gametophytes develop endosporically (within the spore wall). These gametophytes are dioicous, producing either sperm or eggs but not both. Ferns In most ferns, for example, in the leptosporangiate fern Dryopteris, the gametophyte is a photosynthetic free living autotrophic organism called a prothallus that produces gametes and maintains the sporophyte during its early multicellular development. 
However, in some groups, notably the clade that includes Ophioglossaceae and Psilotaceae, the gametophytes are subterranean and subsist by forming mycotrophic relationships with fungi. Homosporous ferns secrete a chemical called antheridiogen. Lycophytes Extant lycophytes produce two different types of gametophytes. In the homosporous families Lycopodiaceae and Huperziaceae, spores germinate into bisexual free-living, subterranean and mycotrophic gametophytes that derive nutrients from symbiosis with fungi. In Isoetes and Selaginella, which are heterosporous, microspores and megaspores are dispersed from sporangia either passively or by active ejection. Microspores produce microgametophytes which produce sperm. Megaspores produce reduced megagametophytes inside the spore wall. At maturity, the megaspore cracks open at the trilete suture to allow the male gametes to access the egg cells in the archegonia inside. The gametophytes of Isoetes appear to be similar in this respect to those of the extinct Carboniferous arborescent lycophytes Lepidodendron and Lepidostrobus. Seed plants The seed plant gametophyte life cycle is even more reduced than in basal taxa (ferns and lycophytes). Seed plant gametophytes are not independent organisms and depend upon the dominant sporophyte tissue for nutrients and water. With the exception of mature pollen, if the gametophyte tissue is separated from the sporophyte tissue it will not survive. Due to this complex relationship and the small size of the gametophyte tissue—in some situations single celled—differentiating with the human eye or even a microscope between seed plant gametophyte tissue and sporophyte tissue can be a challenge. While seed plant gametophyte tissue is typically composed of mononucleate haploid cells (1 x n), specific circumstances can occur in which the ploidy does vary widely despite still being considered part of the gametophyte. In gymnosperms, the male gametophytes are produced inside microspores within the microsporangia located inside male cones or microstrobili. In each microspore, a single gametophyte is produced, consisting of four haploid cells produced by meiotic division of a diploid microspore mother cell. At maturity, each microspore-derived gametophyte becomes a pollen grain. During its development, the water and nutrients that the male gametophyte requires are provided by the sporophyte tissue until they are released for pollination. The cell number of each mature pollen grain varies between the gymnosperm orders. Cycadophyta have 3 celled pollen grains while Ginkgophyta have 4 celled pollen grains. Gnetophyta may have 2 or 3 celled pollen grains depending on the species, and Coniferophyta pollen grains vary greatly ranging from single celled to 40 celled. One of these cells is typically a germ cell and other cells may consist of a single tube cell which grows to form the pollen tube, sterile cells, and/or prothallial cells which are both vegetative cells without an essential reproductive function. After pollination is successful, the male gametophyte continues to develop. If a tube cell was not developed in the microstrobilus, one is created after pollination via mitosis. The tube cell grows into the diploid tissue of the female cone and may branch out into the megastrobilus tissue or grow straight towards the egg cell. The megastrobilus sporophytic tissue provides nutrients for the male gametophyte at this stage. 
In some gymnosperms, the tube cell will create a direct channel from the site of pollination to the egg cell; in other gymnosperms, the tube cell will rupture in the middle of the megastrobilus sporophyte tissue. This occurs because in some gymnosperm orders, the germ cell is nonmobile and a direct pathway is needed; however, in Cycadophyta and Ginkgophyta, the germ cell is mobile due to flagella being present and a direct tube cell path from the pollination site to the egg is not needed. In most species the germ cell can be more specifically described as a sperm cell which mates with the egg cell during fertilization, though that is not always the case. In some Gnetophyta species, the germ cell will release two sperm nuclei that undergo a rare gymnosperm double fertilization process occurring solely with sperm nuclei and not with the fusion of developed cells. After fertilization is complete in all orders, the remaining male gametophyte tissue will deteriorate. The female gametophyte in gymnosperms differs from the male gametophyte as it spends its whole life cycle in one organ, the ovule, located inside the megastrobilus or female cone. Similar to the male gametophyte, the female gametophyte normally is fully dependent on the surrounding sporophytic tissue for nutrients and the two organisms cannot be separated. However, the female gametophytes of Ginkgo biloba do contain chlorophyll and can produce some of their own energy, though not enough to support themselves without being supplemented by the sporophyte. The female gametophyte forms from a diploid megaspore mother cell that undergoes meiosis, and starts out single celled. The size of the mature female gametophyte varies drastically between gymnosperm orders. In Cycadophyta, Ginkgophyta, Coniferophyta, and some Gnetophyta, the single celled female gametophyte undergoes many cycles of mitosis, ending up consisting of thousands of cells once mature. At a minimum, two of these cells are egg cells and the rest are haploid somatic cells, but more egg cells may be present and their ploidy, though typically haploid, may vary. In select Gnetophyta, the female gametophyte stays single celled. Mitosis does occur, but no cell divisions are ever made. This results in the mature female gametophyte in some Gnetophyta having many free nuclei in one cell. Once mature, this single celled gametophyte is 90% smaller than the female gametophytes in other gymnosperm orders. After fertilization, the remaining female gametophyte tissue in gymnosperms serves as the nutrient source for the developing zygote (even in Gnetophyta where the diploid zygote cell is much smaller at that stage, and for a while lives within the single celled gametophyte). The precursor to the male angiosperm gametophyte is a diploid microspore mother cell located inside the anther. Once the microspore mother cell undergoes meiosis, 4 haploid cells are formed, each of which is a single celled male gametophyte. The male gametophyte will develop via one or two rounds of mitosis inside the anther. This creates a 2 or 3 celled male gametophyte which becomes known as the pollen grain once dehiscing occurs. One cell is the tube cell, and the remaining cell/cells are the sperm cells. The development of the three celled male gametophyte prior to dehiscing has evolved multiple times and is present in about a third of angiosperm species, allowing for faster fertilization after pollination.
Once pollination occurs, the tube cell grows in size and, if the male gametophyte is only 2 cells at this stage, the single sperm cell undergoes mitosis to create a second sperm cell. Just like in gymnosperms, the tube cell in angiosperms obtains nutrients from the sporophytic tissue, and may branch out into the pistil tissue or grow directly towards the ovule. Once double fertilization is completed, the tube cell and other vegetative cells, if present, are all that remain of the male gametophyte and soon degrade. The female gametophyte of angiosperms develops in the ovule (located inside the female or hermaphrodite flower). Its precursor is a diploid megaspore mother cell that undergoes meiosis, which produces four haploid daughter cells. Three of these independent gametophyte cells degenerate and the one that remains is the gametophyte mother cell, which normally contains one nucleus. In general, it will then divide by mitosis until it consists of 8 nuclei separated into 1 egg cell, 3 antipodal cells, 2 synergid cells, and a central cell that contains two nuclei. In select angiosperms, special cases occur in which the female gametophyte is not 7 celled with 8 nuclei. On the small end of the spectrum, some species have mature female gametophytes with only 4 cells, each with one nucleus. Conversely, some species have 10-celled mature female gametophytes consisting of 16 total nuclei. Once double fertilization occurs, the egg cell becomes the zygote, which is then considered sporophyte tissue. Scholars still disagree on whether the fertilized central cell is considered gametophyte tissue. Some botanists consider this endosperm to be gametophyte tissue, with typically 2/3 being female and 1/3 being male, but as the central cell before double fertilization can range from 1n to 8n in special cases, the fertilized central cells range from 2n (50% male/female) to 9n (1/9 male, 8/9 female); the arithmetic is sketched below. However, other botanists consider the fertilized endosperm to be sporophyte tissue. Some believe it is neither. Heterospory In heterosporic plants, there are two distinct kinds of gametophytes. Because the two gametophytes differ in form and function, they are termed heteromorphic, from hetero- "different" and morph "form". The egg-producing gametophyte is known as a megagametophyte, because it is typically larger, and the sperm-producing gametophyte is known as a microgametophyte. Species which produce eggs and sperm on separate gametophyte plants are termed dioicous, while those that produce both eggs and sperm on the same gametophyte are termed monoicous. In heterosporous plants (water ferns, some lycophytes, as well as all gymnosperms and angiosperms), there are two distinct types of sporangia, each of which produces a single kind of spore that germinates to produce a single kind of gametophyte. However, not all heteromorphic gametophytes come from heterosporous plants. That is, some plants have distinct egg-producing and sperm-producing gametophytes, but these gametophytes develop from the same kind of spore inside the same sporangium; Sphaerocarpos is an example of such a plant. In seed plants, the microgametophyte is called pollen. Seed plant microgametophytes consist of several (typically two to five) cells when the pollen grains exit the sporangium. The megagametophyte develops within the megaspore of extant seedless vascular plants and within the megasporangium in a cone or flower in seed plants.
In seed plants, the microgametophyte (pollen) travels to the vicinity of the egg cell (carried by a physical or animal vector) and produces two sperm by mitosis. In gymnosperms, the megagametophyte consists of several thousand cells and produces one to several archegonia, each with a single egg cell. The gametophyte becomes a food storage tissue in the seed. In angiosperms, the megagametophyte is reduced to only a few cells, and is sometimes called the embryo sac. A typical embryo sac contains seven cells and eight nuclei, one of which is the egg cell. Two nuclei fuse with a sperm nucleus to form the primary endosperm nucleus, which develops to form triploid endosperm, which becomes the food storage tissue in the seed.
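As a worked illustration of the ploidy bookkeeping described above, the sketch below treats the fertilized central cell (the endosperm precursor) as the sum of its maternal nuclei, which the text says can range from 1n to 8n in special cases, plus one 1n sperm nucleus. The function name and the Python framing are illustrative, not taken from the article.

```python
# Ploidy bookkeeping for the fertilized central cell (endosperm precursor),
# following the ranges quoted in the text: 1n to 8n of maternal chromatin
# plus a single 1n sperm nucleus.

def endosperm_ploidy(maternal_n):
    """Return (total ploidy, paternal fraction, maternal fraction)."""
    total = maternal_n + 1          # one haploid sperm nucleus joins the fusion
    return total, 1 / total, maternal_n / total

for maternal_n in (1, 2, 8):        # 2n is the typical case (two polar nuclei)
    total, paternal, maternal = endosperm_ploidy(maternal_n)
    print(f"central cell {maternal_n}n + sperm 1n -> {total}n "
          f"({paternal:.0%} paternal, {maternal:.0%} maternal)")
```

The typical case (two fused polar nuclei plus one sperm nucleus) reproduces the triploid endosperm with the 1/3 paternal, 2/3 maternal split mentioned above, while the extremes give the 2n and 9n limits.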
Biology and health sciences
Plant reproduction
13134
https://en.wikipedia.org/wiki/Gecko
Gecko
Geckos are small, mostly carnivorous lizards that have a wide distribution, found on every continent except Antarctica. Belonging to the infraorder Gekkota, geckos are found in warm climates throughout the world. They range from . Geckos are unique among lizards for their vocalisations, which differ from species to species. Most geckos in the family Gekkonidae use chirping or clicking sounds in their social interactions. Tokay geckos (Gekko gecko) are known for their loud mating calls, and some other species are capable of making hissing noises when alarmed or threatened. They are the most species-rich group of lizards, with about 1,500 different species worldwide. All geckos, except species in the family Eublepharidae lack eyelids; instead, the outer surface of the eyeball has a transparent membrane, the brille. They have a fixed lens within each iris that enlarges in darkness to let in more light. Since they cannot blink, species without eyelids generally lick their own brilles when they need to clear them of dust and dirt, in order to keep them clean and moist. Unlike most lizards, geckos are usually nocturnal and have excellent night vision; their colour vision in low light is 350 times more sensitive than human eyes. The nocturnal geckos evolved from diurnal species, which had lost the rod cells from their eyes. The gecko eye, therefore, modified its cone cells that increased in size into different types, both single and double. Three different photo-pigments have been retained, and are sensitive to ultraviolet, blue, and green. They also use a multifocal optical system that allows them to generate a sharp image for at least two different depths. While most gecko species are nocturnal, some species are diurnal and active during the day, which have evolved multiple times independently. Many species are well known for their specialised toe pads, which enable them to grab and climb onto smooth and vertical surfaces, and even cross indoor ceilings with ease. Geckos are well known to people who live in warm regions of the world, where several species make their home inside human habitations. These, for example the house gecko, become part of the indoor menagerie and are often welcomed, as they feed on insect pests; including moths and mosquitoes. Like most lizards, geckos can lose their tails in defence, a process called autotomy; the predator may attack the wriggling tail, allowing the gecko to escape. The largest species, Gigarcanum delcourti, is only known from a single, stuffed specimen probably collected in the 19th century found in the basement of the Natural History Museum of Marseille in Marseille, France. This gecko was long, and it was likely endemic to New Caledonia, where it lived in native forests. The smallest gecko, the Jaragua sphaero, is a mere long, and was discovered in 2001 on a small island off the coast of Hispaniola. Etymology The Neo-Latin gekko and English 'gecko' stem from Indonesian-Malaysian gēkoq, a Malay word borrowed from Javanese, from tokek, which imitates the sounds that some species like Tokay gecko make. Common traits Like other reptiles, geckos are ectothermic, producing very little metabolic heat. Essentially, a gecko's body temperature is dependent on its environment. Also, to accomplish their main functions; such as locomotion, feeding, reproduction, etc., geckos must have a relatively elevated temperature. Shedding or molting All geckos shed their skin at fairly regular intervals, with species differing in timing and method. 
Leopard geckos shed at about two- to four-week intervals. The presence of moisture aids in the shedding. When shedding begins, the gecko speeds the process by detaching the loose skin from its body and eating it. For young geckos, shedding occurs more frequently, once a week, but when they are fully grown, they shed once every one to two months. Adhesion ability About 60% of gecko species have adhesive toe pads which allow them to adhere to most surfaces without the use of liquids or surface tension. Such pads have been gained and lost repeatedly over the course of gecko evolution. Adhesive toepads evolved independently in about eleven different gecko lineages, and were lost in at least nine lineages. It was previously thought that the spatula-shaped setae arranged in lamellae on gecko footpads enable attractive van der Waals forces (the weakest of the weak chemical forces) between the β-keratin lamellae / setae / spatulae structures and the surface. These van der Waals interactions involve no fluids; in theory, a boot made of synthetic setae would adhere as easily to the surface of the International Space Station as it would to a living-room wall, although adhesion varies with humidity. However, a 2014 study suggests that gecko adhesion is in fact mainly determined by electrostatic interaction (caused by contact electrification), not van der Waals or capillary forces. The setae on the feet of geckos are also self-cleaning, and usually remove any clogging dirt within a few steps. Polytetrafluoroethylene (PTFE), which has very low surface energy, is more difficult for geckos to adhere to than many other surfaces. Gecko adhesion is typically improved by higher humidity, even on hydrophobic surfaces, yet is reduced under conditions of complete immersion in water. The role of water in that system is under discussion, yet recent experiments agree that the presence of molecular water layers (water molecules carry a very large dipole moment) on the setae, as well as on the surface, increases the surface energy of both; therefore, the energy gain in bringing these surfaces into contact is enlarged, which results in an increased gecko adhesion force. Moreover, the elastic properties of the β-keratin change with water uptake. Gecko toes seem to be double-jointed, but this is a misnomer, and is properly called digital hyperextension. Gecko toes can hyperextend in the opposite direction from human fingers and toes. This allows them to overcome the van der Waals force by peeling their toes off surfaces from the tips inward. In essence, by this peeling action, the gecko separates spatula by spatula from the surface, so for each spatula separation, only a small force is necessary. (The process is similar to removing Scotch Tape from a surface.) Geckos' toes operate well below their full attractive capabilities most of the time, because the margin for error is great depending upon the surface roughness, and therefore the number of setae in contact with that surface. Use of the weak van der Waals forces requires very large surface areas; every square millimetre of a gecko's footpad contains about 14,000 hair-like setae. Each seta has a diameter of 5 μm. Human hair varies from 18 to 180 μm, so the cross-sectional area of a human hair is equivalent to 12 to 1300 setae. Each seta is in turn tipped with between 100 and 1,000 spatulae. Each spatula is 0.2 μm long (one five-millionth of a metre), or just below the wavelength of visible light.
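The figures just quoted (about 14,000 setae per square millimetre and 100 to 1,000 spatulae per seta) can be combined with the per-spatula force of 5 to 25 nN cited below to give an order-of-magnitude upper bound on adhesion. The combined toe-pad area used here is an assumed round number, not a value from the article, and real geckos engage only a fraction of their spatulae, so this is a ceiling rather than a typical working load.

```python
# Rough upper-bound estimate of gecko adhesion built from the figures in the
# text: ~14,000 setae per mm^2, 100-1,000 spatulae per seta, 5-25 nN per
# spatula. Total toe-pad area is an illustrative assumption.

G = 9.81  # m/s^2, to convert force to an equivalent supported mass

def max_adhesion_newtons(pad_area_mm2, setae_per_mm2, spatulae_per_seta, force_per_spatula_nN):
    """Total force (N) if every spatula engaged at the given per-spatula force."""
    spatulae = pad_area_mm2 * setae_per_mm2 * spatulae_per_seta
    return spatulae * force_per_spatula_nN * 1e-9

pad_area = 200.0          # assumed combined toe-pad area, mm^2
for spatulae_per_seta, force_nN in ((100, 5), (1000, 25)):
    force = max_adhesion_newtons(pad_area, 14_000, spatulae_per_seta, force_nN)
    print(f"{spatulae_per_seta} spatulae/seta at {force_nN} nN -> "
          f"{force:.1f} N (~{force / G:.1f} kg supported)")
```

The wide spread between the low and high estimates is consistent with the remark above that gecko toes normally operate well below their full attractive capability, leaving a large margin for rough or dirty surfaces.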
The setae of a typical mature gecko would be capable of supporting a weight of : each spatula could exert an adhesive force of 5 to 25 nN. The exact value of the adhesion force of a spatula varies with the surface energy of the substrate to which it adheres. Recent studies have moreover shown that the component of the surface energy derived from long-range forces, such as van der Waals forces, depends on the material's structure below the outermost atomic layers (up to 100 nm beneath the surface); taking that into account, the adhesive strength can be inferred. Apart from the setae, phospholipids; fatty substances produced naturally in their bodies, also come into play. These lipids lubricate the setae and allow the gecko to detach its foot before the next step. The origin of gecko adhesion likely started as simple modifications to the epidermis on the underside of the toes. This was recently discovered in the genus Gonatodes from South America. Simple elaborations of the epidermal spinules into setae have enabled Gonatodes humeralis to climb smooth surfaces and sleep on smooth leaves. Biomimetic technologies designed to mimic gecko adhesion could produce reusable self-cleaning dry adhesives with many applications. Development effort is being put into these technologies, but manufacturing synthetic setae is not a trivial material design task. Skin Gecko skin does not generally bear scales, but appears at a macro scale as a papillose surface, which is made from hair-like protuberances developed across the entire body. These confer superhydrophobicity, and the unique design of the hair confers a profound antimicrobial action. These protuberances are very small, up to 4 microns in length, and tapering to a point. Gecko skin has been observed to have an anti-bacterial property, killing gram-negative bacteria when they come in contact with the skin. The mossy leaf-tailed gecko of Madagascar, U. sikorae, has coloration developed as camouflage, most being greyish brown to black, or greenish brown, with various markings meant to resemble tree bark; down to the lichens and moss found on the bark. It also has flaps of skin, running the length of its body, head and limbs, known as the dermal flap, which it can lay against the tree during the day, scattering shadows, and making its outline practically invisible. Teeth As polyphyodonts, geckos can replace each of their 100 teeth every 3 to 4 months. Next to the full grown tooth there is a small replacement tooth developing from the odontogenic stem cell in the dental lamina. The formation of the teeth is pleurodont; they are fused (ankylosed) by their sides to the inner surface of the jaw bones. This formation is common in all species in the order Squamata. Taxonomy and classification The infraorder Gekkota is divided into seven families, containing about 125 genera of geckos, including the snake-like (legless) pygopods. Family Carphodactylidae Family Diplodactylidae Family Eublepharidae Family Gekkonidae Family Phyllodactylidae Family Pygopodidae Family Sphaerodactylidae Legless lizards of the family Dibamidae, also referred to as blind lizards, have occasionally been counted as gekkotans, but recent molecular phylogenies suggest otherwise. Evolutionary history Several species of lizard from the Late Jurassic have been considered early relatives of geckos, the most prominent and most well supported being the arboreal Eichstaettisaurus from the Late Jurassic of Germany. 
Norellius from the Early Cretaceous of Mongolia is also usually placed as a close relative of geckos. The oldest known fossils of modern geckos are from the mid-Cretaceous Burmese amber of Myanmar (including Cretaceogekko), around 100 million years old, which have adhesive pads on the feet similar to those of living geckos. Species More than 1,850 species of geckos occur worldwide, including these familiar species: Coleonyx variegatus, the western banded gecko, is native to the southwestern United States and northwest Mexico. Cyrtopodion brachykolon, the bent-toed gecko, is found in northwestern Pakistan; it was first described in 2007. Eublepharis macularius, the leopard gecko, is the most common gecko kept as a pet; it does not have adhesive toe pads and cannot climb the glass of a vivarium. Gehyra mutilata (Pteropus mutilatus), the stump-toed gecko, is able to vary its color from very light to very dark to camouflage itself; this gecko is at home in the wild, as well as in residential areas. Gekko gecko, the Tokay gecko, is a large, common, Southeast Asian gecko known for its aggressive temperament, loud mating calls, and bright markings. Hemidactylus is genus of geckos with many varieties. Hemidactylus frenatus, the common house gecko, thrives around people and human habitation structures in the tropics and subtropics worldwide. Hemidactylus garnotii, the Indo-Pacific gecko, is found in houses throughout the tropics, and has become an invasive species of concern in Florida and Georgia in the US. Hemidactylus mabouia, the tropical house gecko, Afro-American house gecko, or cosmopolitan house gecko, is a species of house gecko native to sub-Saharan Africa and also currently found in North, Central, and South America and the Caribbean. Hemidactylus turcicus, the Mediterranean house gecko, is frequently found in and around buildings, and is an introduced species in the US. Lepidodactylus lugubris, the mourning gecko, is originally an East Asian and Pacific species; it is equally at home in the wild and residential neighborhoods. Pachydactylus bibroni, Bibron's gecko, is native to southern Africa; this hardy arboreal gecko is considered a household pest. Phelsuma laticauda, the gold dust day gecko, is diurnal; it lives in northern Madagascar and on the Comoros. It is also an introduced species in Hawaii. Ptychozoon is a genus of arboreal geckos from Southeast Asia also known as flying or parachute geckos; they have wing-like flaps from the neck to the upper leg to help them conceal themselves on trees and provide lift while jumping. Rhacodactylus is genus of geckos native to New Caledonia. Rhacodactylus ciliatus (now assigned to the genus Correlophus), the crested gecko, was believed extinct until rediscovered in 1994, and is gaining popularity as a pet. Rhacodactylus leachianus, the New Caledonian giant gecko, was first described by Cuvier in 1829; it is the largest living species of gecko. Sphaerodactylus ariasae, the dwarf gecko, is native to the Caribbean Islands; it is the world's smallest lizard. Tarentola mauritanica, the crocodile or Moorish gecko, is commonly found in the Mediterranean region from the Iberian Peninsula and southern France to Greece and northern Africa; their most distinguishing characteristics are their pointed heads, spiked skin, and tails resembling those of a crocodile. Reproduction Most geckos lay a small clutch of eggs. Some are live-bearing, and a few can reproduce asexually via parthenogenesis. 
Geckos also have a large diversity of sex-determining mechanisms, including temperature-dependent sex determination and both XX/XY and ZZ/ZW sex chromosomes with multiple transitions among them over evolutionary time. Madagascar day geckos engage in a mating ritual in which sexually mature males produce a waxy substance from pores on the back of their legs. Males approach females with a head swaying motion along with rapid tongue flicking in the female. Obligate parthenogenesis as a reproductive system has evolved multiple times in the family Gekkonidae. It has been shown that oocytes are able to undergo meiosis in three different obligate parthenogenetic complexes of geckos. An extra premeiotic endoreplication of chromosomes is essential for obligate parthenogenesis in these geckos. Appropriate segregation during meiosis to form viable progeny is facilitated by the formation of bivalents made from copies of identical chromosomes.
Biology and health sciences
Reptiles
13146
https://en.wikipedia.org/wiki/Gabbro
Gabbro
Gabbro ( ) is a phaneritic (coarse-grained), magnesium- and iron-rich, mafic intrusive igneous rock formed from the slow cooling of magma into a holocrystalline mass deep beneath the Earth's surface. Slow-cooling, coarse-grained gabbro is chemically equivalent to rapid-cooling, fine-grained basalt. Much of the Earth's oceanic crust is made of gabbro, formed at mid-ocean ridges. Gabbro is also found as plutons associated with continental volcanism. Due to its variable nature, the term gabbro may be applied loosely to a wide range of intrusive rocks, many of which are merely "gabbroic". By rough analogy, gabbro is to basalt as granite is to rhyolite. Etymology The term "gabbro" was used in the 1760s to name a set of rock types that were found in the ophiolites of the Apennine Mountains in Italy. It was named after Gabbro, a hamlet near Rosignano Marittimo in Tuscany. Then, in 1809, the German geologist Christian Leopold von Buch used the term more restrictively in his description of these Italian ophiolitic rocks. He assigned the name "gabbro" to rocks that geologists nowadays would more strictly call "metagabbro" (metamorphosed gabbro). Petrology Gabbro is a coarse-grained (phaneritic) igneous rock that is relatively low in silica and rich in iron, magnesium, and calcium. Such rock is described as mafic. Gabbro is composed of pyroxene (mostly clinopyroxene) and calcium-rich plagioclase, with minor amounts of hornblende, olivine, orthopyroxene and accessory minerals. With significant (>10%) olivine or orthopyroxene, it is classified as olivine gabbro or gabbronorite, respectively. Where present, hornblende is typically found as a rim around augite crystals or as large grains enclosing smaller grains of other minerals (poikilitic grains). Geologists use rigorous quantitative definitions to classify coarse-grained igneous rocks, based on the mineral content of the rock. For igneous rocks composed mostly of silicate minerals, and in which at least 10% of the mineral content consists of quartz, feldspar, or feldspathoid minerals, classification begins with the QAPF diagram. The relative abundances of quartz (Q), alkali feldspar (A), plagioclase (P), and feldspathoid (F) are used to plot the position of the rock on the diagram. The rock will be classified as either a gabbroid or a dioritoid if quartz makes up less than 20% of the QAPF content, feldspathoid makes up less than 10% of the QAPF content, and plagioclase makes up more than 65% of the total feldspar content. Gabbroids are distinguished from dioritoids by an anorthite (calcium plagioclase) fraction of their total plagioclase of greater than 50%. The composition of the plagioclase cannot easily be determined in the field, in which case a preliminary distinction is made between dioritoid and gabbroid based on the content of mafic minerals. A gabbroid typically has over 35% mafic minerals, mostly pyroxenes or olivine, while a dioritoid typically has less than 35% mafic minerals, which typically include hornblende. Gabbroids form a family of rock types similar to gabbro, such as monzogabbro, quartz gabbro, or nepheline-bearing gabbro. Gabbro itself is more narrowly defined, as a gabbroid in which quartz makes up less than 5% of the QAPF content, feldspathoids are not present, and plagioclase makes up more than 90% of the feldspar content. Gabbro is distinct from anorthosite, which contains less than 10% mafic minerals.
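Read as a decision procedure, the QAPF rules just described can be sketched as follows. The thresholds are those quoted in the text, but the function and its interface are illustrative, not an official IUGS implementation.

```python
# Sketch of the QAPF-based decision rules quoted above for separating
# gabbroids from dioritoids and narrowing down to gabbro sensu stricto.

def classify(quartz, alkali_feldspar, plagioclase, feldspathoid,
             anorthite_fraction=None, mafic_percent=None):
    """QAPF inputs are modal percentages of the rock; anorthite_fraction is the
    calcium-plagioclase share of total plagioclase (0-1) if it is known."""
    qapf = quartz + alkali_feldspar + plagioclase + feldspathoid
    q = quartz / qapf * 100                       # quartz share of QAPF fraction
    f = feldspathoid / qapf * 100                 # feldspathoid share of QAPF fraction
    plag_share = plagioclase / (plagioclase + alkali_feldspar)

    if not (q < 20 and f < 10 and plag_share > 0.65):
        return "neither gabbroid nor dioritoid"

    # Gabbroid vs. dioritoid: plagioclase composition if known, otherwise the
    # preliminary field criterion based on mafic mineral content.
    if anorthite_fraction is not None:
        gabbroid = anorthite_fraction > 0.5
    else:
        gabbroid = mafic_percent is not None and mafic_percent > 35
    if not gabbroid:
        return "dioritoid"

    if mafic_percent is not None and mafic_percent < 10:
        return "anorthosite"
    if q < 5 and f == 0 and plag_share > 0.9:
        return "gabbro (sensu stricto)"
    return "gabbroid"

print(classify(quartz=2, alkali_feldspar=3, plagioclase=60, feldspathoid=0,
               anorthite_fraction=0.7, mafic_percent=45))   # -> gabbro (sensu stricto)
```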
Coarse-grained gabbroids are produced by slow crystallization of magma having the same composition as the lava that solidifies rapidly to form fine-grained (aphanitic) basalt. Subtypes There are a number of subtypes of gabbro recognized by geologists. Gabbros can be broadly divided into leucogabbros, with less than 35% mafic mineral content; mesogabbros, with 35% to 65% mafic mineral content; and melagabbros, with more than 65% mafic mineral content. A rock with over 90% mafic mineral content will be classified instead as an ultramafic rock. A gabbroic rock with less than 10% mafic mineral content will be classified as an anorthosite. A more detailed classification is based on the relative percentages of plagioclase, pyroxene, hornblende, and olivine. The end members are: Normal gabbro (gabbro sensu stricto) is composed almost entirely of plagioclase and clinopyroxene (typically augite), with less than 5% each of hornblende, olivine, or orthopyroxene. Norite is composed almost entirely of plagioclase and orthopyroxene, with less than 5% each of hornblende, clinopyroxene, or olivine. Troctolite is composed almost entirely of plagioclase and olivine, with less than 5% each of pyroxene or hornblende. Hornblende gabbro is composed almost entirely of plagioclase and hornblende, with less than 5% each of pyroxene or olivine. Gabbros intermediate between these compositions are given names such as gabbronorite (for a gabbro intermediate between normal gabbro and norite, with almost equal amounts of clinopyroxene and orthopyroxene) or olivine gabbro (for a gabbro containing significant olivine, but almost no clinopyroxene or hornblende). A rock similar to normal gabbro but containing more orthopyroxene is called an orthopyroxene gabbro, while a rock similar to norite but containing more clinopyroxene is called a clinopyroxene norite. Gabbros are also sometimes classified as alkali or tholeiitic gabbros, by analogy with alkali or tholeiitic basalts, of which they are considered the intrusive equivalents. Alkali gabbro usually contains olivine, nepheline, or analcime, up to 10% of the mineral content, while tholeiitic gabbro contains both clinopyroxene and orthopyroxene, making it a gabbronorite. Gabbroids Gabbroids (also known as gabbroic rocks) are a family of coarse-grained igneous rocks similar to gabbro: Quartz gabbro contains 5% to 20% quartz in its QAPF fraction. One example is the cizlakite at Pohorje in northeastern Slovenia. Monzogabbro contains 65% to 90% plagioclase out of its total feldspar content. Quartz monzogabbro combines the features of quartz gabbro and monzogabbro. It contains 5% to 20% quartz in its QAPF fraction, and 65% to 90% of its feldspar is plagioclase. Foid-bearing gabbro contains up to 10% feldspathoids rather than quartz. "Foid" in the name is usually replaced by the specific feldspathoid that is most abundant in the rock. For example, a nepheline-bearing gabbro is a foid-bearing gabbro in which the most abundant feldspathoid is nepheline. Foid-bearing monzogabbro resembles monzogabbro, but contains up to 10% feldspathoids in place of quartz. The same naming conventions apply as for foid-bearing gabbro, so that a gabbroid might be classified as a leucite-bearing monzogabbro. Gabbroids contain minor amounts, typically a few percent, of iron-titanium oxides such as magnetite, ilmenite, and ulvospinel. Apatite, zircon, and biotite may also be present as accessory minerals. Gabbro is generally coarse-grained, with crystals in the size range of 1 mm or larger.
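The subtype vocabulary described above can likewise be read as a small naming procedure: a leuco-, meso-, or mela- prefix from total mafic content, then an end-member name when one mafic phase dominates and the others each stay below roughly 5% of the rock. The sketch below is an illustrative reading of that scheme, not a formal classification routine.

```python
# Sketch of the gabbro subtype naming scheme described in the text.

def mafic_prefix(mafic_percent):
    if mafic_percent < 10:
        return "anorthosite"          # outside the gabbro field entirely
    if mafic_percent > 90:
        return "ultramafic rock"      # likewise outside the gabbro field
    if mafic_percent < 35:
        return "leucogabbro"
    if mafic_percent <= 65:
        return "mesogabbro"
    return "melagabbro"

def end_member(clinopyroxene, orthopyroxene, olivine, hornblende):
    """Pick the end-member name when one mafic phase dominates and the others
    each stay below about 5% of the rock."""
    minerals = {"gabbro (sensu stricto)": clinopyroxene,
                "norite": orthopyroxene,
                "troctolite": olivine,
                "hornblende gabbro": hornblende}
    name = max(minerals, key=minerals.get)
    if all(value < 5 for key, value in minerals.items() if key != name):
        return name
    return "intermediate gabbroid (e.g. gabbronorite or olivine gabbro)"

print(mafic_prefix(50), "-", end_member(clinopyroxene=45, orthopyroxene=3, olivine=2, hornblende=0))
```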
Finer-grained equivalents of gabbro are called diabase (also known as dolerite), although the term microgabbro is often used when extra descriptiveness is desired. Gabbro may be extremely coarse-grained to pegmatitic. Some pyroxene-plagioclase cumulates are essentially coarse-grained gabbro, and may exhibit acicular crystal habits. Gabbro is usually equigranular in texture, although it may also show ophitic texture (with laths of plagioclase enclosed in pyroxene). Distribution Nearly all gabbros are found in plutonic bodies, and the term (as the International Union of Geological Sciences recommends) is normally restricted just to plutonic rocks, although gabbro may be found as a coarse-grained interior facies of certain thick lavas. Gabbro can be formed as a massive, uniform intrusion via in-situ crystallisation of pyroxene and plagioclase, or as part of a layered intrusion as a cumulate formed by settling of pyroxene and plagioclase. An alternative name for gabbros formed by crystal settling is pyroxene-plagioclase adcumulate. Gabbro is much less common than more silica-rich intrusive rocks in the continental crust of the Earth. Gabbro and gabbroids occur in some batholiths, but these rocks are relatively minor components of these very large intrusions because their iron and calcium content usually makes gabbro and gabbroid magmas too dense to have the necessary buoyancy. However, gabbro is an essential part of the oceanic crust, and can be found in many ophiolite complexes as layered gabbro underlying sheeted dike complexes and overlying ultramafic rock derived from the Earth's mantle. These layered gabbros may have formed from relatively small but long-lived magma chambers underlying mid-ocean ridges. Layered gabbros are also characteristic of lopoliths, which are large, saucer-shaped intrusions that are primarily Precambrian in age. Prominent examples of lopoliths include the Bushveld Complex of South Africa, the Muskox intrusion of the Northwest Territories of Canada, the Rum layered intrusion of Scotland, the Stillwater complex of Montana, and the layered gabbros near Stavanger, Norway. Gabbros are also present in stocks associated with alkaline volcanism of continental rifting. Uses Gabbro often contains valuable amounts of chromium, nickel, cobalt, gold, silver, platinum, and copper sulfides. For example, the Merensky Reef is the world's most important source of platinum. Gabbro is known in the construction industry by the trade name of black granite. However, gabbro is hard and difficult to work, which limits its use. The term "indigo gabbro" is used as a common name for a mineralogically complex rock type often found in mottled tones of black and lilac-grey. It is mined in central Madagascar for use as a semi-precious stone. Indigo gabbro can contain numerous minerals, including quartz and feldspar. Reports state that the dark matrix of the rock is composed of a mafic igneous rock, but whether this is basalt or gabbro is unclear.
Physical sciences
Igneous rocks
Earth science
13152
https://en.wikipedia.org/wiki/Gluten
Gluten
Gluten is a structural protein naturally found in certain cereal grains. The term gluten usually refers to the elastic network of a wheat grain's proteins, gliadin and glutenin primarily, that forms readily with the addition of water and often kneading in the case of bread dough. The types of grains that contain gluten include all species of wheat (common wheat, durum, spelt, khorasan, emmer and einkorn), and barley, rye, and some cultivars of oat; moreover, cross hybrids of any of these cereal grains also contain gluten, e.g. triticale. Gluten makes up 75–85% of the total protein in bread wheat. Glutens, especially Triticeae glutens, have unique viscoelastic and adhesive properties, which give dough its elasticity, helping it rise and keep its shape and often leaving the final product with a chewy texture. These properties, and its relatively low cost, make gluten valuable to both food and non-food industries. Wheat gluten is composed of mainly two types of proteins: the glutenins and the gliadins, which in turn can be divided into high molecular and low molecular glutenins and α/β, γ and Ω gliadins. Its homologous seed storage proteins, in barley, are referred to as hordeins, in rye, secalins, and in oats, avenins. These protein classes are collectively referred to as "gluten". The storage proteins in other grains, such as maize (zeins) and rice (rice protein), are sometimes called gluten, but they do not cause harmful effects in people with celiac disease. Gluten can trigger adverse, inflammatory, immunological, and autoimmune reactions in some people. The spectrum of gluten related disorders includes celiac disease in 1–2% of the general population, non-celiac gluten sensitivity in 0.5–13% of the general population, as well as dermatitis herpetiformis, gluten ataxia and other neurological disorders. These disorders are treated by a gluten-free diet. Uses Bread products Gluten forms when glutenin molecules cross-link via disulfide bonds to form a submicroscopic network attached to gliadin, which contributes viscosity (thickness) and extensibility to the mix. If this dough is leavened with yeast, fermentation produces carbon dioxide bubbles, which, trapped by the gluten network, cause the dough to rise. Baking coagulates the gluten, which, along with starch, stabilizes the shape of the final product. Gluten content has been implicated as a factor in the staling of bread, possibly because it binds water through hydration. The formation of gluten affects the texture of the baked goods. Gluten's attainable elasticity is proportional to its content of glutenins with low molecular weights, as this portion contains the preponderance of the sulfur atoms responsible for the cross-linking in the gluten network. Using flour with higher gluten content leads to chewier doughs such as those found in pizza and bagels, while using flour with less gluten content yields tender baked goods such as pastry products. Generally, bread flours are high in gluten (hard wheat); pastry flours have a lower gluten content. Kneading promotes the formation of gluten strands and cross-links, creating baked products that are chewier (as opposed to more brittle or crumbly). The "chewiness" increases as the dough is kneaded for longer times. An increased moisture content in the dough enhances gluten development, and very wet doughs left to rise for a long time require no kneading (see no-knead bread). 
Shortening inhibits formation of cross-links and is used, along with diminished water and less kneading, when a tender and flaky product, such as a pie crust, is desired. The strength and elasticity of gluten in flour is measured in the baking industry using a farinograph. This gives the baker a measurement of quality for different varieties of flours when developing recipes for various baked goods. Added gluten In industrial production, a slurry of wheat flour is kneaded vigorously by machinery until the gluten agglomerates into a mass. This mass is collected by centrifugation, then transported through several stages integrated in a continuous process. About 65% of the water in the wet gluten is removed by means of a screw press; the remainder is sprayed through an atomizer nozzle into a drying chamber, where it remains at an elevated temperature for a short time to allow the water to evaporate without denaturing the gluten. The process yields a flour-like powder with a 7% moisture content, which is air cooled and pneumatically transported to a receiving vessel. In the final step, the processed gluten is sifted and milled to produce a uniform product. This flour-like powder, when added to ordinary flour dough, may help improve the dough's ability to increase in volume. The resulting mixture also increases the bread's structural stability and chewiness. Gluten-added dough must be worked vigorously to induce it to rise to its full capacity; an automatic bread machine or food processor may be required for high-gluten kneading. Generally, higher gluten levels are associated with higher overall protein content. Imitation meats Gluten, especially wheat gluten (seitan), is often the basis for imitation meats resembling beef, chicken, duck (see mock duck), fish and pork. When cooked in broth, gluten absorbs some of the surrounding liquid (including the flavor) and becomes firm to the bite. This use of gluten is a popular means of adding supplemental protein to many vegetarian diets. In home or restaurant cooking, wheat gluten is prepared from flour by kneading the flour under water, agglomerating the gluten into an elastic network known as a dough, and then washing out the starch. Other consumer products Gluten is often present in beer and soy sauce, and can be used as a stabilizing agent in more unexpected food products, such as ice cream and ketchup. Foods of this kind may therefore present problems for a small number of consumers because the hidden gluten constitutes a hazard for people with celiac disease and gluten sensitivities. The protein content of some pet foods may also be enhanced by adding gluten. Gluten is also used in cosmetics, hair products and other dermatological preparations. Animal feed Wheat gluten is used both as a protein source and binding ingredient in pet foods. Wheat gluten imported from China adulterated with melamine used in pet foods was considered to have caused harm in many countries in 2007. Disorders "Gluten-related disorders" is the umbrella term for all diseases triggered by gluten, which include celiac disease (CD), non-celiac gluten sensitivity (NCGS), wheat allergy, gluten ataxia and dermatitis herpetiformis (DH). Pathophysiological research The gluten peptides are responsible for triggering gluten-related disorders. In people who have celiac disease, the peptides trigger an immune response that causes injury of the intestines, ranging from inflammation to partial or total destruction of the intestinal villi. 
To study mechanisms of this damage, laboratory experiments are done in vitro and in vivo. Among the gluten peptides, gliadin has been studied extensively. In vitro and in vivo studies In the context of celiac disease, gliadin peptides are classified in basic and clinical research as toxic or immunogenic, depending on their mechanism of action: Toxic peptides are those capable of directly affecting cells and intestinal preparations in vitro, producing cellular damage in vivo and eliciting the innate immune response. In vitro, the peptides promote cell apoptosis (a form of programmed cell death) and inhibit the synthesis of nucleic acids (DNA and RNA) and proteins, reducing the viability of cells. Experiments in vivo with normal mice showed that they cause an increase in cell death and the production of interferon type I (an inflammatory mediator). In vitro, gluten alters cellular morphology and motility, cytoskeleton organization, oxidative balance, and tight junctions. The immunogenic peptides are those able to activate T cells in vitro. At least 50 epitopes of gluten may produce cytotoxic, immunomodulatory, and gut-permeating activities. The effect of oat peptides (avenins) in celiac people depends on the oat cultivar consumed, because prolamin genes, protein amino acid sequences, and the immunotoxicity of prolamins vary among oat varieties. In addition, oat products may be cross-contaminated with other gluten-containing cereals. Incidence Gluten-related disorders have been increasing in frequency in different geographic areas. Some suggested explanations for this increase include the following: the growing westernization of diets, the increasing use of wheat-based foods included in the Mediterranean diet, the progressive replacement of rice by wheat in many countries in Asia, the Middle East, and North Africa, the higher content of gluten in bread and bakery products due to the reduction of dough fermentation time, and the development in recent years of new types of wheat with a higher amount of cytotoxic gluten peptides. However, a 2020 study that grew and analyzed 60 wheat cultivars from between 1891 and 2010 found no changes in albumin/globulin and gluten contents over time. "Overall, the harvest year had a more significant effect on protein composition than the cultivar. At the protein level, we found no evidence to support an increased immunostimulatory potential of modern winter wheat." Celiac disease Celiac disease (CD) is a chronic, multiple-organ autoimmune disorder primarily affecting the small intestine, caused by the ingestion of wheat, barley, rye, oats, and their derivatives, and appearing in genetically predisposed people of all ages. CD is not only a gastrointestinal disease, because it may involve several organs and cause an extensive variety of non-gastrointestinal symptoms, and most importantly, it may be apparently asymptomatic. Many asymptomatic people become accustomed to living with chronically poor health as if it were normal, but after starting a gluten-free diet and experiencing improvement, they are able to recognize that they actually had symptoms related to celiac disease. Diagnosis is further complicated by the fact that serological markers (anti-tissue transglutaminase [TG2] antibodies) are not always present and that many people have only minor mucosal lesions, without atrophy of the intestinal villi. CD affects approximately 1–2% of the general population, but most cases remain unrecognized, undiagnosed and untreated, and at risk for serious long-term health complications.
People may suffer severe disease symptoms and be subjected to extensive investigations for many years, before a proper diagnosis is achieved. Untreated CD may cause malabsorption, reduced quality of life, iron deficiency, osteoporosis, an increased risk of intestinal lymphomas, and greater mortality. CD is associated with some other autoimmune diseases, such as diabetes mellitus type 1, thyroiditis, gluten ataxia, psoriasis, vitiligo, autoimmune hepatitis, dermatitis herpetiformis, primary sclerosing cholangitis, and more. CD with "classic symptoms", which include gastrointestinal manifestations such as chronic diarrhea and abdominal distention, malabsorption, loss of appetite, and impaired growth, is currently the least common presentation form of the disease and affects predominantly small children generally younger than two years of age. CD with "non-classic symptoms" is the most common clinical type and occurs in older children (over two years old), adolescents, and adults. It is characterized by milder or even absent gastrointestinal symptoms and a wide spectrum of non-intestinal manifestations that can involve any organ of the body, and very frequently may be completely asymptomatic both in children (at least in 43% of the cases) and adults. Asymptomatic CD (ACD) is present in the majority of affected patients and is characterized by the absence of classical gluten-intolerance signs, such as diarrhea, bloating, and abdominal pain. Nevertheless, these individuals very often develop diseases that can be related with gluten intake. Gluten can be degraded into several morphine-like substances, named gluten exorphins. These compounds have proven opioid effects and could mask the deleterious effects of gluten protein on gastrointestinal lining and function. Non-celiac gluten sensitivity Non-celiac gluten sensitivity (NCGS) is described as a condition of multiple symptoms that improves when switching to a gluten-free diet, after celiac disease and wheat allergy are excluded. Recognized since 2010, it is included among gluten-related disorders. Its pathogenesis is not yet well understood, but the activation of the innate immune system, the direct negative effects of gluten and probably other wheat components, are implicated. NCGS is the most common syndrome of gluten intolerance, with a prevalence estimated to be 6-10%. NCGS is becoming a more common diagnosis, but its true prevalence is difficult to determine because many people self-diagnose and start a gluten-free diet, without having previously tested for celiac disease or having the dietary prescription from a physician. People with NCGS and gastrointestinal symptoms remain habitually in a "no man's land", without being recognized by the specialists and lacking the adequate medical care and treatment. Most of these people have a long history of health complaints and unsuccessful consultations with numerous physicians, trying to get a diagnosis of celiac disease, but they are only labeled as irritable bowel syndrome. A consistent although undefined number of people eliminate gluten because they identify it as responsible for their symptoms and these improve with the gluten-free diet, so they self-diagnose as NCGS. People with NCGS may develop gastrointestinal symptoms, which resemble those of irritable bowel syndrome or wheat allergy, or a wide variety of non-gastrointestinal symptoms, such as headache, chronic fatigue, fibromyalgia, atopic diseases, allergies, neurological diseases, or psychiatric disorders, among others. 
The results of a 2017 study suggest that NCGS may be a chronic disorder, as is the case with celiac disease. Besides gluten, additional components present in wheat, rye, barley, oats, and their derivatives, including other proteins called amylase-trypsin inhibitors (ATIs) and short-chain carbohydrates known as FODMAPs, may cause NCGS symptoms. As of 2019, reviews conclude that although FODMAPs present in wheat and related grains may play a role in non-celiac gluten sensitivity, they only explain certain gastrointestinal symptoms, such as bloating, but not the extra-digestive symptoms that people with non-celiac gluten sensitivity may develop, such as neurological disorders, fibromyalgia, psychological disturbances, and dermatitis. ATIs may cause toll-like receptor 4 (TLR4)-mediated intestinal inflammation in humans. Wheat allergy People can also experience adverse effects of wheat as result of a wheat allergy. As with most allergies, a wheat allergy causes the immune system to respond abnormally to a component of wheat that it treats as a threatening foreign body. This immune response is often time-limited and does not cause lasting harm to body tissues. Wheat allergy and celiac disease are different disorders. Gastrointestinal symptoms of wheat allergy are similar to those of celiac disease and non-celiac gluten sensitivity, but there is a different interval between exposure to wheat and onset of symptoms. An allergic reaction to wheat has a fast onset (from minutes to hours) after the consumption of food containing wheat and could include anaphylaxis. Gluten ataxia Gluten ataxia is an autoimmune disease triggered by the ingestion of gluten. With gluten ataxia, damage takes place in the cerebellum, the balance center of the brain that controls coordination and complex movements like walking, speaking and swallowing, with loss of Purkinje cells. People with gluten ataxia usually present gait abnormality or incoordination and tremor of the upper limbs. Gaze-evoked nystagmus and other ocular signs of cerebellar dysfunction are common. Myoclonus, palatal tremor, and opsoclonus-myoclonus may also appear. Early diagnosis and treatment with a gluten-free diet can improve ataxia and prevent its progression. The effectiveness of the treatment depends on the elapsed time from the onset of the ataxia until diagnosis, because the death of neurons in the cerebellum as a result of gluten exposure is irreversible. Gluten ataxia accounts for 40% of ataxias of unknown origin and 15% of all ataxias. Less than 10% of people with gluten ataxia present any gastrointestinal symptom, yet about 40% have intestinal damage. Other neurological disorders In addition to gluten ataxia, gluten sensitivity can cause a wide spectrum of neurological disorders, which develop with or without the presence of digestive symptoms or intestinal damage. These include peripheral neuropathy, epilepsy, headache, encephalopathy, vascular dementia, and various movement disorders (restless legs syndrome, chorea, parkinsonism, Tourette syndrome, palatal tremor, myoclonus, dystonia, opsoclonus myoclonus syndrome, paroxysms, dyskinesia, myorhythmia, myokymia). The diagnosis of underlying gluten sensitivity is complicated and delayed when there are no digestive symptoms. People who do experience gastrointestinal problems are more likely to receive a correct diagnosis and treatment. A strict gluten-free diet is the first-line treatment, which should be started as soon as possible. It is effective in most of these disorders. 
When dementia has progressed to an advanced degree, the diet has no beneficial effect. Cortical myoclonus appears to be treatment-resistant on both gluten-free diet and immunosuppression. Labeling People with gluten-related disorders have to remove gluten from their diet strictly, so they need clear labeling rules. The term "gluten-free" is generally used to indicate a supposed harmless level of gluten rather than a complete absence. The exact level at which gluten is harmless is uncertain and controversial. A 2008 systematic review tentatively concluded that consumption of less than 10 mg of gluten per day is unlikely to cause intestinal damage in people with celiac disease, although it noted that few reliable studies had been done. Regulation of the label "gluten-free" varies. International standards The Codex Alimentarius international standards for food labeling has a standard relating to the labeling of products as "gluten-free". It only applies to foods that would normally contain gluten. Brazil By law in Brazil, all food products must display labels clearly indicating whether or not they contain gluten. Canada Labels for all food products sold in Canada must clearly identify the presence of gluten if it is present at a level greater than 20 parts per million. European Union and United Kingdom In the European Union, all prepackaged foods and non-prepacked foods from a restaurant, take-out food wrapped just before sale, or unpackaged food served in institutions must be identified if gluten-free. "Gluten-free" is defined as 20 parts per million of gluten or less and "very low gluten" is 100 parts per million of gluten or less; only foods with cereal ingredients processed to remove gluten can claim "very low gluten" on labels. It is not allowed to label food as "gluten-free" when all similar food is naturally gluten-free, such as in the case of milk. All foods containing gluten as an ingredient must be labelled accordingly as gluten is defined as one of the 14 recognised EU allergens. United States In the United States, gluten is not listed on labels unless added as a standalone ingredient. Wheat or other allergens are listed after the ingredient line. The US Food and Drug Administration (FDA) has historically classified gluten as "generally recognized as safe" (GRAS). In August 2013, the FDA issued a final ruling, effective August 2014, that defined the term "gluten-free" for voluntary use in the labeling of foods as meaning that the amount of gluten contained in the food is below 20 parts per million.
Biology and health sciences
Proteins
Biology
13160
https://en.wikipedia.org/wiki/Gelatin
Gelatin
Gelatin or gelatine () is a translucent, colorless, flavorless food ingredient, commonly derived from collagen taken from animal body parts. It is brittle when dry and rubbery when moist. It may also be referred to as hydrolyzed collagen, collagen hydrolysate, gelatine hydrolysate, hydrolyzed gelatine, and collagen peptides after it has undergone hydrolysis. It is commonly used as a gelling agent in food, beverages, medications, drug or vitamin capsules, photographic films, papers, and cosmetics. Substances containing gelatin or functioning in a similar way are called gelatinous substances. Gelatin is an irreversibly hydrolyzed form of collagen, wherein the hydrolysis reduces protein fibrils into smaller peptides; depending on the physical and chemical methods of denaturation, the molecular weight of the peptides falls within a broad range. Gelatin is present in gelatin desserts, most gummy candy and marshmallows, ice creams, dips, and yogurts. Gelatin for cooking comes as powder, granules, and sheets. Instant types can be added to the food as they are; others must soak in water beforehand. Characteristics Properties Gelatin is a collection of peptides and proteins produced by partial hydrolysis of collagen extracted from the skin, bones, and connective tissues of animals such as domesticated cattle, chicken, pigs, and fish. During hydrolysis, some of the bonds between and within component proteins are broken. Its chemical composition is, in many aspects, closely similar to that of its parent collagen. Photographic and pharmaceutical grades of gelatin generally are sourced from cattle bones and pig skin. Gelatin is classified as a hydrogel. Gelatin is nearly tasteless and odorless with a colorless or slightly yellow appearance. It is transparent and brittle, and it can come as sheets, flakes, or as a powder. Polar solvents like hot water, glycerol, and acetic acid can dissolve gelatin, but it is insoluble in organic solvents like alcohol. Gelatin absorbs 5–10 times its weight in water to form a gel. The gel formed by gelatin can be melted by reheating, and it has an increasing viscosity under stress (thixotropic). The upper melting point of gelatin is below human body temperature, a factor that is important for mouthfeel of foods produced with gelatin. The viscosity of the gelatin-water mixture is greatest when the gelatin concentration is high and the mixture is kept cool at about . Commercial gelatin will have a gel strength of around 90 to 300 grams Bloom using the Bloom test of gel strength. Gelatin's strength (but not viscosity) declines if it is subjected to temperatures above , or if it is held at temperatures near 100 °C for an extended period of time. Gelatins have diverse melting points and gelation temperatures, depending on the source. For example, gelatin derived from fish has a lower melting and gelation point than gelatin derived from beef or pork. Composition When dry, gelatin consists of 98–99% protein, but it is not a nutritionally complete protein since it is missing tryptophan and is deficient in isoleucine, threonine, and methionine. The amino acid content of hydrolyzed collagen is the same as collagen. Hydrolyzed collagen contains 19 amino acids, predominantly glycine (Gly) 26–34%, proline (Pro) 10–18%, and hydroxyproline (Hyp) 7–15%, which together represent around 50% of the total amino acid content. Glycine is responsible for close packing of the chains. Presence of proline restricts the conformation. This is important for gelation properties of gelatin. 
Other amino acids that contribute highly include: alanine (Ala) 8–11%; arginine (Arg) 8–9%; aspartic acid (Asp) 6–7%; and glutamic acid (Glu) 10–12%. Research In 2011, the European Food Safety Authority Panel on Dietetic Products, Nutrition and Allergies concluded that "a cause and effect relationship has not been established between the consumption of collagen hydrolysate and maintenance of joints". Hydrolyzed collagen has been investigated as a type of wound dressing aimed at correcting imbalances in the wound microenvironment and the treatment of refractory wounds (chronic wounds that do not respond to normal treatment), as well as deep second-degree burn wounds. Safety concerns Hydrolyzed collagen, like gelatin, is made from animal by-products from the meat industry or sometimes animal carcasses removed and cleared by knackers, including skin, bones, and connective tissue. In 1997, the U.S. Food and Drug Administration (FDA), with support from the TSE (transmissible spongiform encephalopathy) Advisory Committee, began monitoring the potential risk of transmitting animal diseases, especially bovine spongiform encephalopathy (BSE), commonly known as mad cow disease. An FDA study from that year stated: "... steps such as heat, alkaline treatment, and filtration could be effective in reducing the level of contaminating TSE agents; however, scientific evidence is insufficient at this time to demonstrate that these treatments would effectively remove the BSE infectious agent if present in the source material." On 18 March 2016, the FDA finalized three previously issued interim final rules designed to further reduce the potential risk of BSE in human food. The final rule clarified that "gelatin is not considered a prohibited cattle material if it is manufactured using the customary industry processes specified." The Scientific Steering Committee (SSC) of the European Union in 2003 stated that the risk associated with bovine bone gelatin is very low or zero. In 2006, the European Food Safety Authority stated that the SSC opinion was confirmed, that the BSE risk of bone-derived gelatin was small, and that it recommended removal of the 2003 request to exclude the skull, brain, and vertebrae of bovine origin older than 12 months from the material used in gelatin manufacturing. Production In 2019, the worldwide demand of gelatin was about . On a commercial scale, gelatin is made from by-products of the meat and leather industries. Most gelatin is derived from pork skins, pork and cattle bones, or split cattle hides. Gelatin made from fish by-products avoids some of the religious objections to gelatin consumption. The raw materials are prepared by different curing, acid, and alkali processes that are employed to extract the dried collagen hydrolysate. These processes may take several weeks, and differences in such processes have great effects on the properties of the final gelatin products. Gelatin also can be prepared at home. Boiling certain cartilaginous cuts of meat or bones results in gelatin being dissolved into the water. Depending on the concentration, the resulting stock (when cooled) will form a jelly or gel naturally. This process is used for aspic. While many processes exist whereby collagen may be converted to gelatin, they all have several factors in common. The intermolecular and intramolecular bonds that stabilize insoluble collagen must be broken, and also, the hydrogen bonds that stabilize the collagen helix must be broken. 
The manufacturing processes of gelatin consists of several main stages: Pretreatments to make the raw materials ready for the main extraction step and to remove impurities that may have negative effects on physicochemical properties of the final gelatin product. Hydrolysis of collagen into gelatin. Extraction of gelatin from the hydrolysis mixture, which usually is done with hot water or dilute acid solutions as a multistage process. The refining and recovering treatments including filtration, clarification, evaporation, sterilization, drying, rutting, grinding, and sifting to remove the water from the gelatin solution, to blend the gelatin extracted, and to obtain dried, blended, ground final product. Pretreatments If the raw material used in the production of the gelatin is derived from bones, dilute acid solutions are used to remove calcium and other salts. Hot water or several solvents may be used to reduce the fat content, which should not exceed 1% before the main extraction step. If the raw material consists of hides and skin, then size reduction, washing, hair removal, and degreasing are necessary to prepare the materials for the hydrolysis step. Hydrolysis After preparation of the raw material, i.e., removing some of the impurities such as fat and salts, partially purified collagen is converted into gelatin through hydrolysis. Collagen hydrolysis is performed by one of three different methods: acid-, alkali-, and enzymatic hydrolysis. Acid treatment is especially suitable for less fully cross-linked materials such as pig skin collagen and normally requires 10 to 48 hours. Alkali treatment is suitable for more complex collagen such as that found in bovine hides and requires more time, normally several weeks. The purpose of the alkali treatment is to destroy certain chemical crosslinks still present in collagen. Within the gelatin industry, the gelatin obtained from acid-treated raw material has been called type-A gelatin and the gelatin obtained from alkali-treated raw material is referred to as type-B gelatin. Advances are occurring to optimize the yield of gelatin using enzymatic hydrolysis of collagen. The treatment time is shorter than that required for alkali treatment, and results in almost complete conversion to the pure product. The physical properties of the final gelatin product are considered better. Extraction Extraction is performed with either water or acid solutions at appropriate temperatures. All industrial processes are based on neutral or acid pH values because although alkali treatments speed up conversion, they also promote degradation processes. Acidic extraction conditions are extensively used in the industry, but the degree of acid varies with different processes. This extraction step is a multistage process, and the extraction temperature usually is increased in later extraction steps, which ensures minimum thermal degradation of the extracted gelatin. Recovery This process includes several steps such as filtration, evaporation, drying, grinding, and sifting. These operations are concentration-dependent and also dependent on the particular gelatin used. Gelatin degradation should be avoided and minimized, so the lowest temperature possible is used for the recovery process. Most recoveries are rapid, with all of the processes being done in several stages to avoid extensive deterioration of the peptide structure. A deteriorated peptide structure would result in a low gel strength, which is not generally desired. 
Uses Early history of food applications The 10th-century Kitab al-Tabikh includes a recipe for a fish aspic, made by boiling fish heads. A recipe for jelled meat broth is found in Le Viandier, written in or around 1375. In 15th century Britain, cattle hooves were boiled to produce a gel. By the late 17th century, the French inventor Denis Papin had discovered another method of gelatin extraction via boiling of bones. An English patent for gelatin production was granted in 1754. In 1812, the chemist further experimented with the use of hydrochloric acid to extract gelatin from bones, and later with steam extraction, which was much more efficient. The French government viewed gelatin as a potential source of cheap, accessible protein for the poor, particularly in Paris. Food applications in France and the United States during the 19th century appear to have established the versatility of gelatin, including the origin of its popularity in the US as Jell-O. In the mid-19th century, the American industrialist and inventor, Peter Cooper, registered a patent for a gelatin dessert powder he called "Portable Gelatin", which only needed the addition of water. In the late 19th century, Charles and Rose Knox set up the Charles B. Knox Gelatin Company in New York, which promoted and popularized the use of gelatin. Culinary uses Probably best known as a gelling agent in cooking, different types and grades of gelatin are used in a wide range of food and nonfood products. Common examples of foods that contain gelatin are gelatin desserts, trifles, aspic, marshmallows, candy corn, and confections such as Peeps, gummy bears, fruit snacks, and jelly babies. Gelatin may be used as a stabilizer, thickener, or texturizer in foods such as yogurt, cream cheese, and margarine; it is used, as well, in fat-reduced foods to simulate the mouthfeel of fat and to create volume. It also is used in the production of several types of Chinese soup dumplings, specifically Shanghainese soup dumplings, or xiaolongbao, as well as Shengjian mantou, a type of fried and steamed dumpling. The fillings of both are made by combining ground pork with gelatin cubes, and in the process of cooking, the gelatin melts, creating a soupy interior with a characteristic gelatinous stickiness. Gelatin is used for the clarification of juices, such as apple juice, and of vinegar. Isinglass is obtained from the swim bladders of fish. It is used as a fining agent for wine and beer. Besides hartshorn jelly, from deer antlers (hence the name "hartshorn"), isinglass was one of the oldest sources of gelatin. Cosmetics In cosmetics, hydrolyzed collagen may be found in topical creams, acting as a product texture conditioner, and moisturizer. Collagen implants or dermal fillers are also used to address the appearance of wrinkles, contour deficiencies, and acne scars, among others. The U.S. Food and Drug Administration has approved its use, and identifies cow (bovine) and human cells as the sources of these fillers. According to the FDA, the desired effects can last for 3–4 months, which is relatively the most short-lived compared to other materials used for the same purpose. Medicine Stabilizer in vaccines. Originally, gelatin constituted the shells of all drug and vitamin capsules to make them easier to swallow. Now, a vegetarian-acceptable alternative to gelatin, hypromellose, is also used, and is less expensive than gelatin to produce. Other technical uses Certain professional and theatrical lighting equipment use color gels to change the beam color. 
Historically, these were made with gelatin, hence the term, color gel. Some animal glues such as hide glue may be unrefined gelatin. It is used to hold silver halide crystals in an emulsion in virtually all photographic films and photographic papers. Despite significant effort, no suitable substitutes with the stability and low cost of gelatin have been found. Used as a carrier, coating, or separating agent for other substances, for example, it makes β-carotene water-soluble, thus imparting a yellow color to any soft drinks containing β-carotene. Ballistic gelatin is used to test and measure the performance of bullets shot from firearms. Gelatin is used as a binder in match heads and sandpaper. Cosmetics may contain a non-gelling variant of gelatin under the name hydrolyzed collagen (hydrolysate). Gelatin was first used as an external surface sizing for paper in 1337 and continued as a dominant sizing agent of all European papers through the mid-nineteenth century. In modern times, it is mostly found in watercolor paper, and occasionally in glossy printing papers, artistic papers, and playing cards. It maintains the wrinkles in crêpe paper. Biotechnology: Gelatin is also used in synthesizing hydrogels for tissue engineering applications. Gelatin is also used as a saturating agent in immunoassays, and as a coat. Gelatin degradation assay allows visualizing and quantifying invasion at the subcellular level instead of analyzing the invasive behavior of whole cells, for the study of cellular protrusions called invadopodia and podosomes, which are protrusive structures in cancer cells and play an important role in cell attachment and remodeling of the extracellular matrix (ECM). Religious considerations The consumption of gelatin from particular animals may be forbidden by religious rules or cultural taboos. Islamic halal and Jewish kosher customs generally require gelatin from sources other than pigs, such as cattle that have been slaughtered according to religious regulations (halal or kosher), or fish (that Jews and Muslims are allowed to consume). On the other hand, some Islamic jurists have argued that the chemical treatment "purifies" the gelatin enough to always be halal, an argument most common in the field of medicine. It has similarly been argued that gelatin in medicine is permissible in Judaism, as it is not used as food. According to The Jewish Dietary Laws, the book of kosher guidelines published by the Rabbinical Assembly, the organization of Conservative Jewish rabbis, all gelatin is kosher and pareve because the chemical transformation undergone in the manufacturing process renders it a different physical and chemical substance. Buddhist, Hindu, and Jain customs may require gelatin alternatives from sources other than animals, as many Hindus, almost all Jains and some Buddhists are vegetarian.
Biology and health sciences
Proteins
Biology
13169
https://en.wikipedia.org/wiki/Gneiss
Gneiss
Gneiss is a common and widely distributed type of metamorphic rock. It is formed by high-temperature and high-pressure metamorphic processes acting on formations composed of igneous or sedimentary rocks. This rock is formed under pressures ranging from 2 to 15 kbar, sometimes even more, and temperatures over 300 °C (572 °F). Gneiss nearly always shows a banded texture characterized by alternating darker and lighter colored bands and without a distinct cleavage. Gneisses are common in the ancient crust of continental shields. Some of the oldest rocks on Earth are gneisses, such as the Acasta Gneiss. Description In traditional English and North American usage, a gneiss is a coarse-grained metamorphic rock showing compositional banding (gneissic banding) but poorly developed schistosity and indistinct cleavage. In other words, it is a metamorphic rock composed of mineral grains easily seen with the unaided eye, which form obvious compositional layers, but which has only a weak tendency to fracture along these layers. In Europe, the term has been more widely applied to any coarse, mica-poor, high-grade metamorphic rock. The British Geological Survey (BGS) and the International Union of Geological Sciences (IUGS) both use gneiss as a broad textural category for medium- to coarse-grained metamorphic rock that shows poorly developed schistosity, with relatively thick compositional layering and a tendency to split into thick plates. Neither definition depends on composition or origin, though rocks poor in platy minerals are more likely to produce gneissose texture. Gneissose rocks thus are largely recrystallized but do not carry large quantities of micas, chlorite or other platy minerals. Metamorphic rock showing stronger schistosity is classified as schist, while metamorphic rock devoid of schistosity is called a granofels. Gneisses that are metamorphosed igneous rocks or their equivalent are termed granite gneisses, diorite gneisses, and so forth. Gneiss rocks may also be named after a characteristic component, such as garnet gneiss, biotite gneiss, albite gneiss, and so forth. Orthogneiss designates a gneiss derived from an igneous rock, and paragneiss is one derived from a sedimentary rock. Both the BGS and the IUGS use gneissose to describe rocks with the texture of gneiss, though gneissic also remains in common use. For example, a gneissose metagranite or a gneissic metagranite both mean a granite that has been metamorphosed and thereby acquired gneissose texture. Gneissic banding The minerals in gneiss are arranged into layers that appear as bands in cross section. This is called gneissic banding. The darker bands have relatively more mafic minerals (those containing more magnesium and iron). The lighter bands contain relatively more felsic minerals (minerals such as feldspar or quartz, which contain more of the lighter elements, such as aluminium, sodium, and potassium). The banding is developed at high temperature when the rock is more strongly compressed in one direction than in other directions (nonhydrostatic stress). The bands develop perpendicular to the direction of greatest compression, also called the shortening direction, as platy minerals are rotated or recrystallized into parallel layers. A common cause of nonhydrostatic stress is the subjection of the protolith (the original rock material that undergoes metamorphism) to extreme shearing force, a sliding force similar to pushing the top of a deck of cards in one direction and the bottom of the deck in the other direction.
These forces stretch out the rock like a plastic, and the original material is spread out into sheets. Per the polar decomposition theorem, the deformation produced by such a shearing force is equivalent to a rotation of the rock combined with shortening in one direction and extension in another. Some banding is formed from original rock material (protolith) that is subjected to extreme temperature and pressure and is composed of alternating layers of sandstone (lighter) and shale (darker), which is metamorphosed into bands of quartzite and mica. Another cause of banding is "metamorphic differentiation", which separates different materials into different layers through chemical reactions, a process not fully understood. Augen gneiss Augen gneiss, from the German Augen, meaning "eyes", is a gneiss resulting from metamorphism of granite, which contains characteristic elliptic or lenticular shear-bound grains (porphyroclasts), normally feldspar, surrounded by finer-grained material. The finer-grained material deforms around the more resistant feldspar grains to produce this texture. Migmatite Migmatite is a gneiss consisting of two or more distinct rock types, one of which has the appearance of an ordinary gneiss (the mesosome), and another of which has the appearance of an intrusive rock such as pegmatite, aplite, or granite (the leucosome). The rock may also contain a melanosome of mafic rock complementary to the leucosome. Migmatites are often interpreted as rock that has been partially melted, with the leucosome representing the silica-rich melt, the melanosome the residual solid rock left after partial melting, and the mesosome the original rock that has not yet experienced partial melting. Occurrences Gneisses are characteristic of areas of regional metamorphism that reaches the middle amphibolite to granulite metamorphic facies. In other words, the rock was metamorphosed at high temperature, at pressures between about 2 and 24 kbar. Many different varieties of rock can be metamorphosed to gneiss, so geologists are careful to add descriptions of the color and mineral composition to the name of any gneiss, such as garnet-biotite paragneiss or grayish-pink orthogneiss. Granite-greenstone belts Continental shields are regions of exposed ancient rock that make up the stable cores of continents. The rock exposed in the oldest regions of shields, which is of Archean age (over 2500 million years old), mostly belongs to granite-greenstone belts. The greenstone belts contain metavolcanic and metasedimentary rock that has undergone a relatively mild grade of metamorphism. The greenstone belts are surrounded by high-grade gneiss terrains showing highly deformed low-pressure, high-temperature metamorphism to the amphibolite or granulite facies. These form most of the exposed rock in Archean cratons. Gneiss domes Gneiss domes are common in orogenic belts (regions of mountain formation). They consist of a dome of gneiss intruded by younger granite and migmatite and mantled with sedimentary rock. These have been interpreted as a geologic record of two distinct mountain-forming events, with the first producing the granite basement and the second deforming and melting this basement to produce the domes. However, some gneiss domes may actually be the cores of metamorphic core complexes, regions of the deep crust brought to the surface and exposed during extension of the Earth's crust.
Examples The Acasta Gneiss is found in the Northwest Territories, Canada, on an island north of Yellowknife. This is one of the most ancient intact crustal fragments on Earth, metamorphosed 3.58 to 4.031 billion years ago. The Lewisian gneiss is found throughout the Outer Hebrides of Scotland, on the Scottish mainland west of the Moine Thrust, and on the islands of Coll and Tiree. These rocks are largely igneous in origin, mixed with metamorphosed marble, quartzite and mica schist, with later intrusions of basaltic dikes and granite magma. The Morton Gneiss is an Archean-age gneiss exposed in the Minnesota River Valley of southwestern Minnesota, United States. It is thought to be the oldest intact block of continental crust in the United States. The Peninsular Gneiss is a sequence of Archean gneisses found throughout the Indian Shield and ranging in age from 3400 to 2500 million years old. Etymology The word gneiss has been used in English since at least 1757. It is borrowed from the German word Gneis, which is probably derived from a Middle High German noun meaning "spark" (so called because the rock glitters). Uses Gneiss is used as a building material; facoidal gneiss, for example, has been used extensively in Rio de Janeiro. Gneiss has also been used as construction aggregate for asphalt pavement.
Physical sciences
Petrology
null
13191
https://en.wikipedia.org/wiki/HTML
HTML
Hypertext Markup Language (HTML) is the standard markup language for documents designed to be displayed in a web browser. It defines the content and structure of web content. It is often assisted by technologies such as Cascading Style Sheets (CSS) and scripting languages such as JavaScript, a programming language. Web browsers receive HTML documents from a web server or from local storage and render the documents into multimedia web pages. HTML describes the structure of a web page semantically and originally included cues for its appearance. HTML elements are the building blocks of HTML pages. With HTML constructs, images and other objects such as interactive forms may be embedded into the rendered page. HTML provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, links, quotes, and other items. HTML elements are delineated by tags, written using angle brackets. Tags such as <img> and <input> directly introduce content into the page. Other tags such as <p> and </p> surround and provide information about document text and may include sub-element tags. Browsers do not display the HTML tags but use them to interpret the content of the page. HTML can embed programs written in a scripting language such as JavaScript, which affects the behavior and content of web pages. The inclusion of CSS defines the look and layout of content. The World Wide Web Consortium (W3C), former maintainer of the HTML and current maintainer of the CSS standards, has encouraged the use of CSS over explicit presentational HTML. A form of HTML, known as HTML5, is used to display video and audio, primarily using dedicated multimedia elements together with JavaScript. History Development In 1980, physicist Tim Berners-Lee, a contractor at CERN, proposed and prototyped ENQUIRE, a system for CERN researchers to use and share documents. In 1989, Berners-Lee wrote a memo proposing an Internet-based hypertext system. Berners-Lee specified HTML and wrote the browser and server software in late 1990. That year, Berners-Lee and CERN data systems engineer Robert Cailliau collaborated on a joint request for funding, but the project was not formally adopted by CERN. In his personal notes of 1990, Berners-Lee listed "some of the many areas in which hypertext is used"; an encyclopedia is the first entry. The first publicly available description of HTML was a document called "HTML Tags", first mentioned on the Internet by Tim Berners-Lee in late 1991. It describes 18 elements comprising the initial, relatively simple design of HTML. Except for the hyperlink tag, these were strongly influenced by SGMLguid, an in-house Standard Generalized Markup Language (SGML)-based documentation format at CERN. Eleven of these elements still exist in HTML 4. HTML is a markup language that web browsers use to interpret and compose text, images, and other material into visible or audible web pages. Default characteristics for every item of HTML markup are defined in the browser, and these characteristics can be altered or enhanced by the web page designer's additional use of CSS. Many of the text elements are mentioned in the 1988 ISO technical report TR 9537 Techniques for using SGML, which describes the features of early text formatting languages such as that used by the RUNOFF command developed in the early 1960s for the CTSS (Compatible Time-Sharing System) operating system. These formatting commands were derived from the commands used by typesetters to manually format documents.
However, the SGML concept of generalized markup is based on elements (nested annotated ranges with attributes) rather than merely print effects, with separate structure and markup. HTML has been progressively moved in this direction with CSS. Berners-Lee considered HTML to be an application of SGML. It was formally defined as such by the Internet Engineering Task Force (IETF) with the mid-1993 publication of the first proposal for an HTML specification, the "Hypertext Markup Language (HTML)" Internet Draft by Berners-Lee and Dan Connolly, which included an SGML Document type definition to define the syntax. The draft expired after six months, but was notable for its acknowledgment of the NCSA Mosaic browser's custom tag for embedding in-line images, reflecting the IETF's philosophy of basing standards on successful prototypes. Similarly, Dave Raggett's competing Internet Draft, "HTML+ (Hypertext Markup Format)", from late 1993, suggested standardizing already-implemented features like tables and fill-out forms. After the HTML and HTML+ drafts expired in early 1994, the IETF created an HTML Working Group. In 1995, this working group completed "HTML 2.0", the first HTML specification intended to be treated as a standard against which future implementations should be based. Further development under the auspices of the IETF was stalled by competing interests. the HTML specifications have been maintained, with input from commercial software vendors, by the World Wide Web Consortium (W3C). In 2000, HTML became an international standard (ISO/IEC 15445:2000). HTML 4.01 was published in late 1999, with further errata published through 2001. In 2004, development began on HTML5 in the Web Hypertext Application Technology Working Group (WHATWG), which became a joint deliverable with the W3C in 2008, and was completed and standardized on 28 October 2014. HTML version timeline HTML 2 November 24, 1995 HTML 2.0 was published as . Supplemental RFCs added capabilities: November 25, 1995: (form-based file upload) May 1996: (tables) August 1996: (client-side image maps) January 1997: (internationalization) HTML 3 January 14, 1997 HTML 3.2 was published as a W3C Recommendation. It was the first version developed and standardized exclusively by the W3C, as the IETF had closed its HTML Working Group on September 12, 1996. Initially code-named "Wilbur", HTML 3.2 dropped math formulas entirely, reconciled overlap among various proprietary extensions and adopted most of Netscape's visual markup tags. Netscape's blink element and Microsoft's marquee element were omitted due to a mutual agreement between the two companies. A markup for mathematical formulas similar to that of HTML was standardized 14 months later in MathML. HTML 4 December 18, 1997 HTML 4.0 was published as a W3C Recommendation. It offers three variations: Strict, in which deprecated elements are forbidden Transitional, in which deprecated elements are allowed Frameset, in which mostly only frame related elements are allowed. Initially code-named "Cougar", HTML 4.0 adopted many browser-specific element types and attributes, but also sought to phase out Netscape's visual markup features by marking them as deprecated in favor of style sheets. HTML 4 is an SGML application conforming to ISO 8879 – SGML. April 24, 1998 HTML 4.0 was reissued with minor edits without incrementing the version number. December 24, 1999 HTML 4.01 was published as a W3C Recommendation. 
It offers the same three variations as HTML 4.0 and its last errata were published on May 12, 2001. May 2000 ISO/IEC 15445:2000 ("ISO HTML", based on HTML 4.01 Strict) was published as an ISO/IEC international standard. In the ISO, this standard is in the domain of the ISO/IEC JTC 1/SC 34 (ISO/IEC Joint Technical Committee 1, Subcommittee 34 – Document description and processing languages). After HTML 4.01, there were no new versions of HTML for many years, as the development of the parallel, XML-based language XHTML occupied the W3C's HTML Working Group. HTML 5 October 28, 2014 HTML5 was published as a W3C Recommendation. November 1, 2016 HTML 5.1 was published as a W3C Recommendation. December 14, 2017 HTML 5.2 was published as a W3C Recommendation. HTML draft version timeline October 1991 HTML Tags, an informal CERN document listing 18 HTML tags, was first mentioned in public. June 1992 First informal draft of the HTML DTD, with seven subsequent revisions (July 15, August 6, August 18, November 17, November 19, November 20, November 22) November 1992 HTML DTD 1.1 (the first with a version number, based on RCS revisions, which start with 1.1 rather than 1.0), an informal draft June 1993 Hypertext Markup Language was published by the IETF IIIR Working Group as an Internet Draft (a rough proposal for a standard). It was replaced by a second version one month later. November 1993 HTML+ was published by the IETF as an Internet Draft and was a competing proposal to the Hypertext Markup Language draft. It expired in July 1994. November 1994 First draft (revision 00) of HTML 2.0 published by IETF itself (called as "HTML 2.0" from revision 02), that finally led to the publication of in November 1995. April 1995 (authored March 1995) HTML 3.0 was proposed as a standard to the IETF, but the proposal expired five months later (28 September 1995) without further action. It included many of the capabilities that were in Raggett's HTML+ proposal, such as support for tables, text flow around figures, and the display of complex mathematical formulas. W3C began development of its own Arena browser as a test bed for HTML 3 and Cascading Style Sheets, but HTML 3.0 did not succeed for several reasons. The draft was considered very large at 150 pages and the pace of browser development, as well as the number of interested parties, had outstripped the resources of the IETF. Browser vendors, including Microsoft and Netscape at the time, chose to implement different subsets of HTML 3's draft features as well as to introduce their own extensions to it. (See browser wars.) These included extensions to control stylistic aspects of documents, contrary to the "belief [of the academic engineering community] that such things as text color, background texture, font size, and font face were definitely outside the scope of a language when their only intent was to specify how a document would be organized." Dave Raggett, who has been a W3C Fellow for many years, has commented for example: "To a certain extent, Microsoft built its business on the Web by extending HTML features." January 2008 HTML5 was published as a Working Draft by the W3C. Although its syntax closely resembles that of SGML, HTML5 has abandoned any attempt to be an SGML application and has explicitly defined its own "html" serialization, in addition to an alternative XML-based XHTML5 serialization. 2011 HTML5 – Last Call On 14 February 2011, the W3C extended the charter of its HTML Working Group with clear milestones for HTML5. 
In May 2011, the working group advanced HTML5 to "Last Call", an invitation to communities inside and outside W3C to confirm the technical soundness of the specification. The W3C developed a comprehensive test suite to achieve broad interoperability for the full specification by 2014, which was the target date for recommendation. In January 2011, the WHATWG renamed its "HTML5" living standard to "HTML". The W3C nevertheless continues its project to release HTML5. 2012 HTML5 – Candidate Recommendation In July 2012, WHATWG and W3C decided on a degree of separation. W3C will continue the HTML5 specification work, focusing on a single definitive standard, which is considered a "snapshot" by WHATWG. The WHATWG organization will continue its work with HTML5 as a "Living Standard". The concept of a living standard is that it is never complete and is always being updated and improved. New features can be added but functionality will not be removed. In December 2012, W3C designated HTML5 as a Candidate Recommendation. The criterion for advancement to W3C Recommendation is "two 100% complete and fully interoperable implementations". 2014 HTML5 – Proposed Recommendation and Recommendation In September 2014, W3C moved HTML5 to Proposed Recommendation. On 28 October 2014, HTML5 was released as a stable W3C Recommendation, meaning the specification process is complete. XHTML versions XHTML is a separate language that began as a reformulation of HTML 4.01 using XML 1.0. It is now referred to as the XML syntax for HTML and is no longer being developed as a separate standard. XHTML 1.0 was published as a W3C Recommendation on January 26, 2000, and was later revised and republished on August 1, 2002. It offers the same three variations as HTML 4.0 and 4.01, reformulated in XML, with minor restrictions. XHTML 1.1 was published as a W3C Recommendation on May 31, 2001. It is based on XHTML 1.0 Strict, but includes minor changes, can be customized, and is reformulated using modules in the W3C recommendation "Modularization of XHTML", which was published on April 10, 2001. XHTML 2.0 was a working draft. Work on it was abandoned in 2009 in favor of work on HTML5 and XHTML5. XHTML 2.0 was incompatible with XHTML 1.x and, therefore, would be more accurately characterized as an XHTML-inspired new language than an update to XHTML 1.x. Transition of HTML publication to WHATWG On 28 May 2019, the W3C announced that WHATWG would be the sole publisher of the HTML and DOM standards. The W3C and WHATWG had been publishing competing standards since 2012. While the W3C standard was identical to the WHATWG in 2007 the standards have since progressively diverged due to different design decisions. The WHATWG "Living Standard" had been the de facto web standard for some time. Markup HTML markup consists of several key components, including those called tags (and their attributes), character-based data types, character references and entity references. HTML tags most commonly come in pairs like and , although some represent empty elements and so are unpaired, for example . The first tag in such a pair is the start tag, and the second is the end tag (they are also called opening tags and closing tags). Another important component is the HTML document type declaration, which triggers standards mode rendering. The following is an example of the classic "Hello, World!" 
program:
<!DOCTYPE html>
<html>
  <head>
    <title>This is a title</title>
  </head>
  <body>
    <div>
      <p>Hello world!</p>
    </div>
  </body>
</html>
The text between <html> and </html> describes the web page, and the text between <body> and </body> is the visible page content. The markup text <title>This is a title</title> defines the browser page title shown on browser tabs and window titles, and the <div> tag defines a division of the page used for easy styling. Between <head> and </head>, a <meta> element can be used to define webpage metadata. The Document Type Declaration <!DOCTYPE html> is for HTML5. If a declaration is not included, various browsers will revert to "quirks mode" for rendering. Elements HTML documents imply a structure of nested HTML elements. These are indicated in the document by HTML tags, enclosed in angle brackets thus: <p>. In the simple, general case, the extent of an element is indicated by a pair of tags: a "start tag" <p> and "end tag" </p>. The text content of the element, if any, is placed between these tags. Tags may also enclose further tag markup between the start and end, including a mixture of tags and text. This indicates further (nested) elements, as children of the parent element. The start tag may also include the element's attributes within the tag. These indicate other information, such as identifiers for sections within the document, identifiers used to bind style information to the presentation of the document, and, for some tags such as the <img> tag used to embed images, the reference to the image resource, in a format like this: <img src="example.jpg">. Some elements, such as the line break <br>, do not permit any embedded content, either text or further tags. These require only a single empty tag (akin to a start tag) and do not use an end tag. Many tags, particularly the closing end tag for the very commonly used paragraph element <p>, are optional. An HTML browser or other agent can infer the closure for the end of an element from the context and the structural rules defined by the HTML standard. These rules are complex and not widely understood by most HTML authors. The general form of an HTML element is therefore: <tag attribute1="value1" attribute2="value2">content</tag>. Some HTML elements are defined as empty elements and take the form <tag attribute1="value1">. Empty elements may enclose no content, for instance, the <br> tag or the inline <img> tag. The name of an HTML element is the name used in the tags. The end tag's name is preceded by a slash character, /; in empty elements an end tag is neither required nor allowed. If attributes are not mentioned, default values are used in each case. Element examples Header of the HTML document: <head>...</head>. The title is included in the head, for example:
<head>
  <title>The Title</title>
  <link rel="stylesheet" href="stylebyjimbowales.css"> <!-- Imports Stylesheets -->
</head>
Headings HTML headings are defined with the <h1> to <h6> tags, with H1 being the highest (or most important) level and H6 the least:
<h1>Heading level 1</h1>
<h2>Heading level 2</h2>
<h3>Heading level 3</h3>
<h4>Heading level 4</h4>
<h5>Heading level 5</h5>
<h6>Heading level 6</h6>
The effects are: Heading Level 1 Heading Level 2 Heading Level 3 Heading Level 4 Heading Level 5 Heading Level 6 CSS can substantially change the rendering. Paragraphs:
<p>Paragraph 1</p> <p>Paragraph 2</p>
Line breaks: <br>. The difference between <br> and <p> is that <br> breaks a line without altering the semantic structure of the page, whereas <p> sections the page into paragraphs. The <br> element is an empty element in that, although it may have attributes, it can take no content and it may not have an end tag.
<p>This <br> is a paragraph <br> with <br> line breaks</p>
Links This is a link in HTML. To create a link, the <a> tag is used.
The href attribute holds the URL address of the link.
<a href="https://www.wikipedia.org/">A link to Wikipedia!</a>
Inputs There are many possible ways a user can give inputs like:
<input type="text"> <!-- This is for text input -->
<input type="file"> <!-- This is for uploading files -->
<input type="checkbox"> <!-- This is for checkboxes -->
Comments: <!-- This is a comment --> Comments can help in the understanding of the markup and do not display in the webpage. There are several types of markup elements used in HTML: Structural markup indicates the purpose of text. For example, <h2>Golf</h2> establishes "Golf" as a second-level heading. Structural markup does not denote any specific rendering, but most web browsers have default styles for element formatting. Content may be further styled using Cascading Style Sheets (CSS). Presentational markup indicates the appearance of the text, regardless of its purpose. For example, <b>boldface</b> indicates that visual output devices should render "boldface" in bold text, but gives little indication what devices that are unable to do this (such as aural devices that read the text aloud) should do. In the case of both <b> and <i>, there are other elements that may have equivalent visual renderings but that are more semantic in nature, such as <strong> and <em> respectively. It is easier to see how an aural user agent should interpret the latter two elements. However, they are not equivalent to their presentational counterparts: it would be undesirable for a screen reader to emphasize the name of a book, for instance, but on a screen, such a name would be italicized. Most presentational markup elements have become deprecated under the HTML 4.0 specification in favor of using CSS for styling. Hypertext markup makes parts of a document into links to other documents. An anchor element creates a hyperlink in the document and its href attribute sets the link's target URL. For example, the HTML markup <a href="https://www.wikipedia.org/">Wikipedia</a> will render the word "Wikipedia" as a hyperlink. To render an image as a hyperlink, an img element is inserted as content into the a element. Like br, img is an empty element with attributes but no content or closing tag. Attributes Most of the attributes of an element are name–value pairs, separated by = and written within the start tag of an element after the element's name. The value may be enclosed in single or double quotes, although values consisting of certain characters can be left unquoted in HTML (but not XHTML). Leaving attribute values unquoted is considered unsafe. In contrast with name-value pair attributes, there are some attributes that affect the element simply by their presence in the start tag of the element, like the ismap attribute for the img element. There are several common attributes that may appear in many elements: The id attribute provides a document-wide unique identifier for an element. This is used to identify the element so that stylesheets can alter its presentational properties, and scripts may alter, animate or delete its contents or presentation. Appended to the URL of the page, it provides a globally unique identifier for the element, typically a sub-section of the page. For example, the ID "Attributes" in https://en.wikipedia.org/wiki/HTML#Attributes. The class attribute provides a way of classifying similar elements. This can be used for semantic or presentation purposes. For example, an HTML document might semantically use the designation class="notation" to indicate that all elements with this class value are subordinate to the main text of the document.
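A minimal sketch of such a class in use (the element choices and the wording are invented here purely for illustration, not taken from the text above):

<p>The main argument of the document goes here.</p>
<p class="notation">A side remark that is subordinate to the main text.</p>
<span class="notation">Another subordinate note, marked with the same class value.</span>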
In presentation, such elements might be gathered together and presented as footnotes on a page instead of appearing in the place where they occur in the HTML source. Class attributes are used semantically in microformats. Multiple class values may be specified; for example puts the element into both the notation and the important classes. An author may use the style attribute to assign presentational properties to a particular element. It is considered better practice to use an element's id or class attributes to select the element from within a stylesheet, though sometimes this can be too cumbersome for a simple, specific, or ad hoc styling. The title attribute is used to attach a subtextual explanation to an element. In most browsers this attribute is displayed as a tooltip. The lang attribute identifies the natural language of the element's contents, which may be different from that of the rest of the document. For example, in an English-language document: <p>Oh well, <span lang="fr">c'est la vie</span>, as they say in France.</p> The abbreviation element, abbr, can be used to demonstrate some of these attributes: <abbr id="anId" class="jargon" style="color:purple;" title="Hypertext Markup Language">HTML</abbr> This example displays as HTML; in most browsers, pointing the cursor at the abbreviation should display the title text "Hypertext Markup Language." Most elements take the language-related attribute dir to specify text direction, such as with "rtl" for right-to-left text in, for example, Arabic, Persian or Hebrew. Character and entity references As of version 4.0, HTML defines a set of 252 character entity references and a set of 1,114,050 numeric character references, both of which allow individual characters to be written via simple markup, rather than literally. A literal character and its markup counterpart are considered equivalent and are rendered identically. The ability to "escape" characters in this way allows for the characters < and & (when written as &lt; and &amp;, respectively) to be interpreted as character data, rather than markup. For example, a literal < normally indicates the start of a tag, and & normally indicates the start of a character entity reference or numeric character reference; writing it as &amp; or &#x26; or &#38; allows & to be included in the content of an element or in the value of an attribute. The double-quote character ("), when not used to quote an attribute value, must also be escaped as &quot; or &#x22; or &#34; when it appears within the attribute value itself. Equivalently, the single-quote character ('), when not used to quote an attribute value, must also be escaped as &#x27; or &#39; (or as &apos; in HTML5 or XHTML documents) when it appears within the attribute value itself. If document authors overlook the need to escape such characters, some browsers can be very forgiving and try to use context to guess their intent. The result is still invalid markup, which makes the document less accessible to other browsers and to other user agents that may try to parse the document for search and indexing purposes for example. Escaping also allows for characters that are not easily typed, or that are not available in the document's character encoding, to be represented within the element and attribute content. 
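A small constructed example (the sentence itself is invented for illustration): to display the text "Use <b> for bold & nothing else" literally in a page, the markup characters must be escaped, as in <p>Use &lt;b&gt; for bold &amp; nothing else.</p> which the browser renders as the literal text rather than parsing &lt;b&gt; as a tag.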
For example, the acute-accented e (é), a character typically found only on Western European and South American keyboards, can be written in any HTML document as the entity reference &eacute; or as the numeric references &#xE9; or &#233;, using characters that are available on all keyboards and are supported in all character encodings. Unicode character encodings such as UTF-8 are compatible with all modern browsers and allow direct access to almost all the characters of the world's writing systems. Data types HTML defines several data types for element content, such as script data and stylesheet data, and a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length, languages, media descriptors, colors, character encodings, dates and times, and so on. All of these data types are specializations of character data. Document type declaration HTML documents are required to start with a document type declaration (informally, a "doctype"). In browsers, the doctype helps to define the rendering mode—particularly whether to use quirks mode. The original purpose of the doctype was to enable the parsing and validation of HTML documents by SGML tools based on the document type definition (DTD). The DTD to which the DOCTYPE refers contains a machine-readable grammar specifying the permitted and prohibited content for a document conforming to such a DTD. Browsers, on the other hand, do not implement HTML as an application of SGML and as consequence do not read the DTD. HTML5 does not define a DTD; therefore, in HTML5 the doctype declaration is simpler and shorter: <!DOCTYPE html> An example of an HTML 4 doctype <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "https://www.w3.org/TR/html4/strict.dtd"> This declaration references the DTD for the "strict" version of HTML 4.01. SGML-based validators read the DTD in order to properly parse the document and to perform validation. In modern browsers, a valid doctype activates standards mode as opposed to quirks mode. In addition, HTML 4.01 provides Transitional and Frameset DTDs, as explained below. The transitional type is the most inclusive, incorporating current tags as well as older or "deprecated" tags, with the Strict DTD excluding deprecated tags. The frameset has all tags necessary to make frames on a page along with the tags included in transitional type. Semantic HTML Semantic HTML is a way of writing HTML that emphasizes the meaning of the encoded information over its presentation (look). HTML has included semantic markup from its inception, but has also included presentational markup, such as , and tags. There are also the semantically neutral div and span tags. Since the late 1990s, when Cascading Style Sheets were beginning to work in most browsers, web authors have been encouraged to avoid the use of presentational HTML markup with a view to the separation of content and presentation. In a 2001 discussion of the Semantic Web, Tim Berners-Lee and others gave examples of ways in which intelligent software "agents" may one day automatically crawl the web and find, filter, and correlate previously unrelated, published facts for the benefit of human users. Such agents are not commonplace even now, but some of the ideas of Web 2.0, mashups and price comparison websites may be coming close. 
The main difference between these web application hybrids and Berners-Lee's semantic agents lies in the fact that the current aggregation and hybridization of information is usually designed by web developers, who already know the web locations and the API semantics of the specific data they wish to mash, compare and combine. An important type of web agent that does crawl and read web pages automatically, without prior knowledge of what it might find, is the web crawler or search-engine spider. These software agents are dependent on the semantic clarity of web pages they find as they use various techniques and algorithms to read and index millions of web pages a day and provide web users with search facilities without which the World Wide Web's usefulness would be greatly reduced. In order for search engine spiders to be able to rate the significance of pieces of text they find in HTML documents, and also for those creating mashups and other hybrids as well as for more automated agents as they are developed, the semantic structures that exist in HTML need to be widely and uniformly applied to bring out the meaning of the published text. Presentational markup tags are deprecated in current HTML and XHTML recommendations. The majority of presentational features from previous versions of HTML are no longer allowed as they lead to poorer accessibility, higher cost of site maintenance, and larger document sizes. Good semantic HTML also improves the accessibility of web documents (see also Web Content Accessibility Guidelines). For example, when a screen reader or audio browser can correctly ascertain the structure of a document, it will not waste the visually impaired user's time by reading out repeated or irrelevant information when it has been marked up correctly. Delivery HTML documents can be delivered by the same means as any other computer file. However, they are most often delivered either by HTTP from a web server or by email. HTTP The World Wide Web is composed primarily of HTML documents transmitted from web servers to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used to serve images, sound, and other content, in addition to HTML. To allow the web browser to know how to handle each document it receives, other information is transmitted along with the document. This meta data usually includes the MIME type (e.g., text/html or application/xhtml+xml) and the character encoding (see Character encodings in HTML). In modern browsers, the MIME type that is sent with the HTML document may affect how the document is initially interpreted. A document sent with the XHTML MIME type is expected to be well-formed XML; syntax errors may cause the browser to fail to render it. The same document sent with the HTML MIME type might be displayed successfully since some browsers are more lenient with HTML. The W3C recommendations state that XHTML 1.0 documents that follow guidelines set forth in the recommendation's Appendix C may be labeled with either MIME Type. XHTML 1.1 also states that XHTML 1.1 documents should be labeled with either MIME type. HTML e-mail Most graphical email clients allow the use of a subset of HTML (often ill-defined) to provide formatting and semantic markup not available with plain text. This may include typographic information like colored headings, emphasized and quoted text, inline images and diagrams. Many such clients include both a GUI editor for composing HTML e-mail messages and a rendering engine for displaying them. 
Use of HTML in e-mail is criticized by some because of compatibility issues, because it can help disguise phishing attacks, because of accessibility issues for blind or visually impaired people, because it can confuse spam filters and because the message size is larger than plain text. Naming conventions The most common filename extension for files containing HTML is .html. A common abbreviation of this is .htm, which originated because some early operating systems and file systems, such as DOS with the limitations imposed by its FAT data structure, limited file extensions to three letters. HTML Application An HTML Application (HTA; file extension .hta) is a Microsoft Windows application that uses HTML and Dynamic HTML in a browser to provide the application's graphical interface. A regular HTML file is confined to the web browser's security model, communicating only with web servers and manipulating only web page objects and site cookies. An HTA runs as a fully trusted application and therefore has more privileges, like creation/editing/removal of files and Windows Registry entries. Because they operate outside the browser's security model, HTAs cannot be executed via HTTP, but must be downloaded (just like an EXE file) and executed from the local file system. HTML4 variations Since its inception, HTML and its associated protocols gained acceptance relatively quickly. However, no clear standards existed in the early years of the language. Though its creators originally conceived of HTML as a semantic language devoid of presentation details, practical uses pushed many presentational elements and attributes into the language, driven largely by the various browser vendors. The latest standards surrounding HTML reflect efforts to overcome the sometimes chaotic development of the language and to create a rational foundation for building both meaningful and well-presented documents. To return HTML to its role as a semantic language, the W3C has developed style languages such as CSS and XSL to shoulder the burden of presentation. In conjunction, the HTML specification has slowly reined in the presentational elements. There are two axes differentiating the variations of HTML as currently specified: SGML-based HTML versus XML-based HTML (referred to as XHTML) on one axis, and strict versus transitional (loose) versus frameset on the other axis. SGML-based versus XML-based HTML One difference in the latest HTML specifications lies in the distinction between the SGML-based specification and the XML-based specification. The XML-based specification is usually called XHTML to distinguish it clearly from the more traditional definition. However, the root element name continues to be "html" even in the XHTML-specified HTML. The W3C intended XHTML 1.0 to be identical to HTML 4.01 except where limitations of XML over the more complex SGML require workarounds. Because XHTML and HTML are closely related, they are sometimes documented in parallel. In such circumstances, some authors conflate the two names as (X)HTML or X(HTML). Like HTML 4.01, XHTML 1.0 has three sub-specifications: strict, transitional, and frameset. Aside from the different opening declarations for a document, the differences between an HTML 4.01 and XHTML 1.0 document—in each of the corresponding DTDs—are largely syntactic. The underlying syntax of HTML allows many shortcuts that XHTML does not, such as elements with optional opening or closing tags, and even empty elements which must not have an end tag.
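A brief constructed sketch of markup that relies on these shortcuts (the list content is invented purely for illustration): <ul> <li>First item <li>Second item </ul> <p>A paragraph followed by a line break<br> Here the li and p end tags are omitted and br has no end tag at all, yet the fragment is still valid HTML 4.01.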
By contrast, XHTML requires all elements to have an opening tag and a closing tag. XHTML, however, also introduces a new shortcut: an XHTML tag may be opened and closed within the same tag, by including a slash before the end of the tag like this: . The introduction of this shorthand, which is not used in the SGML declaration for HTML 4.01, may confuse earlier software unfamiliar with this new convention. A fix for this is to include a space before closing the tag, as such: . To understand the subtle differences between HTML and XHTML, consider the transformation of a valid and well-formed XHTML 1.0 document that adheres to Appendix C (see below) into a valid HTML 4.01 document. Making this translation requires the following steps: The language for an element should be specified with a lang attribute rather than the XHTML xml:lang attribute. XHTML uses XML's built-in language-defining functionality attribute. Remove the XML namespace (xmlns=URI). HTML has no facilities for namespaces. Change the document type declaration from XHTML 1.0 to HTML 4.01. (see DTD section for further explanation). If present, remove the XML declaration. (Typically this is: ). Ensure that the document's MIME type is set to text/html. For both HTML and XHTML, this comes from the HTTP Content-Type header sent by the server. Change the XML empty-element syntax to an HTML style empty element ( to ). Those are the main changes necessary to translate a document from XHTML 1.0 to HTML 4.01. To translate from HTML to XHTML would also require the addition of any omitted opening or closing tags. Whether coding in HTML or XHTML it may just be best to always include the optional tags within an HTML document rather than remembering which tags can be omitted. A well-formed XHTML document adheres to all the syntax requirements of XML. A valid document adheres to the content specification for XHTML, which describes the document structure. The W3C recommends several conventions to ensure an easy migration between HTML and XHTML (see HTML Compatibility Guidelines). The following steps can be applied to XHTML 1.0 documents only: Include both xml:lang and lang attributes on any elements assigning language. Use the empty-element syntax only for elements specified as empty in HTML. Include an extra space in empty-element tags: for example instead of . Include explicit close tags for elements that permit content but are left empty (for example, , not ). Omit the XML declaration. By carefully following the W3C's compatibility guidelines, a user agent should be able to interpret the document equally as HTML or XHTML. For documents that are XHTML 1.0 and have been made compatible in this way, the W3C permits them to be served either as HTML (with a text/html MIME type), or as XHTML (with an application/xhtml+xml or application/xml MIME type). When delivered as XHTML, browsers should use an XML parser, which adheres strictly to the XML specifications for parsing the document's contents. Transitional versus strict HTML 4 defined three different versions of the language: Strict, Transitional (once called Loose), and Frameset. The Strict version is intended for new documents and is considered best practice, while the Transitional and Frameset versions were developed to make it easier to transition documents that conformed to older HTML specifications or did not conform to any specification to a version of HTML 4. The Transitional and Frameset versions allow for presentational markup, which is omitted in the Strict version. 
Instead, cascading style sheets are encouraged to improve the presentation of HTML documents. Because XHTML 1 only defines an XML syntax for the language defined by HTML 4, the same differences apply to XHTML 1 as well. The Transitional version allows the following parts of the vocabulary, which are not included in the Strict version: A looser content model Inline elements and plain text are allowed directly in: body, blockquote, form, noscript and noframes Presentation related elements underline (u) (Deprecated. can confuse a visitor with a hyperlink.) strike-through (s) center (Deprecated. use CSS instead.) font (Deprecated. use CSS instead.) basefont (Deprecated. use CSS instead.) Presentation related attributes background (Deprecated. use CSS instead.) and bgcolor (Deprecated. use CSS instead.) attributes for body (required element according to the W3C.) element. align (Deprecated. use CSS instead.) attribute on div, form, paragraph (p) and heading (h1...h6) elements align (Deprecated. use CSS instead.), noshade (Deprecated. use CSS instead.), size (Deprecated. use CSS instead.) and width (Deprecated. use CSS instead.) attributes on hr element align (Deprecated. use CSS instead.), border, vspace and hspace attributes on img and object (caution: the object element is only supported in Internet Explorer (from the major browsers)) elements align (Deprecated. use CSS instead.) attribute on legend and caption elements align (Deprecated. use CSS instead.) and bgcolor (Deprecated. use CSS instead.) on table element nowrap (Obsolete), bgcolor (Deprecated. use CSS instead.), width, height on td and th elements bgcolor (Deprecated. use CSS instead.) attribute on tr element clear (Obsolete) attribute on br element compact attribute on dl, dir and menu elements type (Deprecated. use CSS instead.), compact (Deprecated. use CSS instead.) and start (Deprecated. use CSS instead.) attributes on ol and ul elements type and value attributes on li element width attribute on pre element Additional elements in Transitional specification menu (Deprecated. use CSS instead.) list (no substitute, though the unordered list, is recommended) dir (Deprecated. use CSS instead.) list (no substitute, though the unordered list is recommended) isindex (Deprecated.) (element requires server-side support and is typically added to documents server-side, form and input elements can be used as a substitute) applet (Deprecated. use the object element instead.) The language (Obsolete) attribute on script element (redundant with the type attribute). Frame related entities iframe noframes target (Deprecated in the map, link and form elements.) attribute on a, client-side image-map (map), link, form and base elements The Frameset version includes everything in the Transitional version, as well as the frameset element (used instead of body) and the frame element. Frameset versus transitional In addition to the above transitional differences, the frameset specifications (whether XHTML 1.0 or HTML 4.01) specify a different content model, with frameset replacing body, that contains either frame elements, or optionally noframes with a body. Summary of specification versions As this list demonstrates, the loose versions of the specification are maintained for legacy support. However, contrary to popular misconceptions, the move to XHTML does not imply a removal of this legacy support. Rather the X in XML stands for extensible and the W3C is modularizing the entire specification and opens it up to independent extensions. 
The primary achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire specification. The strict version of HTML is deployed in XHTML 1.1 through a set of modular extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose (transitional) or frameset specifications will find similar extended XHTML 1.1 support (much of it is contained in the legacy or frame modules). Modularization also allows for separate features to develop on their own timetable. So for example, XHTML 1.1 will allow quicker migration to emerging XML standards such as MathML (a presentational and semantic math language based on XML) and XForms—a new highly advanced web-form technology to replace the existing HTML forms. In summary, the HTML 4 specification primarily reined in all the various HTML implementations into a single clearly written specification based on SGML. XHTML 1.0, ported this specification, as is, to the new XML-defined specification. Next, XHTML 1.1 takes advantage of the extensible nature of XML and modularizes the whole specification. XHTML 2.0 was intended to be the first step in adding new features to the specification in a standards-body-based approach. WHATWG HTML versus HTML5 The HTML Living Standard, which is developed by WHATWG, is the official version, while W3C HTML5 is no longer separate from WHATWG. WYSIWYG editors There are some WYSIWYG editors (what you see is what you get), in which the user lays out everything as it is to appear in the HTML document using a graphical user interface (GUI), often similar to word processors. The editor renders the document rather than showing the code, so authors do not require extensive knowledge of HTML. The WYSIWYG editing model has been criticized, primarily because of the low quality of the generated code; there are voices advocating a change to the WYSIWYM model (what you see is what you mean). WYSIWYG editors remain a controversial topic because of their perceived flaws such as: Relying mainly on the layout as opposed to meaning, often using markup that does not convey the intended meaning but simply copies the layout. Often producing extremely verbose and redundant code that fails to make use of the cascading nature of HTML and CSS. Often producing ungrammatical markup, called tag soup or semantically incorrect markup (such as for italics). As a great deal of the information in HTML documents is not in the layout, the model has been criticized for its "what you see is all you get"-nature.
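As a constructed illustration of this criticism (both snippets are invented), a WYSIWYG tool might emit purely presentational markup such as <font size="4"><b><i>Chapter One</i></b></font> where hand-written semantic markup would more likely read <h2><em>Chapter One</em></h2>, conveying a similar appearance while preserving the heading's meaning for search engines and screen readers.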
Technology
Programming
null
13255
https://en.wikipedia.org/wiki/Hydrogen
Hydrogen
Hydrogen is a chemical element; it has symbol H and atomic number 1. It is the lightest element and, at standard conditions, is a gas of diatomic molecules with the formula H2, sometimes called dihydrogen, hydrogen gas, molecular hydrogen, or simply hydrogen. It is colorless, odorless, non-toxic, and highly combustible. Constituting about 75% of all normal matter, hydrogen is the most abundant chemical element in the universe. Stars, including the Sun, mainly consist of hydrogen in a plasma state, while on Earth, hydrogen is found in water, organic compounds, as dihydrogen, and in other molecular forms. The most common isotope of hydrogen (protium, H) consists of one proton, one electron, and no neutrons. In the early universe, the formation of hydrogen's protons occurred in the first second after the Big Bang; neutral hydrogen atoms only formed about 370,000 years later during the recombination epoch as the universe expanded and plasma had cooled enough for electrons to remain bound to protons. Hydrogen gas was first produced artificially in the early 16th century by the reaction of acids with metals. Henry Cavendish, in 1766–81, identified hydrogen gas as a distinct substance and discovered its property of producing water when burned; hence its name means "water-former" in Greek. Understanding the colors of light absorbed and emitted by hydrogen was a crucial part of developing quantum mechanics. Hydrogen, typically nonmetallic except under extreme pressure, readily forms covalent bonds with most nonmetals, contributing to the formation of compounds like water and various organic substances. Its role is crucial in acid-base reactions, which mainly involve proton exchange among soluble molecules. In ionic compounds, hydrogen can take the form of either a negatively charged anion, where it is known as hydride, or as a positively charged cation, H+, called a proton. Although tightly bonded to water molecules, protons strongly affect the behavior of aqueous solutions, as reflected in the importance of pH. Hydride, on the other hand, is rarely observed because it tends to deprotonate solvents, yielding H2. Industrial hydrogen production occurs through steam reforming of natural gas. The more familiar electrolysis of water is uncommon because it is energy-intensive, i.e. expensive. Its main industrial uses include fossil fuel processing, such as hydrocracking and hydrodesulfurization. Ammonia production also is a major consumer of hydrogen. Fuel cells for electricity generation from hydrogen are rapidly emerging. Properties Combustion Hydrogen gas is highly flammable: 2 H2(g) + O2(g) → 2 H2O(l) + 572 kJ (286 kJ/mol = 141.865 MJ/kg). Enthalpy of combustion: −286 kJ/mol. Hydrogen gas forms explosive mixtures with air in concentrations from 4–74% and with chlorine at 5–95%. The hydrogen autoignition temperature, the temperature of spontaneous ignition in air, is . Flame Pure hydrogen-oxygen flames emit ultraviolet light and, with a high oxygen mix, are nearly invisible to the naked eye, as illustrated by the faint plume of the Space Shuttle Main Engine, compared to the highly visible plume of a Space Shuttle Solid Rocket Booster, which uses an ammonium perchlorate composite. The detection of a burning hydrogen leak may require a flame detector; such leaks can be very dangerous. Hydrogen flames in other conditions are blue, resembling blue natural gas flames. The destruction of the Hindenburg airship was a notorious example of hydrogen combustion and the cause is still debated.
The visible flames in the photographs were the result of carbon compounds in the airship skin burning. Electron energy levels The ground state energy level of the electron in a hydrogen atom is −13.6 eV, equivalent to an ultraviolet photon of roughly 91 nm wavelength. The energy levels of hydrogen are referred to by consecutive quantum numbers, with being the ground state. The hydrogen spectral series corresponds to emission of light due to transitions from higher to lower energy levels. The energy levels of hydrogen can be calculated fairly accurately using the Bohr model of the atom, in which the electron "orbits" the proton, like how Earth orbits the Sun. However, the electron and proton are held together by electrostatic attraction, while planets and celestial objects are held by gravity. Due to the discretization of angular momentum postulated in early quantum mechanics by Bohr, the electron in the Bohr model can only occupy certain allowed distances from the proton, and therefore only certain allowed energies. An accurate description of the hydrogen atom comes from a quantum analysis that uses the Schrödinger equation, Dirac equation or Feynman path integral formulation to calculate the probability density of the electron around the proton. The most complex formulas include the small effects of special relativity and vacuum polarization. In the quantum mechanical treatment, the electron in a ground state hydrogen atom has no angular momentum—illustrating how the "planetary orbit" differs from electron motion. Spin isomers Molecular exists as two nuclear isomers that differ in the spin states of their nuclei. In the orthohydrogen form, the spins of the two nuclei are parallel, forming a spin triplet state having a total molecular spin ; in the parahydrogen form the spins are antiparallel and form a spin singlet state having spin . The equilibrium ratio of ortho- to para-hydrogen depends on temperature. At room temperature or warmer, equilibrium hydrogen gas contains about 25% of the para form and 75% of the ortho form. The ortho form is an excited state, having higher energy than the para form by 1.455 kJ/mol, and it converts to the para form over the course of several minutes when cooled to low temperature. The thermal properties of these isomers differ because each has distinct rotational quantum states. The ortho-to-para ratio in is an important consideration in the liquefaction and storage of liquid hydrogen: the conversion from ortho to para is exothermic and produces sufficient heat to evaporate most of the liquid if not converted first to parahydrogen during the cooling process. Catalysts for the ortho-para interconversion, such as ferric oxide and activated carbon compounds, are used during hydrogen cooling to avoid this loss of liquid. Phases Liquid hydrogen can exist at temperatures below hydrogen's critical point of 33 K. However, for it to be in a fully liquid state at atmospheric pressure, H2 needs to be cooled to . Hydrogen was liquefied by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. Liquid hydrogen is a common rocket propellant, and it can also be used as the fuel for an internal combustion engine or fuel cell. Solid hydrogen can be made at standard pressure, by decreasing the temperature below hydrogen's melting point of . It was collected for the first time by James Dewar in 1899. Multiple distinct solid phases exist, known as Phase I through Phase V, each exhibiting a characteristic molecular arrangement. 
Liquid and solid phases can exist in combination at the triple point, a substance known as slush hydrogen. Metallic hydrogen, a phase obtained at extremely high pressures (in excess of ), is an electrical conductor. It is believed to exist deep within giant planets like Jupiter. When ionized, hydrogen becomes a plasma. This is the form in which hydrogen exists within stars. Isotopes Hydrogen has three naturally occurring isotopes, denoted , and . Other, highly unstable nuclei ( to ) have been synthesized in the laboratory but not observed in nature. is the most common hydrogen isotope, with an abundance of >99.98%. Because the nucleus of this isotope consists of only a single proton, it is given the descriptive but rarely used formal name protium. It is the only stable isotope with no neutrons; see diproton for a discussion of why others do not exist. , the other stable hydrogen isotope, is known as deuterium and contains one proton and one neutron in the nucleus. Nearly all deuterium in the universe is thought to have been produced at the time of the Big Bang, and has endured since then. Deuterium is not radioactive, and is not a significant toxicity hazard. Water enriched in molecules that include deuterium instead of normal hydrogen is called heavy water. Deuterium and its compounds are used as a non-radioactive label in chemical experiments and in solvents for -NMR spectroscopy. Heavy water is used as a neutron moderator and coolant for nuclear reactors. Deuterium is also a potential fuel for commercial nuclear fusion. is known as tritium and contains one proton and two neutrons in its nucleus. It is radioactive, decaying into helium-3 through beta decay with a half-life of 12.32 years. It is radioactive enough to be used in luminous paint to enhance the visibility of data displays, such as for painting the hands and dial-markers of watches. The watch glass prevents the small amount of radiation from escaping the case. Small amounts of tritium are produced naturally by cosmic rays striking atmospheric gases; tritium has also been released in nuclear weapons tests. It is used in nuclear fusion, as a tracer in isotope geochemistry, and in specialized self-powered lighting devices. Tritium has also been used in chemical and biological labeling experiments as a radiolabel. Unique among the elements, distinct names are assigned to its isotopes in common use. During the early study of radioactivity, heavy radioisotopes were given their own names, but these are mostly no longer used. The symbols D and T (instead of and ) are sometimes used for deuterium and tritium, but the symbol P was already used for phosphorus and thus was not available for protium. In its nomenclatural guidelines, the International Union of Pure and Applied Chemistry (IUPAC) allows any of D, T, , and to be used, though and are preferred. The exotic atom muonium (symbol Mu), composed of an antimuon and an electron, can also be considered a light radioisotope of hydrogen. Because muons decay with lifetime , muonium is too unstable for observable chemistry. Nevertheless, muonium compounds are important test cases for quantum simulation, due to the mass difference between the antimuon and the proton, and IUPAC nomenclature incorporates such hypothetical compounds as muonium chloride (MuCl) and sodium muonide (NaMu), analogous to hydrogen chloride and sodium hydride respectively. Antihydrogen () is the antimatter counterpart to hydrogen. It consists of an antiproton with a positron. 
Antihydrogen is the only type of antimatter atom to have been produced. Thermal and physical properties Table of thermal and physical properties of hydrogen (H) at atmospheric pressure: History 18th century In 1671, Irish scientist Robert Boyle discovered and described the reaction between iron filings and dilute acids, which results in the production of hydrogen gas. Boyle did not note that the gas was inflammable, but hydrogen would play a key role in overturning the phlogiston theory of combustion. In 1766, Henry Cavendish was the first to recognize hydrogen gas as a discrete substance, by naming the gas from a metal-acid reaction "inflammable air". He speculated that "inflammable air" was in fact identical to the hypothetical substance "phlogiston", and he further found in 1781 that the gas produces water when burned. He is usually given credit for the discovery of hydrogen as an element. In 1783, Antoine Lavoisier identified the element that came to be known as hydrogen when he and Laplace reproduced Cavendish's finding that water is produced when hydrogen is burned. Lavoisier produced hydrogen for his experiments on mass conservation by reacting steam with metallic iron in an incandescent iron tube heated in a fire. Anaerobic oxidation of iron by the protons of water at high temperature can be schematically represented by the following set of reactions: 1) 2) 3) Many metals react similarly with water leading to the production of hydrogen. In some situations, this H2-producing process is problematic, as is the case of zirconium cladding on nuclear fuel rods. 19th century By 1806 hydrogen was used to fill balloons. François Isaac de Rivaz built the first de Rivaz engine, an internal combustion engine powered by a mixture of hydrogen and oxygen, in 1806. Edward Daniel Clarke invented the hydrogen gas blowpipe in 1819. Döbereiner's lamp and limelight were invented in 1823. Hydrogen was liquefied for the first time by James Dewar in 1898 by using regenerative cooling and his invention, the vacuum flask. He produced solid hydrogen the next year. One of the first quantum effects to be explicitly noticed (but not understood at the time) was James Clerk Maxwell's observation that the specific heat capacity of H2 unaccountably departs from that of a diatomic gas below room temperature and begins to increasingly resemble that of a monatomic gas at cryogenic temperatures. According to quantum theory, this behavior arises from the spacing of the (quantized) rotational energy levels, which are particularly wide-spaced in H2 because of its low mass. These widely spaced levels inhibit equal partition of heat energy into rotational motion in hydrogen at low temperatures. Diatomic gases composed of heavier atoms do not have such widely spaced levels and do not exhibit the same effect. 20th century The existence of the hydride anion was suggested by Gilbert N. Lewis in 1916 for group 1 and 2 salt-like compounds. In 1920, Moers electrolyzed molten lithium hydride (LiH), producing a stoichiometric quantity of hydrogen at the anode. Because of its simple atomic structure, consisting only of a proton and an electron, the hydrogen atom, together with the spectrum of light produced from it or absorbed by it, has been central to the development of the theory of atomic structure. Hydrogen's unique position as the only neutral atom for which the Schrödinger equation can be directly solved has significantly contributed to the understanding of quantum mechanics through the exploration of its energetics.
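For reference, the bound-state energies that come out of that solution (a standard textbook result, quoted here for orientation rather than taken from this article's own wording) are E_n = −13.6 eV / n^2 for n = 1, 2, 3, ..., reproducing the −13.6 eV ground state and the spectral series of emission lines discussed above.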
Furthermore, study of the corresponding simplicity of the hydrogen molecule and the corresponding cation brought understanding of the nature of the chemical bond, which followed shortly after the quantum mechanical treatment of the hydrogen atom had been developed in the mid-1920s. Hydrogen-lifted airship The first hydrogen-filled balloon was invented by Jacques Charles in 1783. Hydrogen provided the lift for the first reliable form of air-travel following the 1852 invention of the first hydrogen-lifted airship by Henri Giffard. German count Ferdinand von Zeppelin promoted the idea of rigid airships lifted by hydrogen that later were called Zeppelins; the first of which had its maiden flight in 1900. Regularly scheduled flights started in 1910 and by the outbreak of World War I in August 1914, they had carried 35,000 passengers without a serious incident. Hydrogen-lifted airships were used as observation platforms and bombers during the war. The first non-stop transatlantic crossing was made by the British airship R34 in 1919. Regular passenger service resumed in the 1920s and the discovery of helium reserves in the United States promised increased safety, but the U.S. government refused to sell the gas for this purpose. Therefore, was used in the Hindenburg airship, which was destroyed in a midair fire over New Jersey on 6 May 1937. The incident was broadcast live on radio and filmed. Ignition of leaking hydrogen is widely assumed to be the cause, but later investigations pointed to the ignition of the aluminized fabric coating by static electricity. But the damage to hydrogen's reputation as a lifting gas was already done and commercial hydrogen airship travel ceased. Hydrogen is still used, in preference to non-flammable but more expensive helium, as a lifting gas for weather balloons. Deuterium and tritium Deuterium was discovered in December 1931 by Harold Urey, and tritium was prepared in 1934 by Ernest Rutherford, Mark Oliphant, and Paul Harteck. Heavy water, which consists of deuterium in the place of regular hydrogen, was discovered by Urey's group in 1932. Hydrogen-cooled turbogenerator The first hydrogen-cooled turbogenerator went into service using gaseous hydrogen as a coolant in the rotor and the stator in 1937 at Dayton, Ohio, owned by the Dayton Power & Light Co. This was justified by the high thermal conductivity and very low viscosity of hydrogen gas, thus lower drag than air. This is the most common coolant used for generators 60 MW and larger; smaller generators are usually air-cooled. Nickel–hydrogen battery The nickel–hydrogen battery was used for the first time in 1977 aboard the U.S. Navy's Navigation technology satellite-2 (NTS-2). The International Space Station, Mars Odyssey and the Mars Global Surveyor are equipped with nickel-hydrogen batteries. In the dark part of its orbit, the Hubble Space Telescope is also powered by nickel-hydrogen batteries, which were finally replaced in May 2009, more than 19 years after launch and 13 years beyond their design life. Chemistry Laboratory syntheses is produced in labs, often as a by-product of other reactions. Many metals react with water to produce , but the rate of hydrogen evolution depends on the metal, the pH, and the presence of alloying agents. Most often, hydrogen evolution is induced by acids. The alkali and alkaline earth metals, aluminium, zinc, manganese, and iron react readily with aqueous acids. 
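A representative example (zinc and dilute hydrochloric acid are chosen here purely for illustration): Zn + 2 HCl → ZnCl2 + H2.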
This reaction is the basis of the Kipp's apparatus, which once was used as a laboratory gas source: In the absence of acid, the evolution of is slower. Because iron is widely used structural material, its anaerobic corrosion is of technological significance: Many metals, such as aluminium, are slow to react with water because they form passivated oxide coatings of oxides. An alloy of aluminium and gallium, however, does react with water. At high pH, aluminium can produce : Reactions of H2 is relatively unreactive. The thermodynamic basis of this low reactivity is the very strong H–H bond, with a bond dissociation energy of 435.7 kJ/mol. It does form coordination complexes called dihydrogen complexes. These species provide insights into the early steps in the interactions of hydrogen with metal catalysts. According to neutron diffraction, the metal and two H atoms form a triangle in these complexes. The H-H bond remains intact but is elongated. They are acidic. Although exotic on Earth, the ion is common in the universe. It is a triangular species, like the aforementioned dihydrogen complexes. It is known as protonated molecular hydrogen or the trihydrogen cation. Hydrogen directly reacts with chlorine, fluorine and bromine to give HF, HCl, and HBr, respectively. The conversion involves a radical chain mechanism. With heating, H2 reacts efficiently with the alkali and alkaline earth metals to give the saline hydrides of the formula MH and MH2, respectively. One of the striking properties of H2 is its inertness toward unsaturated organic compounds, such as alkenes and alkynes. These species only react with H2 in the presence of catalysts. Especially active catalysts are the platinum metals (platinum, rhodium, palladium, etc.). A major driver for the mining of these rare and expensive elements is their use as catalysts. Hydrogen-containing compounds Most known compounds contain hydrogen, not as H2, but as covalently bonded H atoms. This interaction is the basis of organic chemistry and biochemistry.Hydrogen forms many compounds with carbon, called the hydrocarbons. Hydrocarbons are called organic compounds. In nature, they almost always contain "heteroatoms" such as nitrogen, oxygen, and sulfur. The study of their properties is known as organic chemistry and their study in the context of living organisms is called biochemistry. By some definitions, "organic" compounds are only required to contain carbon. However, most of them also contain hydrogen, and because it is the carbon-hydrogen bond that gives this class of compounds most of its particular chemical characteristics, carbon-hydrogen bonds are required in some definitions of the word "organic" in chemistry. Millions of hydrocarbons are known, and they are usually formed by complicated pathways that seldom involve elemental hydrogen. Hydrides Hydrogen forms compounds with less electronegative elements, such as metals and main group elements. In these compounds, hydrogen takes on a partial negative charge. The term "hydride" suggests that the H atom has acquired a negative or anionic character, denoted . Usually hydride refers to hydrogen in a compound with a more electropositive element. For hydrides other than group 1 and 2 metals, the term can be misleading, considering the low electronegativity of hydrogen. A well known hydride is lithium aluminium hydride, the anion carries hydridic centers firmly attached to the Al(III). Perhaps the most extensive series of hydrides are the boranes, compounds consisting only of boron and hydrogen. 
Hydrides can bond to these electropositive elements not only as terminal ligands but also as bridging ligands. In diborane (B2H6), four H's are terminal and two bridge between the two B atoms. Protons and acids When bonded to a more electronegative element, particularly fluorine, oxygen, or nitrogen, hydrogen can participate in a form of medium-strength noncovalent bonding with another electronegative element with a lone pair, a phenomenon called hydrogen bonding that is critical to the stability of many biological molecules. H+ can also be obtained by oxidation of H2. Under the Brønsted–Lowry acid–base theory, acids are proton donors, while bases are proton acceptors. A bare proton essentially cannot exist in anything other than a vacuum. Otherwise it attaches to other atoms, ions, or molecules. Even species as inert as methane can be protonated. The term 'proton' is used loosely and metaphorically to refer to solvated "H+" without any implication that any single protons exist freely as a species. To avoid the implication of the naked proton in solution, acidic aqueous solutions are sometimes considered to contain the "hydronium ion" (H3O+) or, still more accurately, . Other oxonium ions are found when water is in acidic solution with other solvents. Occurrence Cosmic Hydrogen, as atomic H, is the most abundant chemical element in the universe, making up 75% of normal matter by mass and >90% by number of atoms. In astrophysics, neutral hydrogen in the interstellar medium is called H I and ionized hydrogen is called H II. Radiation from stars ionizes H I to H II, creating spheres of ionized H II around stars. In the chronology of the universe, neutral hydrogen dominated until the birth of stars; during the era of reionization, these stars created bubbles of ionized hydrogen that grew and merged over 500 million years. Neutral hydrogen atoms are the source of the 21-cm hydrogen line at 1420 MHz that is detected in order to probe primordial hydrogen. The large amount of neutral hydrogen found in the damped Lyman-alpha systems is thought to dominate the cosmological baryonic density of the universe up to a redshift of z = 4. Hydrogen is found in great abundance in stars and gas giant planets. Molecular clouds of H2 are associated with star formation. Hydrogen plays a vital role in powering stars through the proton-proton reaction in lower-mass stars, and through the CNO cycle of nuclear fusion in the case of stars more massive than the Sun. Hydrogen plasma states have properties quite distinct from those of molecular or atomic hydrogen. As a plasma, hydrogen's electron and proton are not bound together, resulting in very high electrical conductivity and high emissivity (producing the light from the Sun and other stars). The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind they interact with the Earth's magnetosphere giving rise to Birkeland currents and the aurora. A molecular form called protonated molecular hydrogen (H3+) is found in the interstellar medium, where it is generated by ionization of molecular hydrogen from cosmic rays. This ion has also been observed in the upper atmosphere of Jupiter. The ion is long-lived in outer space due to the low temperature and density. H3+ is one of the most abundant ions in the universe, and it plays a notable role in the chemistry of the interstellar medium. Neutral triatomic hydrogen can exist only in an excited form and is unstable. By contrast, the positive hydrogen molecular ion (H2+) is rare in the universe.
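The interstellar chemistry sketched above can be summarized by the usual two-step sequence (standard astrochemistry, supplied here for reference rather than quoted from the article): a cosmic ray first ionizes molecular hydrogen, H2 + cosmic ray → H2+ + e−, and the short-lived H2+ then reacts with another H2 molecule, H2+ + H2 → H3+ + H, which is why H3+ rather than H2+ is the ion commonly observed.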
Terrestrial Under ordinary conditions on Earth, elemental hydrogen exists as the diatomic gas, H2. Hydrogen gas is very rare in Earth's atmosphere (around 0.53 ppm on a molar basis) because of its light weight, which enables it to escape the atmosphere more rapidly than heavier gases. However, hydrogen, usually in the form of water, is the third most abundant element on the Earth's surface, mostly in the form of chemical compounds such as hydrocarbons and water. Despite its low concentration in our atmosphere, terrestrial hydrogen is sufficiently abundant to support the metabolism of several bacteria. Deposits of hydrogen gas have been discovered in several countries including Mali, France and Australia. Production and storage Industrial routes Many methods exist for producing H2, but three dominate commercially: steam reforming often coupled to water-gas shift, partial oxidation of hydrocarbons, and water electrolysis. Steam reforming Hydrogen is mainly produced by steam methane reforming (SMR), the reaction of water and methane. Thus, at high temperature (1000–1400 K, 700–1100 °C or 1300–2000 °F), steam (water vapor) reacts with methane to yield carbon monoxide and H2. Steam reforming is also used for the industrial preparation of ammonia. This reaction is favored at low pressures but is nonetheless conducted at high pressures (2.0 MPa, 20 atm or 600 inHg), because high-pressure H2 is the most marketable product, and pressure swing adsorption (PSA) purification systems work better at higher pressures. The product mixture is known as "synthesis gas" because it is often used directly for the production of methanol and many other compounds. Hydrocarbons other than methane can be used to produce synthesis gas with varying product ratios. One of the many complications to this highly optimized technology is the formation of coke or carbon: Therefore, steam reforming typically employs an excess of steam. Additional hydrogen can be recovered from the steam by using carbon monoxide through the water gas shift reaction (WGS). This process requires an iron oxide catalyst: Hydrogen is sometimes produced and consumed in the same industrial process, without being separated. In the Haber process for ammonia production, hydrogen is generated from natural gas. Partial oxidation of hydrocarbons Other methods for CO and H2 production include partial oxidation of hydrocarbons: Although less important commercially, coal can serve as a prelude to the shift reaction above: Olefin production units may produce substantial quantities of byproduct hydrogen, particularly from cracking light feedstocks like ethane or propane. Water electrolysis Electrolysis of water is a conceptually simple method of producing hydrogen. Commercial electrolyzers use nickel-based catalysts in strongly alkaline solution. Platinum is a better catalyst but is expensive. Electrolysis of brine to yield chlorine also produces high purity hydrogen as a co-product, which is used for a variety of transformations such as hydrogenations. The electrolysis process is more expensive than producing hydrogen from methane without CCS and the efficiency of energy conversion is inherently low. Innovation in hydrogen electrolyzers could make large-scale production of hydrogen from electricity more cost-competitive. Hydrogen produced in this manner could play a significant role in decarbonizing energy systems where there are challenges and limitations to replacing fossil fuels with direct use of electricity.
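For orientation, the overall reactions behind the commercial routes just described can be summarized in standard textbook form (these equations are a hedged summary supplied for reference, not quoted from the article): steam reforming, CH4 + H2O ⇌ CO + 3 H2; water-gas shift, CO + H2O ⇌ CO2 + H2; partial oxidation, 2 CH4 + O2 → 2 CO + 4 H2; water electrolysis, 2 H2O → 2 H2 + O2.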
Methane pyrolysis Hydrogen can be produced by pyrolysis of natural gas (methane). This route has a lower carbon footprint than commercial hydrogen production processes. Developing a commercial methane pyrolysis process could expedite the expanded use of hydrogen in industrial and transportation applications. Methane pyrolysis is accomplished by passing methane through a molten metal catalyst containing dissolved nickel. Methane is converted to hydrogen gas and solid carbon: CH4(g) → C(s) + 2 H2(g) (ΔH° = 74 kJ/mol). The carbon may be sold as a manufacturing feedstock or fuel, or landfilled. Further research continues in several laboratories, including at the Karlsruhe Liquid-metal Laboratory and at the University of California, Santa Barbara. BASF built a methane pyrolysis pilot plant. Thermochemical Water splitting is the process by which water is decomposed into its components. Relevant to the biological scenario is this simple equation: The reaction occurs in the light reactions in all photosynthetic organisms. A few organisms, including the alga Chlamydomonas reinhardtii and cyanobacteria, have evolved a second step in the dark reactions in which protons and electrons are reduced to form H2 gas by specialized hydrogenases in the chloroplast. Efforts have been undertaken to genetically modify cyanobacterial hydrogenases to more efficiently generate H2 gas even in the presence of oxygen. Efforts have also been undertaken with genetically modified algae in a bioreactor. Relevant to the thermal water-splitting scenario is this simple equation: More than 200 thermochemical cycles can be used for water splitting. Many of these cycles such as the iron oxide cycle, cerium(IV) oxide–cerium(III) oxide cycle, zinc–zinc oxide cycle, sulfur-iodine cycle, copper-chlorine cycle and hybrid sulfur cycle have been evaluated for their commercial potential to produce hydrogen and oxygen from water and heat without using electricity. A number of labs (including in France, Germany, Greece, Japan, and the United States) are developing thermochemical methods to produce hydrogen from solar energy and water. Natural routes Biohydrogen is produced by enzymes called hydrogenases. This process allows the host organism to use fermentation as a source of energy. These same enzymes also can oxidize H2, such that the host organisms can subsist by reducing oxidized substrates using electrons extracted from H2. The hydrogenase enzymes feature iron or nickel-iron centers at their active sites. The natural cycle of hydrogen production and consumption by organisms is called the hydrogen cycle. Some bacteria such as Mycobacterium smegmatis can use the small amount of hydrogen in the atmosphere as a source of energy when other sources are lacking. Their hydrogenases are designed with small channels that exclude oxygen and so permit the reaction to occur even though the hydrogen concentration is very low and the oxygen concentration is as in normal air. Confirming the existence of hydrogenases in the human gut, H2 occurs in human breath. The concentration in the breath of fasting people at rest is typically less than 5 parts per million (ppm) but can be 50 ppm when people with intestinal disorders consume molecules they cannot absorb during diagnostic hydrogen breath tests. Serpentinization Serpentinization is a geological mechanism that produces highly reducing conditions. Under these conditions, water is capable of oxidizing ferrous (Fe2+) ions in fayalite.
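Schematically (a commonly quoted idealized equation, given here for illustration rather than taken from the article): 3 Fe2SiO4 + 2 H2O → 2 Fe3O4 + 3 SiO2 + 2 H2.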
The process is of interest because it generates hydrogen gas: Closely related to this geological process is the Schikorr reaction: This process also is relevant to the corrosion of iron and steel in oxygen-free groundwater and in reducing soils below the water table. Storage Hydrogen produced when there is a surplus of variable renewable electricity could in principle be stored and later used to generate heat or to re-generate electricity. The hydrogen created through electrolysis using renewable energy is commonly referred to as "green hydrogen". It can be further transformed into synthetic fuels such as ammonia and methanol. Disadvantages of hydrogen as an energy carrier include high costs of storage and distribution due to hydrogen's explosivity, its large volume compared to other fuels, and its tendency to make pipes brittle. If H2 is to be used as an energy source, its storage is important. It dissolves only poorly in solvents. For example, at room temperature and 0.1 MPa, ca. 0.05 moles of H2 dissolve in one kilogram of diethyl ether. The H2 can be stored in compressed form, although compressing costs energy. Liquefaction is impractical given its low critical temperature. In contrast, ammonia and many hydrocarbons can be liquefied at room temperature under pressure. For these reasons, hydrogen carriers - materials that reversibly bind H2 - have attracted much attention. The key question is then the weight percent of H2-equivalents within the carrier material. For example, hydrogen can be reversibly absorbed into many rare earth and transition metals and is soluble in both nanocrystalline and amorphous metals. Hydrogen solubility in metals is influenced by local distortions or impurities in the crystal lattice. These properties may be useful when hydrogen is purified by passage through hot palladium disks, but the gas's high solubility is also a metallurgical problem, contributing to the embrittlement of many metals, complicating the design of pipelines and storage tanks. The most problematic aspect of metal hydrides for storage is their modest H2 content, often on the order of 1%. For this reason, there is interest in storage of H2 in compounds of low molecular weight. For example, ammonia borane (NH3BH3) contains 19.8 weight percent of H2. The problem with this material is that after release of H2, the resulting boron nitride does not re-add H2, i.e. ammonia borane is an irreversible hydrogen carrier. More attractive, somewhat ironically, are hydrocarbons such as tetrahydroquinoline, which reversibly release some H2 when heated in the presence of a catalyst: Applications Petrochemical industry Large quantities of H2 are used in the "upgrading" of fossil fuels. Key consumers of H2 include hydrodesulfurization and hydrocracking. Many of these reactions can be classified as hydrogenolysis, i.e., the cleavage of bonds by hydrogen. Illustrative is the separation of sulfur from liquid fossil fuels: Hydrogenation Hydrogenation, the addition of H2 to various substrates, is done on a large scale. Hydrogenation of N2 to produce ammonia by the Haber process consumes a few percent of the energy budget in the entire industry. The resulting ammonia is used to supply most of the protein consumed by humans. Hydrogenation is used to convert unsaturated fats and oils to saturated fats and oils. The major application is the production of margarine. Methanol is produced by hydrogenation of carbon dioxide. H2 is similarly the source of hydrogen in the manufacture of hydrochloric acid.
H2 is also used as a reducing agent for the conversion of some ores to the metals. Coolant Hydrogen is commonly used in power stations as a coolant in generators due to a number of favorable properties that are a direct result of its light diatomic molecules. These include low density, low viscosity, and the highest specific heat and thermal conductivity of all gases. Fuel Hydrogen (H2) is widely discussed as a carrier of energy with potential to help decarbonize economies and mitigate greenhouse gas emissions. This scenario requires the efficient production and storage of hydrogen. Hydrogen fuel can produce the intense heat required for industrial production of steel, cement, glass, and chemicals, thus contributing to the decarbonisation of industry alongside other technologies, such as electric arc furnaces for steelmaking. However, it is likely to play a larger role in providing industrial feedstock for cleaner production of ammonia and organic chemicals. For example, in steelmaking, hydrogen could function as a clean energy carrier and also as a low-carbon catalyst, replacing coal-derived coke (carbon): Fe2O3 + 3 H2 → 2 Fe + 3 H2O rather than Fe2O3 + 3 CO → 2 Fe + 3 CO2. Hydrogen used to decarbonise transportation is likely to find its largest applications in shipping, aviation and, to a lesser extent, heavy goods vehicles, through the use of hydrogen-derived synthetic fuels such as ammonia and methanol, and of fuel cell technology. For light-duty vehicles including cars, hydrogen is far behind other alternative fuel vehicles, especially compared with the rate of adoption of battery electric vehicles, and may not play a significant role in the future. Liquid hydrogen and liquid oxygen together serve as cryogenic propellants in liquid-propellant rockets, as in the Space Shuttle main engines. NASA has investigated the use of rocket propellant made from atomic hydrogen, boron or carbon that is frozen into solid molecular hydrogen particles suspended in liquid helium. Upon warming, the mixture vaporizes to allow the atomic species to recombine, heating the mixture to high temperature. Semiconductor industry Hydrogen is employed to saturate broken ("dangling") bonds of amorphous silicon and amorphous carbon, which helps stabilize material properties. It is also a potential electron donor in various oxide materials, including ZnO, CdO, MgO, and a number of other oxides. Niche and evolving uses Shielding gas: Hydrogen is used as a shielding gas in welding methods such as atomic hydrogen welding. Cryogenic research: Liquid hydrogen is used in cryogenic research, including superconductivity studies. Buoyant lifting: Because H2 is only 7% the density of air, it was once widely used as a lifting gas in balloons and airships. Leak detection: Pure or mixed with nitrogen (sometimes called forming gas), hydrogen is a tracer gas for the detection of minute leaks. Applications can be found in the automotive, chemical, power generation, aerospace, and telecommunications industries. Hydrogen is an authorized food additive (E 949) that allows food package leak testing, as well as having anti-oxidizing properties. Neutron moderation: Deuterium (hydrogen-2) is used in nuclear fission applications as a moderator to slow neutrons. Nuclear fusion fuel: Deuterium is used in nuclear fusion reactions. Isotopic labeling: Deuterium compounds have applications in chemistry and biology in studies of isotope effects on reaction rates.
Tritium uses: Tritium (hydrogen-3), produced in nuclear reactors, is used in the production of hydrogen bombs, as an isotopic label in the biosciences, and as a source of beta radiation in radioluminescent paint for instrument dials and emergency signage. Safety and precautions Hydrogen poses few hazards to human safety. The chief hazards are detonation and asphyxiation, but both are mitigated by its high diffusivity. Because hydrogen has been intensively investigated as a fuel, the risks are extensively documented. Because H2 reacts with very few substrates, it is nontoxic, as evidenced by the fact that humans exhale small amounts of it.
Helium
Helium (from helios, the Greek word for the Sun) is a chemical element; it has symbol He and atomic number 2. It is a colorless, odorless, non-toxic, inert, monatomic gas and the first in the noble gas group in the periodic table. Its boiling point is the lowest among all the elements, and it does not have a melting point at standard pressures. It is the second-lightest and second most abundant element in the observable universe, after hydrogen. It is present at about 24% of the total elemental mass, which is more than 12 times the mass of all the heavier elements combined. Its abundance in both the Sun and Jupiter is similar to this figure, because of the very high nuclear binding energy (per nucleon) of helium-4 with respect to the next three elements after helium. This helium-4 binding energy also accounts for why it is a product of both nuclear fusion and radioactive decay. The most common isotope of helium in the universe is helium-4, the vast majority of which was formed during the Big Bang. Large amounts of new helium are created by nuclear fusion of hydrogen in stars. Helium was first detected as an unknown, yellow spectral line signature in sunlight during a solar eclipse in 1868 by Georges Rayet, Captain C. T. Haig, Norman R. Pogson, and Lieutenant John Herschel, and was subsequently confirmed by French astronomer Jules Janssen. Janssen is often jointly credited with detecting the element, along with Norman Lockyer. Janssen recorded the helium spectral line during the solar eclipse of 1868, while Lockyer observed it from Britain. However, only Lockyer proposed that the line was due to a new element, which he named after the Sun. The formal discovery of the element was made in 1895 by chemists Sir William Ramsay, Per Teodor Cleve, and Nils Abraham Langlet, who found helium emanating from the uranium ore cleveite, which is now not regarded as a separate mineral species but as a variety of uraninite. In 1903, large reserves of helium were found in natural gas fields in parts of the United States, by far the largest supplier of the gas today. Liquid helium is used in cryogenics (its largest single use, consuming about a quarter of production), and in the cooling of superconducting magnets, with its main commercial application in MRI scanners. Helium's other industrial uses (as a pressurizing and purge gas, as a protective atmosphere for arc welding, and in processes such as growing crystals to make silicon wafers) account for half of the gas produced. A small but well-known use is as a lifting gas in balloons and airships. As with any gas whose density differs from that of air, inhaling a small volume of helium temporarily changes the timbre and quality of the human voice. In scientific research, the behavior of the two fluid phases of helium-4 (helium I and helium II) is important to researchers studying quantum mechanics (in particular the property of superfluidity) and to those looking at the phenomena, such as superconductivity, produced in matter near absolute zero. On Earth, it is relatively rare: 5.2 ppm by volume in the atmosphere. Most terrestrial helium present today is created by the natural radioactive decay of heavy radioactive elements (thorium and uranium, although there are other examples), as the alpha particles emitted by such decays consist of helium-4 nuclei. This radiogenic helium is trapped with natural gas in concentrations as great as 7% by volume, from which it is extracted commercially by a low-temperature separation process called fractional distillation.
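As an illustration of the radiogenic origin just described, the complete uranium-238 decay chain emits eight alpha particles, i.e. eight helium-4 nuclei; summed over the whole chain (a standard result of nuclear chemistry, added here for illustration):

$$\mathrm{^{238}_{92}U \;\longrightarrow\; ^{206}_{82}Pb \;+\; 8\,^{4}_{2}He \;+\; 6\,\beta^{-}}$$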
Terrestrial helium is a non-renewable resource because once released into the atmosphere, it promptly escapes into space. Its supply is thought to be rapidly diminishing. However, some studies suggest that helium produced deep in the Earth by radioactive decay can collect in natural gas reserves in larger-than-expected quantities, in some cases having been released by volcanic activity. History Scientific discoveries The first evidence of helium was observed on August 18, 1868, as a bright yellow line with a wavelength of 587.49 nanometers in the spectrum of the chromosphere of the Sun. The line was detected by French astronomer Jules Janssen during a total solar eclipse in Guntur, India. This line was initially assumed to be sodium. On October 20 of the same year, English astronomer Norman Lockyer observed a yellow line in the solar spectrum, which he named the D3 because it was near the known D1 and D2 Fraunhofer lines of sodium. He concluded that it was caused by an element in the Sun unknown on Earth. Lockyer named the element with the Greek word for the Sun, ἥλιος (helios). It is sometimes said that English chemist Edward Frankland was also involved in the naming, but this is unlikely as he doubted the existence of this new element. The ending "-ium" is unusual, as it normally applies only to metallic elements; probably Lockyer, being an astronomer, was unaware of the chemical conventions. In 1881, Italian physicist Luigi Palmieri detected helium on Earth for the first time through its D3 spectral line, when he analyzed a material that had been sublimated during a recent eruption of Mount Vesuvius. On March 26, 1895, Scottish chemist Sir William Ramsay isolated helium on Earth by treating the mineral cleveite (a variety of uraninite with at least 10% rare-earth elements) with mineral acids. Ramsay was looking for argon but, after separating nitrogen and oxygen from the gas, liberated by sulfuric acid, he noticed a bright yellow line that matched the D3 line observed in the spectrum of the Sun. These samples were identified as helium by Lockyer and British physicist William Crookes. It was independently isolated from cleveite in the same year by chemists Per Teodor Cleve and Abraham Langlet in Uppsala, Sweden, who collected enough of the gas to accurately determine its atomic weight. Helium was also isolated by American geochemist William Francis Hillebrand prior to Ramsay's discovery, when he noticed unusual spectral lines while testing a sample of the mineral uraninite. Hillebrand, however, attributed the lines to nitrogen. His letter of congratulations to Ramsay offers an interesting case of discovery, and near-discovery, in science. In 1907, Ernest Rutherford and Thomas Royds demonstrated that alpha particles are helium nuclei by allowing the particles to penetrate the thin glass wall of an evacuated tube, then creating a discharge in the tube, to study the spectrum of the new gas inside. In 1908, helium was first liquefied by Dutch physicist Heike Kamerlingh Onnes by cooling the gas to less than . He tried to solidify it by further reducing the temperature but failed, because helium does not solidify at atmospheric pressure. Onnes' student Willem Hendrik Keesom was eventually able to solidify 1 cm3 of helium in 1926 by applying additional external pressure. In 1913, Niels Bohr published his "trilogy" on atomic structure that included a reconsideration of the Pickering–Fowler series as central evidence in support of his model of the atom. 
This series is named for Edward Charles Pickering, who in 1896 published observations of previously unknown lines in the spectrum of the star ζ Puppis (these are now known to occur with Wolf–Rayet and other hot stars). Pickering attributed the observation (lines at 4551, 5411, and 10123 Å) to a new form of hydrogen with half-integer transition levels. In 1912, Alfred Fowler managed to produce similar lines from a hydrogen-helium mixture, and supported Pickering's conclusion as to their origin. Bohr's model does not allow for half-integer transitions (nor does quantum mechanics) and Bohr concluded that Pickering and Fowler were wrong, and instead assigned these spectral lines to ionised helium, He+. Fowler was initially skeptical but was ultimately convinced that Bohr was correct, and by 1915 "spectroscopists had transferred [the Pickering–Fowler series] definitively [from hydrogen] to helium." Bohr's theoretical work on the Pickering series had demonstrated the need for "a re-examination of problems that seemed already to have been solved within classical theories" and provided important confirmation for his atomic theory. In 1938, Russian physicist Pyotr Leonidovich Kapitsa discovered that helium-4 has almost no viscosity at temperatures near absolute zero, a phenomenon now called superfluidity. This phenomenon is related to Bose–Einstein condensation. In 1972, the same phenomenon was observed in helium-3, but at temperatures much closer to absolute zero, by American physicists Douglas D. Osheroff, David M. Lee, and Robert C. Richardson. The phenomenon in helium-3 is thought to be related to pairing of helium-3 fermions to make bosons, in analogy to Cooper pairs of electrons producing superconductivity. In 1961, Vignos and Fairbank reported the existence of a different phase of solid helium-4, designated the gamma-phase. It exists for a narrow range of pressure between 1.45 and 1.78 K. Extraction and use After an oil drilling operation in 1903 in Dexter, Kansas produced a gas geyser that would not burn, Kansas state geologist Erasmus Haworth collected samples of the escaping gas and took them back to the University of Kansas at Lawrence where, with the help of chemists Hamilton Cady and David McFarland, he discovered that the gas consisted of, by volume, 72% nitrogen, 15% methane (a combustible percentage only with sufficient oxygen), 1% hydrogen, and 12% an unidentifiable gas. With further analysis, Cady and McFarland discovered that 1.84% of the gas sample was helium. This showed that despite its overall rarity on Earth, helium was concentrated in large quantities under the American Great Plains, available for extraction as a byproduct of natural gas. Following a suggestion by Sir Richard Threlfall, the United States Navy sponsored three small experimental helium plants during World War I. The goal was to supply barrage balloons with the non-flammable, lighter-than-air gas. A total of of 92% helium was produced in the program even though less than a cubic meter of the gas had previously been obtained. Some of this gas was used in the world's first helium-filled airship, the U.S. Navy's C-class blimp C-7, which flew its maiden voyage from Hampton Roads, Virginia, to Bolling Field in Washington, D.C., on December 1, 1921, nearly two years before the Navy's first rigid helium-filled airship, the Naval Aircraft Factory-built USS Shenandoah, flew in September 1923. 
Although the extraction process using low-temperature gas liquefaction was not developed in time to be significant during World War I, production continued. Helium was primarily used as a lifting gas in lighter-than-air craft. During World War II, the demand increased for helium for lifting gas and for shielded arc welding. The helium mass spectrometer was also vital in the atomic bomb Manhattan Project. The government of the United States set up the National Helium Reserve in 1925 at Amarillo, Texas, with the goal of supplying military airships in time of war and commercial airships in peacetime. Because of the Helium Act of 1925, which banned the export of scarce helium on which the US then had a production monopoly, together with the prohibitive cost of the gas, German Zeppelins were forced to use hydrogen as lifting gas, which would gain infamy in the Hindenburg disaster. The helium market after World War II was depressed but the reserve was expanded in the 1950s to ensure a supply of liquid helium as a coolant to create oxygen/hydrogen rocket fuel (among other uses) during the Space Race and Cold War. Helium use in the United States in 1965 was more than eight times the peak wartime consumption. After the Helium Acts Amendments of 1960 (Public Law 86–777), the U.S. Bureau of Mines arranged for five private plants to recover helium from natural gas. For this helium conservation program, the Bureau built a pipeline from Bushton, Kansas, to connect those plants with the government's partially depleted Cliffside gas field near Amarillo, Texas. This helium-nitrogen mixture was injected and stored in the Cliffside gas field until needed, at which time it was further purified. By 1995, a billion cubic meters of the gas had been collected and the reserve was US$1.4 billion in debt, prompting the Congress of the United States in 1996 to discontinue the reserve. The resulting Helium Privatization Act of 1996 (Public Law 104–273) directed the United States Department of the Interior to empty the reserve, with sales starting by 2005. Helium produced between 1930 and 1945 was about 98.3% pure (2% nitrogen), which was adequate for airships. In 1945, a small amount of 99.9% helium was produced for welding use. By 1949, commercial quantities of Grade A 99.95% helium were available. For many years, the United States produced more than 90% of commercially usable helium in the world, while extraction plants in Canada, Poland, Russia, and other nations produced the remainder. In the mid-1990s, a new plant in Arzew, Algeria, producing began operation, with enough production to cover all of Europe's demand. Meanwhile, by 2000, the consumption of helium within the U.S. had risen to more than 15 million kg per year. In 2004–2006, additional plants in Ras Laffan, Qatar, and Skikda, Algeria were built. Algeria quickly became the second leading producer of helium. Through this time, both helium consumption and the costs of producing helium increased. From 2002 to 2007 helium prices doubled. , the United States National Helium Reserve accounted for 30 percent of the world's helium. The reserve was expected to run out of helium in 2018. Despite that, a proposed bill in the United States Senate would allow the reserve to continue to sell the gas. Other large reserves were in the Hugoton in Kansas, United States, and nearby gas fields of Kansas and the panhandles of Texas and Oklahoma. 
New helium plants were scheduled to open in 2012 in Qatar, Russia, and the US state of Wyoming, but they were not expected to ease the shortage. In 2013, Qatar started up the world's largest helium unit, although the 2017 Qatar diplomatic crisis severely affected helium production there. 2014 was widely acknowledged to be a year of over-supply in the helium business, following years of renowned shortages. Nasdaq reported (2015) that for Air Products, an international corporation that sells gases for industrial use, helium volumes remain under economic pressure due to feedstock supply constraints. Characteristics Atom In quantum mechanics In the perspective of quantum mechanics, helium is the second simplest atom to model, following the hydrogen atom. Helium is composed of two electrons in atomic orbitals surrounding a nucleus containing two protons and (usually) two neutrons. As in Newtonian mechanics, no system that consists of more than two particles can be solved with an exact analytical mathematical approach (see 3-body problem) and helium is no exception. Thus, numerical mathematical methods are required, even to solve the system of one nucleus and two electrons. Such computational chemistry methods have been used to create a quantum mechanical picture of helium electron binding which is accurate to within < 2% of the correct value, in a few computational steps. Such models show that each electron in helium partly screens the nucleus from the other, so that the effective nuclear charge Zeff which each electron sees is about 1.69 units, not the 2 charges of a classic "bare" helium nucleus. Related stability of the helium-4 nucleus and electron shell The nucleus of the helium-4 atom is identical with an alpha particle. High-energy electron-scattering experiments show its charge to decrease exponentially from a maximum at a central point, exactly as does the charge density of helium's own electron cloud. This symmetry reflects similar underlying physics: the pair of neutrons and the pair of protons in helium's nucleus obey the same quantum mechanical rules as do helium's pair of electrons (although the nuclear particles are subject to a different nuclear binding potential), so that all these fermions fully occupy 1s orbitals in pairs, none of them possessing orbital angular momentum, and each cancelling the other's intrinsic spin. This arrangement is thus energetically extremely stable for all these particles and has astrophysical implications. Namely, adding another particle – proton, neutron, or alpha particle – would consume rather than release energy; all systems with mass number 5, as well as beryllium-8 (comprising two alpha particles), are unbound. For example, the stability and low energy of the electron cloud state in helium accounts for the element's chemical inertness, and also the lack of interaction of helium atoms with each other, producing the lowest melting and boiling points of all the elements. In a similar way, the particular energetic stability of the helium-4 nucleus, produced by similar effects, accounts for the ease of helium-4 production in atomic reactions that involve either heavy-particle emission or fusion. Some stable helium-3 (two protons and one neutron) is produced in fusion reactions from hydrogen, though its estimated abundance in the universe is about relative to helium-4. 
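The mutual screening described earlier in this section can be made concrete with two standard textbook values, quoted here only for illustration: removing the electron from the one-electron ion He+, which the Bohr model treats exactly, costs Z² × 13.6 eV = 54.4 eV, whereas removing the first electron from neutral helium costs only about 24.6 eV, because the remaining electron partially shields the nucleus:

$$E_{\mathrm{ion}}(\mathrm{He^{+}}) = 54.4\ \mathrm{eV} \;\gg\; E_{\mathrm{ion}}(\mathrm{He}) \approx 24.6\ \mathrm{eV}$$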
The unusual stability of the helium-4 nucleus is also important cosmologically: it explains the fact that in the first few minutes after the Big Bang, as the "soup" of free protons and neutrons which had initially been created in about 6:1 ratio cooled to the point that nuclear binding was possible, almost all first compound atomic nuclei to form were helium-4 nuclei. Owing to the relatively tight binding of helium-4 nuclei, its production consumed nearly all of the free neutrons in a few minutes, before they could beta-decay, and thus few neutrons were available to form heavier atoms such as lithium, beryllium, or boron. Helium-4 nuclear binding per nucleon is stronger than in any of these elements (see nucleogenesis and binding energy) and thus, once helium had been formed, no energetic drive was available to make elements 3, 4 and 5. It is barely energetically favorable for helium to fuse into the next element with a lower energy per nucleon, carbon. However, due to the short lifetime of the intermediate beryllium-8, this process requires three helium nuclei striking each other nearly simultaneously (see triple-alpha process). There was thus no time for significant carbon to be formed in the few minutes after the Big Bang, before the early expanding universe cooled to the temperature and pressure point where helium fusion to carbon was no longer possible. This left the early universe with a very similar ratio of hydrogen/helium as is observed today (3 parts hydrogen to 1 part helium-4 by mass), with nearly all the neutrons in the universe trapped in helium-4. All heavier elements (including those necessary for rocky planets like the Earth, and for carbon-based or other life) have thus been created since the Big Bang in stars which were hot enough to fuse helium itself. All elements other than hydrogen and helium today account for only 2% of the mass of atomic matter in the universe. Helium-4, by contrast, comprises about 24% of the mass of the universe's ordinary matter—nearly all the ordinary matter that is not hydrogen. Gas and plasma phases Helium is the second least reactive noble gas after neon, and thus the second least reactive of all elements. It is chemically inert and monatomic in all standard conditions. Because of helium's relatively low molar (atomic) mass, its thermal conductivity, specific heat, and sound speed in the gas phase are all greater than any other gas except hydrogen. For these reasons and the small size of helium monatomic molecules, helium diffuses through solids at a rate three times that of air and around 65% that of hydrogen. Helium is the least water-soluble monatomic gas, and one of the least water-soluble of any gas (CF4, SF6, and C4F8 have lower mole fraction solubilities: 0.3802, 0.4394, and 0.2372 x2/10−5, respectively, versus helium's 0.70797 x2/10−5), and helium's index of refraction is closer to unity than that of any other gas. Helium has a negative Joule–Thomson coefficient at normal ambient temperatures, meaning it heats up when allowed to freely expand. Only below its Joule–Thomson inversion temperature (of about 32 to 50 K at 1 atmosphere) does it cool upon free expansion. Once precooled below this temperature, helium can be liquefied through expansion cooling. Most extraterrestrial helium is plasma in stars, with properties quite different from those of atomic helium. In a plasma, helium's electrons are not bound to its nucleus, resulting in very high electrical conductivity, even when the gas is only partially ionized. 
The charged particles are highly influenced by magnetic and electric fields. For example, in the solar wind together with ionized hydrogen, the particles interact with the Earth's magnetosphere, giving rise to Birkeland currents and the aurora. Liquid phase Helium liquifies when cooled below 4.2 K at atmospheric pressure. Unlike any other element, however, helium remains liquid down to a temperature of absolute zero. This is a direct effect of quantum mechanics: specifically, the zero point energy of the system is too high to allow freezing. Pressures above about 25 atmospheres are required to freeze it. There are two liquid phases: Helium I is a conventional liquid, and Helium II, which occurs at a lower temperature, is a superfluid. Helium I Below its boiling point of and above the lambda point of , the isotope helium-4 exists in a normal colorless liquid state, called helium I. Like other cryogenic liquids, helium I boils when it is heated and contracts when its temperature is lowered. Below the lambda point, however, helium does not boil, and it expands as the temperature is lowered further. Helium I has a gas-like index of refraction of 1.026 which makes its surface so hard to see that floats of Styrofoam are often used to show where the surface is. This colorless liquid has a very low viscosity and a density of 0.145–0.125 g/mL (between about 0 and 4 K), which is only one-fourth the value expected from classical physics. Quantum mechanics is needed to explain this property and thus both states of liquid helium (helium I and helium II) are called quantum fluids, meaning they display atomic properties on a macroscopic scale. This may be an effect of its boiling point being so close to absolute zero, preventing random molecular motion (thermal energy) from masking the atomic properties. Helium II Liquid helium below its lambda point (called helium II) exhibits very unusual characteristics. Due to its high thermal conductivity, when it boils, it does not bubble but rather evaporates directly from its surface. Helium-3 also has a superfluid phase, but only at much lower temperatures; as a result, less is known about the properties of the isotope. Helium II is a superfluid, a quantum mechanical state of matter with strange properties. For example, when it flows through capillaries as thin as 10 to 100 nm it has no measurable viscosity. However, when measurements were done between two moving discs, a viscosity comparable to that of gaseous helium was observed. Existing theory explains this using the two-fluid model for helium II. In this model, liquid helium below the lambda point is viewed as containing a proportion of helium atoms in a ground state, which are superfluid and flow with exactly zero viscosity, and a proportion of helium atoms in an excited state, which behave more like an ordinary fluid. In the fountain effect, a chamber is constructed which is connected to a reservoir of helium II by a sintered disc through which superfluid helium leaks easily but through which non-superfluid helium cannot pass. If the interior of the container is heated, the superfluid helium changes to non-superfluid helium. In order to maintain the equilibrium fraction of superfluid helium, superfluid helium leaks through and increases the pressure, causing liquid to fountain out of the container. The thermal conductivity of helium II is greater than that of any other known substance, a million times that of helium I and several hundred times that of copper. 
This is because heat conduction occurs by an exceptional quantum mechanism. Most materials that conduct heat well have a valence band of free electrons which serve to transfer the heat. Helium II has no such valence band but nevertheless conducts heat well. The flow of heat is governed by equations that are similar to the wave equation used to characterize sound propagation in air. When heat is introduced, it moves at 20 meters per second at 1.8 K through helium II as waves in a phenomenon known as second sound. Helium II also exhibits a creeping effect. When a surface extends past the level of helium II, the helium II moves along the surface, against the force of gravity. Helium II will escape from a vessel that is not sealed by creeping along the sides until it reaches a warmer region where it evaporates. It moves in a 30 nm-thick film regardless of surface material. This film is called a Rollin film and is named after the man who first characterized this trait, Bernard V. Rollin. As a result of this creeping behavior and helium II's ability to leak rapidly through tiny openings, it is very difficult to confine. Unless the container is carefully constructed, the helium II will creep along the surfaces and through valves until it reaches somewhere warmer, where it will evaporate. Waves propagating across a Rollin film are governed by the same equation as gravity waves in shallow water, but rather than gravity, the restoring force is the van der Waals force. These waves are known as third sound. Solid phases Helium remains liquid down to absolute zero at atmospheric pressure, but it freezes at high pressure. Solid helium requires a temperature of 1–1.5 K (about −272 °C or −457 °F) at about 25 bar (2.5 MPa) of pressure. It is often hard to distinguish solid from liquid helium since the refractive index of the two phases are nearly the same. The solid has a sharp melting point and has a crystalline structure, but it is highly compressible; applying pressure in a laboratory can decrease its volume by more than 30%. With a bulk modulus of about 27 MPa it is ~100 times more compressible than water. Solid helium has a density of at 1.15 K and 66 atm; the projected density at 0 K and 25 bar (2.5 MPa) is . At higher temperatures, helium will solidify with sufficient pressure. At room temperature, this requires about 114,000 atm. Helium-4 and helium-3 both form several crystalline solid phases, all requiring at least 25 bar. They both form an α phase, which has a hexagonal close-packed (hcp) crystal structure, a β phase, which is face-centered cubic (fcc), and a γ phase, which is body-centered cubic (bcc). Isotopes There are nine known isotopes of helium of which two, helium-3 and helium-4, are stable. In the Earth's atmosphere, one atom is for every million that are . Unlike most elements, helium's isotopic abundance varies greatly by origin, due to the different formation processes. The most common isotope, helium-4, is produced on Earth by alpha decay of heavier radioactive elements; the alpha particles that emerge are fully ionized helium-4 nuclei. Helium-4 is an unusually stable nucleus because its nucleons are arranged into complete shells. It was also formed in enormous quantities during Big Bang nucleosynthesis. Helium-3 is present on Earth only in trace amounts. Most of it has been present since Earth's formation, though some falls to Earth trapped in cosmic dust. Trace amounts are also produced by the beta decay of tritium. 
Rocks from the Earth's crust have isotope ratios varying by as much as a factor of ten, and these ratios can be used to investigate the origin of rocks and the composition of the Earth's mantle. is much more abundant in stars as a product of nuclear fusion. Thus in the interstellar medium, the proportion of to is about 100 times higher than on Earth. Extraplanetary material, such as lunar and asteroid regolith, have trace amounts of helium-3 from being bombarded by solar winds. The Moon's surface contains helium-3 at concentrations on the order of 10 ppb, much higher than the approximately 5 ppt found in the Earth's atmosphere. A number of people, starting with Gerald Kulcinski in 1986, have proposed to explore the Moon, mine lunar regolith, and use the helium-3 for fusion. Liquid helium-4 can be cooled to about using evaporative cooling in a 1-K pot. Similar cooling of helium-3, which has a lower boiling point, can achieve about in a helium-3 refrigerator. Equal mixtures of liquid and below separate into two immiscible phases due to their dissimilarity (they follow different quantum statistics: helium-4 atoms are bosons while helium-3 atoms are fermions). Dilution refrigerators use this immiscibility to achieve temperatures of a few millikelvins. It is possible to produce exotic helium isotopes, which rapidly decay into other substances. The shortest-lived heavy helium isotope is the unbound helium-10 with a half-life of . Helium-6 decays by emitting a beta particle and has a half-life of 0.8 second. Helium-7 and helium-8 are created in certain nuclear reactions. Helium-6 and helium-8 are known to exhibit a nuclear halo. Properties Table of thermal and physical properties of helium gas at atmospheric pressure: Compounds Helium has a valence of zero and is chemically unreactive under all normal conditions. It is an electrical insulator unless ionized. As with the other noble gases, helium has metastable energy levels that allow it to remain ionized in an electrical discharge with a voltage below its ionization potential. Helium can form unstable compounds, known as excimers, with tungsten, iodine, fluorine, sulfur, and phosphorus when it is subjected to a glow discharge, to electron bombardment, or reduced to plasma by other means. The molecular compounds HeNe, HgHe10, and WHe2, and the molecular ions , , , and have been created this way. HeH+ is also stable in its ground state but is extremely reactive—it is the strongest Brønsted acid known, and therefore can exist only in isolation, as it will protonate any molecule or counteranion it contacts. This technique has also produced the neutral molecule He2, which has a large number of band systems, and HgHe, which is apparently held together only by polarization forces. Van der Waals compounds of helium can also be formed with cryogenic helium gas and atoms of some other substance, such as LiHe and He2. Theoretically, other true compounds may be possible, such as helium fluorohydride (HHeF), which would be analogous to HArF, discovered in 2000. Calculations show that two new compounds containing a helium-oxygen bond could be stable. Two new molecular species, predicted using theory, CsFHeO and N(CH3)4FHeO, are derivatives of a metastable FHeO− anion first theorized in 2005 by a group from Taiwan. Helium atoms have been inserted into the hollow carbon cage molecules (the fullerenes) by heating under high pressure. The endohedral fullerene molecules formed are stable at high temperatures. 
When chemical derivatives of these fullerenes are formed, the helium stays inside. If helium-3 is used, it can be readily observed by helium nuclear magnetic resonance spectroscopy. Many fullerenes containing helium-3 have been reported. Although the helium atoms are not attached by covalent or ionic bonds, these substances have distinct properties and a definite composition, like all stoichiometric chemical compounds. Under high pressures helium can form compounds with various other elements. Helium-nitrogen clathrate (He(N2)11) crystals have been grown at room temperature at pressures ca. 10 GPa in a diamond anvil cell. The insulating electride Na2He has been shown to be thermodynamically stable at pressures above 113 GPa. It has a fluorite structure. Occurrence and production Natural abundance Although it is rare on Earth, helium is the second most abundant element in the known Universe, constituting 23% of its baryonic mass. Only hydrogen is more abundant. The vast majority of helium was formed by Big Bang nucleosynthesis one to three minutes after the Big Bang. As such, measurements of its abundance contribute to cosmological models. In stars, it is formed by the nuclear fusion of hydrogen in proton–proton chain reactions and the CNO cycle, part of stellar nucleosynthesis. In the Earth's atmosphere, the concentration of helium by volume is only 5.2 parts per million. The concentration is low and fairly constant despite the continuous production of new helium because most helium in the Earth's atmosphere escapes into space by several processes. In the Earth's heterosphere, a part of the upper atmosphere, helium and hydrogen are the most abundant elements. Most helium on Earth is a result of radioactive decay. Helium is found in large amounts in minerals of uranium and thorium, including uraninite and its varieties cleveite and pitchblende, carnotite and monazite (a group name; "monazite" usually refers to monazite-(Ce)), because they emit alpha particles (helium nuclei, He2+) to which electrons immediately combine as soon as the particle is stopped by the rock. In this way an estimated 3000 metric tons of helium are generated per year throughout the lithosphere. In the Earth's crust, the concentration of helium is 8 parts per billion. In seawater, the concentration is only 4 parts per trillion. There are also small amounts in mineral springs, volcanic gas, and meteoric iron. Because helium is trapped in the subsurface under conditions that also trap natural gas, the greatest natural concentrations of helium on the planet are found in natural gas, from which most commercial helium is extracted. The concentration varies in a broad range from a few ppm to more than 7% in a small gas field in San Juan County, New Mexico. , the world's helium reserves were estimated at 31 billion cubic meters, with a third of that being in Qatar. In 2015 and 2016 additional probable reserves were announced to be under the Rocky Mountains in North America and in the East African Rift. The Bureau of Land Management (BLM) has proposed an October 2024 plan for managing natural resources in western Colorado. The plan involves closing 543,000 acres to oil and gas leasing while keeping 692,300 acres open. Among the open areas, 165,700 acres have been identified as suitable for helium recovery. The United States possesses an estimated 306 billion cubic feet of recoverable helium, sufficient to meet current consumption rates of 2.15 billion cubic feet per year for approximately 150 years. 
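The roughly 150-year supply horizon quoted above follows directly from dividing the reserve estimate by the stated consumption rate:

$$\frac{306\ \text{billion ft}^{3}}{2.15\ \text{billion ft}^{3}/\text{yr}} \approx 142\ \text{years},$$

i.e. on the order of 150 years at current rates of use.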
Modern extraction and distribution For large-scale use, helium is extracted by fractional distillation from natural gas, which can contain as much as 7% helium. Since helium has a lower boiling point than any other element, low temperatures and high pressure are used to liquefy nearly all the other gases (mostly nitrogen and methane). The resulting crude helium gas is purified by successive exposures to lowering temperatures, in which almost all of the remaining nitrogen and other gases are precipitated out of the gaseous mixture. Activated charcoal is used as a final purification step, usually resulting in 99.995% pure Grade-A helium. The principal impurity in Grade-A helium is neon. In a final production step, most of the helium that is produced is liquefied via a cryogenic process. This is necessary for applications requiring liquid helium and also allows helium suppliers to reduce the cost of long-distance transportation, as the largest liquid helium containers have more than five times the capacity of the largest gaseous helium tube trailers. In 2008, approximately 169 million standard cubic meters (SCM) of helium were extracted from natural gas or withdrawn from helium reserves, with approximately 78% from the United States, 10% from Algeria, and most of the remainder from Russia, Poland, and Qatar. By 2013, increases in helium production in Qatar (under the company Qatargas managed by Air Liquide) had increased Qatar's fraction of world helium production to 25%, making it the second largest exporter after the United States. An estimated deposit of helium was found in Tanzania in 2016. A large-scale helium plant was opened in Ningxia, China in 2020. In the United States, most helium is extracted from the natural gas of the Hugoton and nearby gas fields in Kansas, Oklahoma, and the Panhandle Field in Texas. Much of this gas was once sent by pipeline to the National Helium Reserve, but since 2005, this reserve has been depleted and sold off, and it is expected to be largely depleted by 2021 under the October 2013 Responsible Helium Administration and Stewardship Act (H.R. 527). The helium fields of the western United States are emerging as an alternate source of helium supply, particularly those of the "Four Corners" region (the states of Arizona, Colorado, New Mexico and Utah). Diffusion of crude natural gas through special semipermeable membranes and other barriers is another method to recover and purify helium. In 1996, the U.S. had proven helium reserves in such gas well complexes of about 147 billion standard cubic feet (4.2 billion SCM). At rates of use at that time (72 million SCM per year in the U.S.; see pie chart below) this would have been enough helium for about 58 years of U.S. use, and less than this (perhaps 80% of the time) at world use rates, although factors in saving and processing impact effective reserve numbers. Helium is generally extracted from natural gas because it is present in air at only a fraction of that of neon, yet the demand for it is far higher. It is estimated that if all neon production were retooled to save helium, 0.1% of the world's helium demands would be satisfied. Similarly, only 1% of the world's helium demands could be satisfied by re-tooling all air distillation plants. Helium can be synthesized by bombardment of lithium or boron with high-velocity protons, or by bombardment of lithium with deuterons, but these processes are a completely uneconomical method of production. Helium is commercially available in either liquid or gaseous form. 
As a liquid, it can be supplied in small insulated containers called dewars which hold as much as 1,000 liters of helium, or in large ISO containers, which have nominal capacities as large as 42 m3 (around 11,000 U.S. gallons). In gaseous form, small quantities of helium are supplied in high-pressure cylinders holding as much as 8 m3 (approximately . 282 standard cubic feet), while large quantities of high-pressure gas are supplied in tube trailers, which have capacities of as much as 4,860 m3 (approx. 172,000 standard cubic feet). Conservation advocates According to helium conservationists like Nobel laureate physicist Robert Coleman Richardson, writing in 2010, the free market price of helium has contributed to "wasteful" usage (e.g. for helium balloons). Prices in the 2000s had been lowered by the decision of the U.S. Congress to sell off the country's large helium stockpile by 2015. According to Richardson, the price needed to be multiplied by 20 to eliminate the excessive wasting of helium. In the 2012 Nuttall et al. paper titled "Stop squandering helium", it was also proposed to create an International Helium Agency that would build a sustainable market for "this precious commodity". Applications While balloons are perhaps the best-known use of helium, they are a minor part of all helium use. Helium is used for many purposes that require some of its unique properties, such as its low boiling point, low density, low solubility, high thermal conductivity, or inertness. Of the 2014 world helium total production of about 32 million kg (180 million standard cubic meters) helium per year, the largest use (about 32% of the total in 2014) is in cryogenic applications, most of which involves cooling the superconducting magnets in medical MRI scanners and NMR spectrometers. Other major uses were pressurizing and purging systems, welding, maintenance of controlled atmospheres, and leak detection. Other uses by category were relatively minor fractions. Controlled atmospheres Helium is used as a protective gas in growing silicon and germanium crystals, in titanium and zirconium production, and in gas chromatography, because it is inert. Because of its inertness, thermally and calorically perfect nature, high speed of sound, and high value of the heat capacity ratio, it is also useful in supersonic wind tunnels and impulse facilities. Gas tungsten arc welding Helium is used as a shielding gas in arc welding processes on materials that, at welding temperatures are contaminated and weakened by air or nitrogen. A number of inert shielding gases are used in gas tungsten arc welding, but helium is used instead of cheaper argon especially for welding materials that have higher heat conductivity, like aluminium or copper. Minor uses Industrial leak detection One industrial application for helium is leak detection. Because helium diffuses through solids three times faster than air, it is used as a tracer gas to detect leaks in high-vacuum equipment (such as cryogenic tanks) and high-pressure containers. The tested object is placed in a chamber, which is then evacuated and filled with helium. The helium that escapes through the leaks is detected by a sensitive device (helium mass spectrometer), even at the leak rates as small as 10−9 mbar·L/s (10−10 Pa·m3/s). The measurement procedure is normally automatic and is called helium integral test. A simpler procedure is to fill the tested object with helium and to manually search for leaks with a hand-held device. 
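The two leak-rate figures quoted above are the same quantity expressed in different units, since 1 mbar·L = 100 Pa × 10⁻³ m³ = 0.1 Pa·m³:

$$10^{-9}\ \text{mbar·L/s} \times 0.1\ \tfrac{\text{Pa·m}^{3}}{\text{mbar·L}} = 10^{-10}\ \text{Pa·m}^{3}/\text{s}$$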
Helium leaks through cracks should not be confused with gas permeation through a bulk material. While helium has documented permeation constants (thus a calculable permeation rate) through glasses, ceramics, and synthetic materials, inert gases such as helium will not permeate most bulk metals. Flight Because it is lighter than air, airships and balloons are inflated with helium for lift. While hydrogen gas is more buoyant and escapes permeating through a membrane at a lower rate, helium has the advantage of being non-flammable, and indeed fire-retardant. Another minor use is in rocketry, where helium is used as an ullage medium to backfill rocket propellant tanks in flight and to condense hydrogen and oxygen to make rocket fuel. It is also used to purge fuel and oxidizer from ground support equipment prior to launch and to pre-cool liquid hydrogen in space vehicles. For example, the Saturn V rocket used in the Apollo program needed about of helium to launch. Minor commercial and recreational uses Helium as a breathing gas has no narcotic properties, so helium mixtures such as trimix, heliox and heliair are used for deep diving to reduce the effects of narcosis, which worsen with increasing depth. As pressure increases with depth, the density of the breathing gas also increases, and the low molecular weight of helium is found to considerably reduce the effort of breathing by lowering the density of the mixture. This reduces the Reynolds number of flow, leading to a reduction of turbulent flow and an increase in laminar flow, which requires less breathing. At depths below divers breathing helium-oxygen mixtures begin to experience tremors and a decrease in psychomotor function, symptoms of high-pressure nervous syndrome. This effect may be countered to some extent by adding an amount of narcotic gas such as hydrogen or nitrogen to a helium–oxygen mixture. Helium–neon lasers, a type of low-powered gas laser producing a red beam, had various practical applications which included barcode readers and laser pointers, before they were almost universally replaced by cheaper diode lasers. For its inertness and high thermal conductivity, neutron transparency, and because it does not form radioactive isotopes under reactor conditions, helium is used as a heat-transfer medium in some gas-cooled nuclear reactors. Helium, mixed with a heavier gas such as xenon, is useful for thermoacoustic refrigeration due to the resulting high heat capacity ratio and low Prandtl number. The inertness of helium has environmental advantages over conventional refrigeration systems which contribute to ozone depletion or global warming. Helium is also used in some hard disk drives. Scientific uses The use of helium reduces the distorting effects of temperature variations in the space between lenses in some telescopes due to its extremely low index of refraction. This method is especially used in solar telescopes where a vacuum tight telescope tube would be too heavy. Helium is a commonly used carrier gas for gas chromatography. The age of rocks and minerals that contain uranium and thorium can be estimated by measuring the level of helium with a process known as helium dating. Helium at low temperatures is used in cryogenics and in certain cryogenic applications. As examples of applications, liquid helium is used to cool certain metals to the extremely low temperatures required for superconductivity, such as in superconducting magnets for magnetic resonance imaging. 
The Large Hadron Collider at CERN uses 96 metric tons of liquid helium to maintain the temperature at . Medical uses Helium was approved for medical use in the United States in April 2020 for humans and animals. As a contaminant While chemically inert, helium contamination impairs the operation of microelectromechanical systems (MEMS) such that iPhones may fail. Inhalation and safety Effects Neutral helium at standard conditions is non-toxic, plays no biological role and is found in trace amounts in human blood. The speed of sound in helium is nearly three times the speed of sound in air. Because the natural resonance frequency of a gas-filled cavity is proportional to the speed of sound in the gas, when helium is inhaled, a corresponding increase occurs in the resonant frequencies of the vocal tract, which is the amplifier of vocal sound. This increase in the resonant frequency of the amplifier (the vocal tract) gives increased amplification to the high-frequency components of the sound wave produced by the direct vibration of the vocal folds, compared to the case when the voice box is filled with air. When a person speaks after inhaling helium gas, the muscles that control the voice box still move in the same way as when the voice box is filled with air; therefore the fundamental frequency (sometimes called pitch) produced by direct vibration of the vocal folds does not change. However, the high-frequency-preferred amplification causes a change in timbre of the amplified sound, resulting in a reedy, duck-like vocal quality. The opposite effect, lowering resonant frequencies, can be obtained by inhaling a dense gas such as sulfur hexafluoride or xenon. Hazards Inhaling helium can be dangerous if done to excess, since helium is a simple asphyxiant and so displaces oxygen needed for normal respiration. Fatalities have been recorded, including a youth who suffocated in Vancouver in 2003 and two adults who suffocated in South Florida in 2006. In 1998, an Australian girl from Victoria fell unconscious and temporarily turned blue after inhaling the entire contents of a party balloon. Inhaling helium directly from pressurized cylinders or even balloon filling valves is extremely dangerous, as high flow rate and pressure can result in barotrauma, fatally rupturing lung tissue. Death caused by helium is rare. The first media-recorded case was that of a 15-year-old girl from Texas who died in 1998 from helium inhalation at a friend's party; the exact type of helium death is unidentified. In the United States, only two fatalities were reported between 2000 and 2004, including a man who died in North Carolina of barotrauma in 2002. A youth asphyxiated in Vancouver during 2003, and a 27-year-old man in Australia had an embolism after breathing from a cylinder in 2000. Since then, two adults asphyxiated in South Florida in 2006, and there were cases in 2009 and 2010, one of whom was a Californian youth who was found with a bag over his head, attached to a helium tank, and another teenager in Northern Ireland died of asphyxiation. At Eagle Point, Oregon a teenage girl died in 2012 from barotrauma at a party. A girl from Michigan died from hypoxia later in the year. 
On February 4, 2015, it was revealed that, during the recording of their main TV show on January 28, a 12-year-old member (name withheld) of Japanese all-girl singing group 3B Junior suffered from air embolism, losing consciousness and falling into a coma as a result of air bubbles blocking the flow of blood to the brain after inhaling huge quantities of helium as part of a game. The incident was not made public until a week later. The staff of TV Asahi held an emergency press conference to communicate that the member had been taken to the hospital and is showing signs of rehabilitation such as moving eyes and limbs, but her consciousness has not yet been sufficiently recovered. Police have launched an investigation due to a neglect of safety measures. The safety issues for cryogenic helium are similar to those of liquid nitrogen; its extremely low temperatures can result in cold burns, and the liquid-to-gas expansion ratio can cause explosions if no pressure-relief devices are installed. Containers of helium gas at 5 to 10 K should be handled as if they contain liquid helium due to the rapid and significant thermal expansion that occurs when helium gas at less than 10 K is warmed to room temperature. At high pressures (more than about 20 atm or two MPa), a mixture of helium and oxygen (heliox) can lead to high-pressure nervous syndrome, a sort of reverse-anesthetic effect; adding a small amount of nitrogen to the mixture can alleviate the problem.
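As a back-of-envelope check of the "nearly three times the speed of sound in air" figure quoted in the Effects subsection above, the ideal-gas expression v = sqrt(γRT/M) can be evaluated for helium and for air; the sketch below uses standard textbook values for the heat-capacity ratios and molar masses and is purely illustrative.

```python
# Ideal-gas speed of sound, v = sqrt(gamma * R * T / M), for helium vs. air.
# gamma and molar masses are standard textbook values, used only for illustration.
from math import sqrt

R = 8.314      # gas constant, J/(mol*K)
T = 293.15     # room temperature, K

def speed_of_sound(gamma, molar_mass):
    """Ideal-gas speed of sound in m/s; molar_mass in kg/mol."""
    return sqrt(gamma * R * T / molar_mass)

v_helium = speed_of_sound(5 / 3, 0.0040026)  # monatomic, M = 4.0026 g/mol
v_air = speed_of_sound(1.4, 0.02896)         # diatomic mixture, M ~ 28.96 g/mol

print(f"helium: {v_helium:.0f} m/s, air: {v_air:.0f} m/s, ratio: {v_helium / v_air:.2f}")
# Roughly 1007 m/s for helium and 343 m/s for air, a ratio of about 2.9,
# consistent with the "nearly three times" figure and the raised vocal resonances.
```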
Hydrocarbon
In organic chemistry, a hydrocarbon is an organic compound consisting entirely of hydrogen and carbon. Hydrocarbons are examples of group 14 hydrides. Hydrocarbons are generally colourless and hydrophobic; their odor is usually faint, and may be similar to that of gasoline or lighter fluid. They occur in a diverse range of molecular structures and phases: they can be gases (such as methane and propane), liquids (such as hexane and benzene), low-melting solids (such as paraffin wax and naphthalene) or polymers (such as polyethylene and polystyrene). In the fossil fuel industries, hydrocarbon refers to naturally occurring petroleum, natural gas and coal, or their hydrocarbon derivatives and purified forms. Combustion of hydrocarbons is the main source of the world's energy. Petroleum is the dominant raw-material source for organic commodity chemicals such as solvents and polymers. Most anthropogenic (human-generated) emissions of greenhouse gases are either carbon dioxide released by the burning of fossil fuels, or methane released from the handling of natural gas or from agriculture. Types As defined by the International Union of Pure and Applied Chemistry's nomenclature of organic chemistry, hydrocarbons are classified as follows: Saturated hydrocarbons, which are the simplest of the hydrocarbon types. They are composed entirely of single bonds and are saturated with hydrogen. The formula for acyclic saturated hydrocarbons (i.e., alkanes) is CnH2n+2. The most general form of saturated hydrocarbons (whether linear or branched species, and whether with or without one or more rings) is CnH2n+2−2r, where r is the number of rings. Those with exactly one ring are the cycloalkanes. Saturated hydrocarbons are the basis of petroleum fuels and may be either linear or branched species. One or more of the hydrogen atoms can be replaced with other atoms, for example chlorine or another halogen: this is called a substitution reaction. An example is the conversion of methane to chloroform using a chlorination reaction. Halogenating a hydrocarbon produces something that is not a hydrocarbon. It is a very common and useful process. Hydrocarbons with the same molecular formula but different structural formulae are called structural isomers. As shown by the example of 3-methylhexane and its higher homologues, branched hydrocarbons can be chiral. Chiral saturated hydrocarbons constitute the side chains of biomolecules such as chlorophyll and tocopherol. Unsaturated hydrocarbons, which have one or more double or triple bonds between carbon atoms. Those with one or more double bonds are called alkenes. Those with one double bond have the formula CnH2n (assuming non-cyclic structures). Those containing triple bonds are called alkynes. Those with one triple bond have the formula CnH2n−2. Aromatic hydrocarbons, also known as arenes, which are hydrocarbons that have at least one aromatic ring. About 10% of total nonmethane organic carbon emissions are aromatic hydrocarbons from the exhaust of gasoline-powered vehicles. The term 'aliphatic' refers to non-aromatic hydrocarbons. Saturated aliphatic hydrocarbons are sometimes referred to as 'paraffins'. Aliphatic hydrocarbons containing a double bond between carbon atoms are sometimes referred to as 'olefins'. Usage The predominant use of hydrocarbons is as a combustible fuel source. Methane is the predominant component of natural gas.
C6 through C10 alkanes, alkenes, cycloalkanes, and aromatic hydrocarbons are the main components of gasoline, naphtha, jet fuel, and specialized industrial solvent mixtures. With the progressive addition of carbon units, the simple non-ring-structured hydrocarbons have higher viscosities, lubricating indices, boiling points, and solidification temperatures. At the opposite extreme from methane lie the heavy tars that remain as the lowest fraction in a crude oil refining retort. They are collected and widely utilized as roofing compounds, pavement material (bitumen), wood preservatives (the creosote series) and as extremely high-viscosity shear-resisting liquids. Some large-scale non-fuel applications of hydrocarbons begin with ethane and propane, which are obtained from petroleum and natural gas. These two gases are converted either to syngas or to ethylene and propylene respectively. Global consumption of benzene in 2021 was estimated at more than 58 million metric tons, projected to increase to 60 million tons in 2022. Hydrocarbons are also prevalent in nature. Some eusocial arthropods, such as the Brazilian stingless bee, Schwarziana quadripunctata, use unique cuticular hydrocarbon "scents" to distinguish kin from non-kin. This hydrocarbon composition varies with age, sex, nest location, and hierarchical position. There is also potential to harvest hydrocarbons from plants like Euphorbia lathyris and E. tirucalli as an alternative and renewable energy source for vehicles that use diesel. Furthermore, endophytic bacteria from plants that naturally produce hydrocarbons have been used in hydrocarbon degradation in attempts to reduce hydrocarbon concentrations in polluted soils. Reactions Saturated hydrocarbons are notable for their inertness. Unsaturated hydrocarbons (alkenes, alkynes and aromatic compounds) react more readily, by means of substitution, addition, and polymerization. At higher temperatures they undergo dehydrogenation, oxidation and combustion. Saturated hydrocarbons Cracking The cracking of saturated hydrocarbons is the main industrial route to alkenes and alkynes. These reactions require heterogeneous catalysts and temperatures >500 °C. Oxidation Widely practiced conversions of hydrocarbons involve their reaction with oxygen. In the presence of excess oxygen, hydrocarbons combust. Under carefully controlled conditions, however, which have been optimized over many years, partial oxidation results. Useful compounds can be obtained in this way: maleic acid from butane, terephthalic acid from xylenes, acetone together with phenol from cumene (isopropylbenzene), and cyclohexanone from cyclohexane. The process, which is called autoxidation, begins with the formation of hydroperoxides (ROOH). Combustion Combustion of hydrocarbons is currently the main source of the world's energy for electric power generation, heating (such as home heating), and transportation. Often this energy is used directly as heat, such as in home heaters, which use either petroleum or natural gas. The hydrocarbon is burnt and the heat is used to heat water, which is then circulated. A similar principle is used to create electrical energy in power plants. Both saturated and unsaturated hydrocarbons undergo this process. Common properties of hydrocarbons are that they produce steam, carbon dioxide and heat during combustion and that oxygen is required for combustion to take place.
The simplest hydrocarbon, methane, burns as follows: CH4 + 2 O2 → CO2 + 2 H2O. In an inadequate supply of air, carbon black and water vapour are formed: CH4 + O2 → C + 2 H2O. And finally, for any linear alkane of n carbon atoms: CnH2n+2 + (3n + 1)/2 O2 → n CO2 + (n + 1) H2O. Partial oxidation characterizes the reactions of alkenes and oxygen. This process is the basis of rancidification and paint drying. Benzene burns with a sooty flame when heated in air: C6H6 + 15/2 O2 → 6 CO2 + 3 H2O. Halogenation Saturated hydrocarbons react with chlorine and fluorine. In the case of chlorination, one of the chlorine atoms replaces a hydrogen atom. The reactions proceed via free-radical pathways, in which the halogen first dissociates into two neutral radical atoms (homolytic fission). For methane: CH4 + Cl2 → CH3Cl + HCl, then CH3Cl + Cl2 → CH2Cl2 + HCl, and so on all the way to CCl4 (carbon tetrachloride). For ethane: C2H6 + Cl2 → C2H5Cl + HCl, then C2H5Cl + Cl2 → C2H4Cl2 + HCl, all the way to C2Cl6 (hexachloroethane). Unsaturated hydrocarbons Substitution Aromatic compounds, almost uniquely for hydrocarbons, undergo substitution reactions. The chemical process practiced on the largest scale is the reaction of benzene and ethene to give ethylbenzene: C6H6 + C2H4 → C6H5C2H5. The resulting ethylbenzene is dehydrogenated to styrene and then polymerized to manufacture polystyrene, a common thermoplastic material. Addition Addition reactions apply to alkenes and alkynes. It is because they add reagents that they are called unsaturated. In this reaction a variety of reagents add "across" the pi-bond(s). Chlorine, hydrogen chloride, water, and hydrogen are illustrative reagents. Polymerization is a form of addition. Alkenes and some alkynes also undergo polymerization by opening of the multiple bonds to produce polyethylene, polybutylene, and polystyrene. The alkyne acetylene polymerizes to produce polyacetylene. Oligomers (chains of a few monomers) may be produced, for example in the Shell higher olefin process, where α-olefins are extended to make longer α-olefins by adding ethylene repeatedly. Metathesis Some hydrocarbons undergo metathesis, in which substituents attached by C–C bonds are exchanged between molecules. For a single C–C bond it is alkane metathesis, for a double C–C bond it is alkene metathesis (olefin metathesis), and for a triple C–C bond it is alkyne metathesis. Origin The vast majority of hydrocarbons found on Earth occur in crude oil, petroleum, coal, and natural gas. For thousands of years they have been exploited and used for a vast range of purposes. Petroleum and coal are generally thought to be products of the decomposition of organic matter. Coal, in contrast to petroleum, is richer in carbon and poorer in hydrogen. Natural gas is the product of methanogenesis. A seemingly limitless variety of compounds make up petroleum, hence the necessity of refineries. These hydrocarbons consist of saturated hydrocarbons, aromatic hydrocarbons, or combinations of the two. Missing in petroleum are alkenes and alkynes; their production requires refineries. Petroleum-derived hydrocarbons are mainly consumed for fuel, but they are also the source of virtually all synthetic organic compounds, including plastics and pharmaceuticals. Natural gas is consumed almost exclusively as fuel. Coal is used as a fuel and as a reducing agent in metallurgy. A small fraction of the hydrocarbon found on Earth, and all currently known hydrocarbon found on other planets and moons, is thought to be abiological. Hydrocarbons such as ethylene, isoprene, and monoterpenes are emitted by living vegetation.
Some hydrocarbons are also widespread and abundant in the Solar System. Lakes of liquid methane and ethane have been found on Titan, Saturn's largest moon, as confirmed by the Cassini–Huygens space probe. Hydrocarbons are also abundant in nebulae, where they form polycyclic aromatic hydrocarbon compounds. Environmental impact Burning hydrocarbons as fuel, which produces carbon dioxide and water, is a major contributor to anthropogenic global warming. Hydrocarbons are introduced into the environment through their extensive use as fuels and chemicals as well as through leaks or accidental spills during exploration, production, refining, or transport of fossil fuels. Anthropogenic hydrocarbon contamination of soil is a serious global issue due to contaminant persistence and the negative impact on human health. When soil is contaminated by hydrocarbons, it can have a significant impact on its microbiological, chemical, and physical properties. This can serve to prevent, slow down or even accelerate the growth of vegetation depending on the exact changes that occur. Crude oil and natural gas are the two largest sources of hydrocarbon contamination of soil. Bioremediation Bioremediation of hydrocarbons from contaminated soil or water is a formidable challenge because of the chemical inertness that characterizes hydrocarbons (hence their survival over millions of years in the source rock). Nonetheless, many strategies have been devised, bioremediation being prominent. The basic problem with bioremediation is the paucity of enzymes that act on these compounds. Nonetheless, the area has received regular attention. Bacteria in the gabbroic layer of the ocean's crust can degrade hydrocarbons, but the extreme environment makes research difficult. Other bacteria, such as Lutibacterium anuloederans, can also degrade hydrocarbons. Mycoremediation, the breaking down of hydrocarbons by mycelium and mushrooms, is also possible. Safety Hydrocarbons are generally of low toxicity, hence the widespread use of gasoline and related volatile products. Aromatic compounds such as benzene and toluene are narcotic and chronic toxins, and benzene in particular is known to be carcinogenic. Certain rare polycyclic aromatic compounds are carcinogenic. Hydrocarbons are highly flammable.
Physical sciences
Hydrocarbons
null
13258
https://en.wikipedia.org/wiki/Halogen
Halogen
The halogens are a group in the periodic table consisting of six chemically related elements: fluorine (F), chlorine (Cl), bromine (Br), iodine (I), and the radioactive elements astatine (At) and tennessine (Ts), though some authors would exclude tennessine as its chemistry is unknown and theoretically expected to be more like that of gallium. In the modern IUPAC nomenclature, this group is known as group 17. The word "halogen" means "salt former" or "salt maker". When halogens react with metals, they produce a wide range of salts, including calcium fluoride, sodium chloride (common table salt), silver bromide and potassium iodide. The group of halogens is the only periodic table group that contains elements in three of the main states of matter at standard temperature and pressure, though not far above room temperature the same becomes true of groups 1 and 15, assuming white phosphorus is taken as the standard state. All of the halogens form acids when bonded to hydrogen. Most halogens are typically produced from minerals or salts. The middle halogens—chlorine, bromine, and iodine—are often used as disinfectants. Organobromides are the most important class of flame retardants, while elemental halogens are dangerous and can be toxic. History The fluorine mineral fluorspar was known as early as 1529. Early chemists realized that fluorine compounds contain an undiscovered element, but were unable to isolate it. In 1860, George Gore, an English chemist, ran a current of electricity through hydrofluoric acid and probably produced fluorine, but he was unable to prove his results at the time. In 1886, Henri Moissan, a chemist in Paris, performed electrolysis on potassium bifluoride dissolved in anhydrous hydrogen fluoride, and successfully isolated fluorine. Hydrochloric acid was known to alchemists and early chemists. However, elemental chlorine was not produced until 1774, when Carl Wilhelm Scheele heated hydrochloric acid with manganese dioxide. Scheele called the element "dephlogisticated muriatic acid", which is how chlorine was known for 33 years. In 1807, Humphry Davy investigated chlorine and discovered that it is an actual element. Chlorine gas was used as a poison gas during World War I. It displaced oxygen in contaminated areas, replacing common oxygenated air with toxic chlorine gas. The gas would burn human tissue externally and internally, especially the lungs, making breathing difficult or impossible depending on the level of contamination. Bromine was discovered in the 1820s by Antoine Jérôme Balard. Balard discovered bromine by passing chlorine gas through a sample of brine. He originally proposed the name muride for the new element, but the French Academy changed the element's name to bromine. Iodine was discovered by Bernard Courtois, who was using seaweed ash as part of a process for saltpeter manufacture. Courtois typically boiled the seaweed ash with water to generate potassium chloride. However, in 1811, Courtois added sulfuric acid to his process and found that his process produced purple fumes that condensed into black crystals. Suspecting that these crystals were a new element, Courtois sent samples to other chemists for investigation. Iodine was proven to be a new element by Joseph Gay-Lussac.
In 1931, Fred Allison claimed to have discovered element 85 with a magneto-optical machine, and named the element Alabamine, but was mistaken. In 1937, Rajendralal De claimed to have discovered element 85 in minerals, and called the element dakine, but he was also mistaken. An attempt at discovering element 85 in 1939 by Horia Hulubei and Yvette Cauchois via spectroscopy was also unsuccessful, as was an attempt in the same year by Walter Minder, who discovered an iodine-like element resulting from beta decay of polonium. Element 85, now named astatine, was produced successfully in 1940 by Dale R. Corson, K.R. Mackenzie, and Emilio G. Segrè, who bombarded bismuth with alpha particles. In 2010, a team led by nuclear physicist Yuri Oganessian involving scientists from the JINR, Oak Ridge National Laboratory, Lawrence Livermore National Laboratory, and Vanderbilt University successfully bombarded berkelium-249 atoms with calcium-48 atoms to make tennessine. Etymology In 1811, the German chemist Johann Schweigger proposed that the name "halogen" – meaning "salt producer", from αλς [hals] "salt" and γενειν [genein] "to beget" – replace the name "chlorine", which had been proposed by the English chemist Humphry Davy. Davy's name for the element prevailed. However, in 1826, the Swedish chemist Baron Jöns Jacob Berzelius proposed the term "halogen" for the elements fluorine, chlorine, and iodine, which produce a sea-salt-like substance when they form a compound with an alkaline metal. The English names of these elements all have the ending -ine. Fluorine's name comes from the Latin word fluere, meaning "to flow", because it was derived from the mineral fluorite, which was used as a flux in metalworking. Chlorine's name comes from the Greek word chloros, meaning "greenish-yellow". Bromine's name comes from the Greek word bromos, meaning "stench". Iodine's name comes from the Greek word iodes, meaning "violet". Astatine's name comes from the Greek word astatos, meaning "unstable". Tennessine is named after the US state of Tennessee, where it was synthesized. Characteristics Chemical The halogens fluorine, chlorine, bromine, and iodine are nonmetals; the chemical properties of astatine and tennessine, two heaviest group 17 members, have not been conclusively investigated. The halogens show trends in chemical bond energy moving from top to bottom of the periodic table column with fluorine deviating slightly. It follows a trend in having the highest bond energy in compounds with other atoms, but it has very weak bonds within the diatomic F2 molecule. This means that further down group 17 in the periodic table, the reactivity of elements decreases because of the increasing size of the atoms. Halogens are highly reactive, and as such can be harmful or lethal to biological organisms in sufficient quantities. This high reactivity is due to the high electronegativity of the atoms due to their high effective nuclear charge. Because the halogens have seven valence electrons in their outermost energy level, they can gain an electron by reacting with atoms of other elements to satisfy the octet rule. Fluorine is the most reactive of all elements; it is the only element more electronegative than oxygen, it attacks otherwise-inert materials such as glass, and it forms compounds with the usually inert noble gases. It is a corrosive and highly toxic gas. 
The reactivity of fluorine is such that, if used or stored in laboratory glassware, it can react with glass in the presence of small amounts of water to form silicon tetrafluoride (SiF4). Thus, fluorine must be handled with substances such as Teflon (which is itself an organofluorine compound), extremely dry glass, or metals such as copper or steel, which form a protective layer of fluoride on their surface. The high reactivity of fluorine allows some of the strongest bonds possible, especially to carbon. For example, Teflon is fluorine bonded with carbon and is extremely resistant to thermal and chemical attacks and has a high melting point. Molecules Diatomic halogen molecules The stable halogens form homonuclear diatomic molecules. Due to relatively weak intermolecular forces, chlorine and fluorine form part of the group known as "elemental gases". The elements become less reactive and have higher melting points as the atomic number increases. The higher melting points are caused by stronger London dispersion forces resulting from more electrons. Compounds Hydrogen halides All of the halogens have been observed to react with hydrogen to form hydrogen halides. For fluorine, chlorine, and bromine, this reaction is in the form of: H2 + X2 → 2HX However, hydrogen iodide and hydrogen astatide can split back into their constituent elements. The hydrogen–halogen reactions become gradually less vigorous toward the heavier halogens. A fluorine–hydrogen reaction is explosive even when it is dark and cold. A chlorine–hydrogen reaction is also explosive, but only in the presence of light and heat. A bromine–hydrogen reaction is even less explosive; it is explosive only when exposed to flames. Iodine and astatine only partially react with hydrogen, forming equilibria. All halogens form binary compounds with hydrogen known as the hydrogen halides: hydrogen fluoride (HF), hydrogen chloride (HCl), hydrogen bromide (HBr), hydrogen iodide (HI), and hydrogen astatide (HAt). All of these compounds form acids when mixed with water. Hydrogen fluoride is the only hydrogen halide that forms hydrogen bonds. Hydrochloric acid, hydrobromic acid, hydroiodic acid, and hydroastatic acid are all strong acids, but hydrofluoric acid is a weak acid. All of the hydrogen halides are irritants. Hydrogen fluoride and hydrogen chloride are highly acidic. Hydrogen fluoride is used as an industrial chemical, and is highly toxic, causing pulmonary edema and damaging cells. Hydrogen chloride is also a dangerous chemical. Breathing in gas with more than fifty parts per million of hydrogen chloride can cause death in humans. Hydrogen bromide is even more toxic and irritating than hydrogen chloride. Breathing in gas with more than thirty parts per million of hydrogen bromide can be lethal to humans. Hydrogen iodide, like other hydrogen halides, is toxic. Metal halides All the halogens are known to react with sodium to form sodium fluoride, sodium chloride, sodium bromide, sodium iodide, and sodium astatide. Heated sodium's reaction with halogens produces bright-orange flames. Sodium's reaction with chlorine is in the form of: 2Na + Cl2 → 2NaCl Iron reacts with fluorine, chlorine, and bromine to form iron(III) halides. These reactions are in the form of: 2Fe + 3X2 → 2FeX3 (X = F, Cl, Br) However, when iron reacts with iodine, it forms only iron(II) iodide. Iron wool can react rapidly with fluorine to form the white compound iron(III) fluoride even in cold temperatures. When chlorine comes into contact with heated iron, they react to form black iron(III) chloride.
However, if the reaction conditions are moist, this reaction will instead result in a reddish-brown product. Iron can also react with bromine to form iron(III) bromide. This compound is reddish-brown in dry conditions. Iron's reaction with bromine is less reactive than its reaction with fluorine or chlorine. A hot iron can also react with iodine, but it forms iron(II) iodide. This compound may be gray, but the reaction is always contaminated with excess iodine, so it is not known for sure. Iron's reaction with iodine is less vigorous than its reaction with the lighter halogens. Interhalogen compounds Interhalogen compounds are in the form of XYn where X and Y are halogens and n is one, three, five, or seven. Interhalogen compounds contain at most two different halogens. Large interhalogens, such as can be produced by a reaction of a pure halogen with a smaller interhalogen such as . All interhalogens except can be produced by directly combining pure halogens in various conditions. Interhalogens are typically more reactive than all diatomic halogen molecules except F2 because interhalogen bonds are weaker. However, the chemical properties of interhalogens are still roughly the same as those of diatomic halogens. Many interhalogens consist of one or more atoms of fluorine bonding to a heavier halogen. Chlorine and bromine can bond with up to five fluorine atoms, and iodine can bond with up to seven fluorine atoms. Most interhalogen compounds are covalent gases. However, some interhalogens are liquids, such as BrF3, and many iodine-containing interhalogens are solids. Organohalogen compounds Many synthetic organic compounds such as plastic polymers, and a few natural ones, contain halogen atoms; these are known as halogenated compounds or organic halides. Chlorine is by far the most abundant of the halogens in seawater, and the only one needed in relatively large amounts (as chloride ions) by humans. For example, chloride ions play a key role in brain function by mediating the action of the inhibitory transmitter GABA and are also used by the body to produce stomach acid. Iodine is needed in trace amounts for the production of thyroid hormones such as thyroxine. Organohalogens are also synthesized through the nucleophilic abstraction reaction. Polyhalogenated compounds Polyhalogenated compounds are industrially created compounds substituted with multiple halogens. Many of them are very toxic and bioaccumulate in humans, and have a very wide application range. They include PCBs, PBDEs, and perfluorinated compounds (PFCs), as well as numerous other compounds. Reactions Reactions with water Fluorine reacts vigorously with water to produce oxygen (O2) and hydrogen fluoride (HF): Chlorine has maximum solubility of ca. 7.1 g Cl2 per kg of water at ambient temperature (21 °C). Dissolved chlorine reacts to form hydrochloric acid (HCl) and hypochlorous acid, a solution that can be used as a disinfectant or bleach: Bromine has a solubility of 3.41 g per 100 g of water, but it slowly reacts to form hydrogen bromide (HBr) and hypobromous acid (HBrO): Iodine, however, is minimally soluble in water (0.03 g/100 g water at 20 °C) and does not react with it. However, iodine will form an aqueous solution in the presence of iodide ion, such as by addition of potassium iodide (KI), because the triiodide ion is formed. Physical and atomic The table below is a summary of the key physical and atomic properties of the halogens. 
Data marked with question marks are either uncertain or are estimations partially based on periodic trends rather than observations. Isotopes Fluorine has one stable and naturally occurring isotope, fluorine-19. However, there are trace amounts in nature of the radioactive isotope fluorine-23, which occurs via cluster decay of protactinium-231. A total of eighteen isotopes of fluorine have been discovered, with atomic masses ranging from 13 to 31. Chlorine has two stable and naturally occurring isotopes, chlorine-35 and chlorine-37. However, there are trace amounts in nature of the isotope chlorine-36, which occurs via spallation of argon-36. A total of 24 isotopes of chlorine have been discovered, with atomic masses ranging from 28 to 51. There are two stable and naturally occurring isotopes of bromine, bromine-79 and bromine-81. A total of 33 isotopes of bromine have been discovered, with atomic masses ranging from 66 to 98. There is one stable and naturally occurring isotope of iodine, iodine-127. However, there are trace amounts in nature of the radioactive isotope iodine-129, which occurs via spallation and from the radioactive decay of uranium in ores. Several other radioactive isotopes of iodine have also been created naturally via the decay of uranium. A total of 38 isotopes of iodine have been discovered, with atomic masses ranging from 108 to 145. There are no stable isotopes of astatine. However, there are four naturally occurring radioactive isotopes of astatine produced via radioactive decay of uranium, neptunium, and plutonium. These isotopes are astatine-215, astatine-217, astatine-218, and astatine-219. A total of 31 isotopes of astatine have been discovered, with atomic masses ranging from 191 to 227. There are no stable isotopes of tennessine. Tennessine has only two known synthetic radioisotopes, tennessine-293 and tennessine-294. Production Approximately six million metric tons of the fluorine mineral fluorite are produced each year. Four hundred-thousand metric tons of hydrofluoric acid are made each year. Fluorine gas is made from hydrofluoric acid produced as a by-product in phosphoric acid manufacture. Approximately 15,000 metric tons of fluorine gas are made per year. The mineral halite is the mineral that is most commonly mined for chlorine, but the minerals carnallite and sylvite are also mined for chlorine. Forty million metric tons of chlorine are produced each year by the electrolysis of brine. Approximately 450,000 metric tons of bromine are produced each year. Fifty percent of all bromine produced is produced in the United States, 35% in Israel, and most of the remainder in China. Historically, bromine was produced by adding sulfuric acid and bleaching powder to natural brine. However, in modern times, bromine is produced by electrolysis, a method invented by Herbert Dow. It is also possible to produce bromine by passing chlorine through seawater and then passing air through the seawater. In 2003, 22,000 metric tons of iodine were produced. Chile produces 40% of all iodine produced, Japan produces 30%, and smaller amounts are produced in Russia and the United States. Until the 1950s, iodine was extracted from kelp. However, in modern times, iodine is produced in other ways. One way that iodine is produced is by mixing sulfur dioxide with nitrate ores, which contain some iodates. Iodine is also extracted from natural gas fields. Even though astatine is naturally occurring, it is usually produced by bombarding bismuth with alpha particles. 
Tennessine is made by using a cyclotron, fusing berkelium-249 and calcium-48 to make tennessine-293 and tennessine-294. Applications Disinfectants Both chlorine and bromine are used as disinfectants for drinking water, swimming pools, fresh wounds, spas, dishes, and surfaces. They kill bacteria and other potentially harmful microorganisms through a process known as sterilization. Their reactivity is also put to use in bleaching. Sodium hypochlorite, which is produced from chlorine, is the active ingredient of most fabric bleaches, and chlorine-derived bleaches are used in the production of some paper products. Lighting Halogen lamps are a type of incandescent lamp using a tungsten filament in bulbs that have small amounts of a halogen, such as iodine or bromine added. This enables the production of lamps that are much smaller than non-halogen incandescent lightbulbs at the same wattage. The gas reduces the thinning of the filament and blackening of the inside of the bulb resulting in a bulb that has a much greater life. Halogen lamps glow at a higher temperature (2800 to 3400 kelvin) with a whiter colour than other incandescent bulbs. However, this requires bulbs to be manufactured from fused quartz rather than silica glass to reduce breakage. Drug components In drug discovery, the incorporation of halogen atoms into a lead drug candidate results in analogues that are usually more lipophilic and less water-soluble. As a consequence, halogen atoms are used to improve penetration through lipid membranes and tissues. It follows that there is a tendency for some halogenated drugs to accumulate in adipose tissue. The chemical reactivity of halogen atoms depends on both their point of attachment to the lead and the nature of the halogen. Aromatic halogen groups are far less reactive than aliphatic halogen groups, which can exhibit considerable chemical reactivity. For aliphatic carbon-halogen bonds, the C-F bond is the strongest and usually less chemically reactive than aliphatic C-H bonds. The other aliphatic-halogen bonds are weaker, their reactivity increasing down the periodic table. They are usually more chemically reactive than aliphatic C-H bonds. As a consequence, the most common halogen substitutions are the less reactive aromatic fluorine and chlorine groups. Biological role Fluoride anions are found in ivory, bones, teeth, blood, eggs, urine, and hair of organisms. Fluoride anions in very small amounts may be essential for humans. There are 0.5 milligrams of fluorine per liter of human blood. Human bones contain 0.2 to 1.2% fluorine. Human tissue contains approximately 50 parts per billion of fluorine. A typical 70-kilogram human contains 3 to 6 grams of fluorine. Chloride anions are essential to a large number of species, humans included. The concentration of chlorine in the dry weight of cereals is 10 to 20 parts per million, while in potatoes the concentration of chloride is 0.5%. Plant growth is adversely affected by chloride levels in the soil falling below 2 parts per million. Human blood contains an average of 0.3% chlorine. Human bone typically contains 900 parts per million of chlorine. Human tissue contains approximately 0.2 to 0.5% chlorine. There is a total of 95 grams of chlorine in a typical 70-kilogram human. Some bromine in the form of the bromide anion is present in all organisms. A biological role for bromine in humans has not been proven, but some organisms contain organobromine compounds. Humans typically consume 1 to 20 milligrams of bromine per day. 
There are typically 5 parts per million of bromine in human blood, 7 parts per million of bromine in human bones, and 7 parts per million of bromine in human tissue. A typical 70-kilogram human contains 260 milligrams of bromine. Humans typically consume less than 100 micrograms of iodine per day. Iodine deficiency can cause intellectual disability. Organoiodine compounds occur in humans in some of the glands, especially the thyroid gland, as well as the stomach, epidermis, and immune system. Foods containing iodine include cod, oysters, shrimp, herring, lobsters, sunflower seeds, seaweed, and mushrooms. However, iodine is not known to have a biological role in plants. There are typically 0.06 milligrams per liter of iodine in human blood, 300 parts per billion of iodine in human bones, and 50 to 700 parts per billion of iodine in human tissue. There are 10 to 20 milligrams of iodine in a typical 70-kilogram human. Astatine, although very scarce, has been found in micrograms in the earth. It has no known biological role because of its high radioactivity, extreme rarity, and has a half-life of just about 8 hours for the most stable isotope. Tennessine is purely man-made and has no other roles in nature. Toxicity The halogens tend to decrease in toxicity towards the heavier halogens. Fluorine gas is extremely toxic; breathing in fluorine at a concentration of 25 parts per million is potentially lethal. Hydrofluoric acid is also toxic, being able to penetrate skin and cause highly painful burns. In addition, fluoride anions are toxic, but not as toxic as pure fluorine. Fluoride can be lethal in amounts of 5 to 10 grams. Prolonged consumption of fluoride above concentrations of 1.5 mg/L is associated with a risk of dental fluorosis, an aesthetic condition of the teeth. At concentrations above 4 mg/L, there is an increased risk of developing skeletal fluorosis, a condition in which bone fractures become more common due to the hardening of bones. Current recommended levels in water fluoridation, a way to prevent dental caries, range from 0.7 to 1.2 mg/L to avoid the detrimental effects of fluoride while at the same time reaping the benefits. People with levels between normal levels and those required for skeletal fluorosis tend to have symptoms similar to arthritis. Chlorine gas is highly toxic. Breathing in chlorine at a concentration of 3 parts per million can rapidly cause a toxic reaction. Breathing in chlorine at a concentration of 50 parts per million is highly dangerous. Breathing in chlorine at a concentration of 500 parts per million for a few minutes is lethal. In addition, breathing in chlorine gas is highly painful because of its corrosive properties. Hydrochloric acid is the acid of chlorine, while relatively nontoxic, it is highly corrosive and releases very irritating and toxic hydrogen chloride gas in open air. Pure bromine is somewhat toxic but less toxic than fluorine and chlorine. One hundred milligrams of bromine is lethal. Bromide anions are also toxic, but less so than bromine. Bromide has a lethal dose of 30 grams. Iodine is somewhat toxic, being able to irritate the lungs and eyes, with a safety limit of 1 milligram per cubic meter. When taken orally, 3 grams of iodine can be lethal. Iodide anions are mostly nontoxic, but these can also be deadly if ingested in large amounts. 
Astatine is radioactive and thus highly dangerous, but it has not been produced in macroscopic quantities and hence it is most unlikely that its toxicity will be of much relevance to the average individual. Tennessine cannot be chemically investigated due to how short its half-life is, although its radioactivity would make it very dangerous. Superhalogen Certain aluminium clusters have superatom properties. These aluminium clusters are generated as anions ( with n = 1, 2, 3, ... ) in helium gas and reacted with a gas containing iodine. When analyzed by mass spectrometry one main reaction product turns out to be . These clusters of 13 aluminium atoms with an extra electron added do not appear to react with oxygen when it is introduced in the same gas stream. Assuming each atom liberates its 3 valence electrons, this means 40 electrons are present, which is one of the magic numbers for sodium and implies that these numbers are a reflection of the noble gases. Calculations show that the additional electron is located in the aluminium cluster at the location directly opposite from the iodine atom. The cluster must therefore have a higher electron affinity for the electron than iodine and therefore the aluminium cluster is called a superhalogen (i.e., the vertical electron detachment energies of the moieties that make up the negative ions are larger than those of any halogen atom). The cluster component in the ion is similar to an iodide ion or a bromide ion. The related cluster is expected to behave chemically like the triiodide ion.
Physical sciences
Chemical element groups
null
13259
https://en.wikipedia.org/wiki/Home%20page
Home page
A home page (or homepage) is the main web page of a website. Usually, the home page is located at the root of the website's domain or subdomain. For example, if the domain is example.com, the home page is likely located at the URL www.example.com/. The term may also refer to the start page shown in a web browser when the application first opens. Function A home page is the main web page that a visitor will view when they navigate to a website via a search engine, and it may also function as a landing page to attract visitors. In some cases, the home page is a site directory, particularly when a website has multiple home pages. Good home page design is usually a high priority for a website; for example, a news website may curate headlines and first paragraphs of top stories, with links to full articles. According to Homepage Usability, the home page is the "most important page on any website" and receives the most views of any page. A poorly designed home page can overwhelm and deter visitors from the site. One important use of home pages is communicating the identity and value of a company. Browser start page When a web browser is launched, it will automatically open at least one web page. This is the browser's start page, which is also called its home page. Start pages can be a website or a special browser page, such as thumbnails of frequently visited websites. Moreover, there is a niche market of websites intended to be used solely as start pages.
Technology
Internet
null
13263
https://en.wikipedia.org/wiki/Hexadecimal
Hexadecimal
Hexadecimal (also known as base-16 or simply hex) is a positional numeral system that represents numbers using a radix (base) of sixteen. Unlike the decimal system representing numbers using ten symbols, hexadecimal uses sixteen distinct symbols, most often the symbols "0"–"9" to represent values 0 to 9 and "A"–"F" to represent values from ten to fifteen. Software developers and system designers widely use hexadecimal numbers because they provide a convenient representation of binary-coded values. Each hexadecimal digit represents four bits (binary digits), also known as a nibble (or nybble). For example, an 8-bit byte is two hexadecimal digits and its value can be written as 00 to FF in hexadecimal. In mathematics, a subscript is typically used to specify the base. For example, the decimal value 711 would be expressed in hexadecimal as 2C7₁₆. In programming, several notations denote hexadecimal numbers, usually involving a prefix. The prefix 0x is used in C, which would denote this value as 0x2C7. Hexadecimal is used in the transfer encoding Base 16, in which each byte of the plain text is broken into two 4-bit values and represented by two hexadecimal digits. Representation Written representation In most current use cases, the letters A–F or a–f represent the values 10–15, while the numerals 0–9 are used to represent their decimal values. There is no universal convention to use lowercase or uppercase, so each is prevalent or preferred in particular environments by community standards or convention; even mixed case is used. Some seven-segment displays use mixed-case 'A b C d E F' to distinguish the digits A–F from one another and from 0–9. There is some standardization of using spaces (rather than commas or another punctuation mark) to separate hex values in a long list. For instance, in the following hex dump, each 8-bit byte is a 2-digit hex number, with spaces between them, while the 32-bit offset at the start is an 8-digit hex number.
00000000  57 69 6b 69 70 65 64 69 61 2c 20 74 68 65 20 66
00000010  72 65 65 20 65 6e 63 79 63 6c 6f 70 65 64 69 61
00000020  20 74 68 61 74 20 61 6e 79 6f 6e 65 20 63 61 6e
00000030  20 65 64 69 74 0a
Distinguishing from decimal In contexts where the base is not clear, hexadecimal numbers can be ambiguous and confused with numbers expressed in other bases. There are several conventions for expressing values unambiguously. A numerical subscript (itself written in decimal) can give the base explicitly: 159₁₀ is decimal 159; 159₁₆ is hexadecimal 159, which equals 345₁₀. Some authors prefer a text subscript, such as 159decimal and 159hex, or 159d and 159h. Donald Knuth introduced the use of a particular typeface to represent a particular radix in his book The TeXbook; hexadecimal representations are written there in a typewriter typeface. In linear text systems, such as those used in most computer programming environments, a variety of methods have arisen: Although best known from the C programming language (and the many languages influenced by C), the prefix 0x to indicate a hex constant may have had origins in the IBM Stretch systems. It is derived from the 0 prefix already in use for octal constants. Byte values can be expressed in hexadecimal with the prefix \x followed by two hex digits: '\x1B' represents the Esc control character; "\x1B[0m\x1B[25;1H" is a string containing 11 characters with two embedded Esc characters. To output an integer as hexadecimal with the printf function family, the format conversion code %X or %x is used.
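As a small, hedged illustration of the byte-to-two-hex-digits correspondence described above (this sketch is not part of the original article; it uses only standard JavaScript built-ins such as Number.prototype.toString, String.prototype.padStart and parseInt):

// Each 8-bit byte (0–255) corresponds to exactly two hexadecimal digits, 00 to FF.
const byte = 255;
console.log(byte.toString(16));                                // "ff"
console.log(byte.toString(16).padStart(2, "0").toUpperCase()); // "FF"

// Parsing goes the other way: read a string of hex digits as a base-16 number.
console.log(parseInt("ff", 16)); // 255
console.log(0xFF === 255);       // true: 0x marks a hexadecimal integer literal in source code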
In XML and XHTML, characters can be expressed as hexadecimal numeric character references using the notation &#xcode;, for instance &#x0054; represents the character U+0054 (the uppercase letter "T"). If there is no x, the number is decimal (thus &#0084; is the same character). In Intel-derived assembly languages and Modula-2, hexadecimal is denoted with a suffixed h or H: FFh or 05A3H. Some implementations require a leading zero when the first hexadecimal digit character is not a decimal digit, so one would write 0FFh instead of FFh. Some other implementations (such as NASM) allow C-style numbers (0x42). Other assembly languages (6502, Motorola), Pascal, Delphi, some versions of BASIC (Commodore), GameMaker Language, Godot and Forth use $ as a prefix: $5A3, $C1F27ED. Some assembly languages (Microchip) use the notation H'ABCD' (for ABCD₁₆). Similarly, Fortran 95 uses Z'ABCD'. Ada and VHDL enclose hexadecimal numerals in based "numeric quotes": 16#5A3#, 16#C1F27ED#. For bit vector constants VHDL uses the notation x"5A3", x"C1F27ED". Verilog represents hexadecimal constants in the form 8'hFF, where 8 is the number of bits in the value and FF is the hexadecimal constant. The Icon and Smalltalk languages use the prefix 16r: 16r5A3 PostScript and the Bourne shell and its derivatives denote hex with prefix 16#: 16#5A3, 16#C1F27ED. Common Lisp uses the prefixes #x and #16r. Setting the variables *read-base* and *print-base* to 16 can also be used to switch the reader and printer of a Common Lisp system to hexadecimal number representation for reading and printing numbers. Thus hexadecimal numbers can be represented without the #x or #16r prefix code, when the input or output base has been changed to 16. MSX BASIC, QuickBASIC, FreeBASIC and Visual Basic prefix hexadecimal numbers with &H: &H5A3 BBC BASIC and Locomotive BASIC use & for hex. TI-89 and 92 series use a 0h prefix: 0h5A3, 0hC1F27ED ALGOL 68 uses the prefix 16r to denote hexadecimal numbers: 16r5a3, 16rC1F27ED. Binary, quaternary (base-4), and octal numbers can be specified similarly. The most common format for hexadecimal on IBM mainframes (zSeries) and midrange computers (IBM i) running the traditional OSes (zOS, zVSE, zVM, TPF, IBM i) is X'5A3' or X'C1F27ED', and is used in Assembler, PL/I, COBOL, JCL, scripts, commands and other places. This format was common on other (and now obsolete) IBM systems as well. Occasionally quotation marks were used instead of apostrophes. Syntax that is always Hex Sometimes the numbers are known to be Hex. In URIs (including URLs), character codes are written as hexadecimal pairs prefixed with %: %20 is the code for the space (blank) character, ASCII code point 20 in hex, 32 in decimal. In the Unicode standard, a character value is represented with U+ followed by the hex value, e.g. U+00A1 is the inverted exclamation point (¡). Color references in HTML, CSS and X Window can be expressed with six hexadecimal digits (two each for the red, green and blue components, in that order) prefixed with #: magenta, for example, is represented as #FF00FF. CSS also allows 3-hexdigit abbreviations with one hexdigit per component: #FA3 abbreviates #FFAA33 (a golden orange). In MIME (e-mail extensions) quoted-printable encoding, character codes are written as hexadecimal pairs prefixed with =: Espa=F1a is "España" (F1 is the code for ñ in the ISO/IEC 8859-1 character set). PostScript binary data (such as image pixels) can be expressed as unprefixed consecutive hexadecimal pairs: ...
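Several of the always-hex syntaxes above can be reproduced from JavaScript; the following sketch (an added illustration, not from the source) relies only on the standard built-ins encodeURIComponent, codePointAt and parseInt:

// URL percent-encoding: each encoded byte becomes "%" followed by two hex digits.
console.log(encodeURIComponent("a b")); // "a%20b" -- 20 is the hex code for the space character

// Unicode code points are written U+ plus a hex value; JavaScript string escapes use \u{...}.
console.log("\u{00A1}");                      // "¡", the inverted exclamation point U+00A1
console.log("¡".codePointAt(0).toString(16)); // "a1"

// HTML/CSS colors: two hex digits per red, green and blue component.
const magenta = "#FF00FF";
console.log(parseInt(magenta.slice(1, 3), 16)); // 255 (the red component)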
Any IPv6 address can be written as eight groups of four hexadecimal digits (sometimes called hextets), where each group is separated by a colon (). This, for example, is a valid IPv6 address: or abbreviated by removing leading zeros as (IPv4 addresses are usually written in decimal). Globally unique identifiers are written as thirty-two hexadecimal digits, often in unequal hyphen-separated groupings, for example . Other symbols for 10–15 and mostly different symbol sets The use of the letters A through F to represent the digits above 9 was not universal in the early history of computers. During the 1950s, some installations, such as Bendix-14, favored using the digits 0 through 5 with an overline to denote the values as , , , , and . The SWAC (1950) and Bendix G-15 (1956) computers used the lowercase letters u, v, w, x, y and z for the values 10 to 15. The ORDVAC and ILLIAC I (1952) computers (and some derived designs, e.g. BRLESC) used the uppercase letters K, S, N, J, F and L for the values 10 to 15. The Librascope LGP-30 (1956) used the letters F, G, J, K, Q and W for the values 10 to 15. On the PERM (1956) computer, hexadecimal numbers were written as letters O for zero, A to N and P for 1 to 15. Many machine instructions had mnemonic hex-codes (A=add, M=multiply, L=load, F=fixed-point etc.); programs were written without instruction names. The Honeywell Datamatic D-1000 (1957) used the lowercase letters b, c, d, e, f, and g whereas the Elbit 100 (1967) used the uppercase letters B, C, D, E, F and G for the values 10 to 15. The Monrobot XI (1960) used the letters S, T, U, V, W and X for the values 10 to 15. The NEC parametron computer NEAC 1103 (1960) used the letters D, G, H, J, K (and possibly V) for values 10–15. The Pacific Data Systems 1020 (1964) used the letters L, C, A, S, M and D for the values 10 to 15. New numeric symbols and names were introduced in the Bibi-binary notation by Boby Lapointe in 1968. Bruce Alan Martin of Brookhaven National Laboratory considered the choice of A–F "ridiculous". In a 1968 letter to the editor of the CACM, he proposed an entirely new set of symbols based on the bit locations. In 1972, Ronald O. Whitaker of Rowco Engineering Co. proposed a triangular font that allows "direct binary reading" to "permit both input and output from computers without respect to encoding matrices." Some seven-segment display decoder chips (i.e., 74LS47) show unexpected output due to logic designed only to produce 0–9 correctly. Verbal and digital representations Since there were no traditional numerals to represent the quantities from ten to fifteen, alphabetic letters were re-employed as a substitute. Most European languages lack non-decimal-based words for some of the numerals eleven to fifteen. Some people read hexadecimal numbers digit by digit, like a phone number, or using the NATO phonetic alphabet, the Joint Army/Navy Phonetic Alphabet, or a similar ad-hoc system. In the wake of the adoption of hexadecimal among IBM System/360 programmers, Magnuson (1968) suggested a pronunciation guide that gave short names to the letters of hexadecimal – for instance, "A" was pronounced "ann", B "bet", C "chris", etc. Another naming-system was published online by Rogers (2007) that tries to make the verbal representation distinguishable in any case, even when the actual number does not contain numbers A–F. Examples are listed in the tables below. Yet another naming system was elaborated by Babb (2015), based on a joke in Silicon Valley. 
The system proposed by Babb was further improved by Atkins-Bittner in 2015–2016. Others have proposed using the verbal Morse Code conventions to express four-bit hexadecimal digits, with "dit" and "dah" representing zero and one, respectively, so that "0000" is voiced as "dit-dit-dit-dit" (....), dah-dit-dit-dah (-..-) voices the digit with a value of nine, and "dah-dah-dah-dah" (----) voices the hexadecimal digit for decimal 15. Systems of counting on digits have been devised for both binary and hexadecimal. Arthur C. Clarke suggested using each finger as an on/off bit, allowing finger counting from zero to 1023₁₀ on ten fingers. Another system for counting up to FF₁₆ (255₁₀) is illustrated on the right. Signs The hexadecimal system can express negative numbers the same way as in decimal: −2A to represent −42₁₀, −B01D9 to represent −721369₁₀, and so on. Hexadecimal can also be used to express the exact bit patterns used in the processor, so a sequence of hexadecimal digits may represent a signed or even a floating-point value. This way, the negative number −42₁₀ can be written as FFFF FFD6 in a 32-bit CPU register (in two's complement), as C228 0000 in a 32-bit FPU register or C045 0000 0000 0000 in a 64-bit FPU register (in the IEEE floating-point standard). Hexadecimal exponential notation Just as decimal numbers can be represented in exponential notation, so too can hexadecimal numbers. P notation uses the letter P (or p, for "power"), whereas E (or e) serves a similar purpose in decimal E notation. The number after the P is decimal and represents the binary exponent. Increasing the exponent by 1 multiplies by 2, not 16. Usually, the number is normalized so that the hexadecimal digits start with 1 (zero is usually 0 with no P). Example: 1.3DEp42 represents 1.3DE₁₆ × 2⁴². P notation is required by the IEEE 754-2008 binary floating-point standard and can be used for floating-point literals in the C99 edition of the C programming language. Using the %a or %A conversion specifiers, this notation can be produced by implementations of the printf family of functions following the C99 specification and Single Unix Specification (IEEE Std 1003.1) POSIX standard. Conversion Binary conversion Most computers manipulate binary data, but it is difficult for humans to work with a large number of digits for even a relatively small binary number. Although most humans are familiar with the base 10 system, it is much easier to map binary to hexadecimal than to decimal because each hexadecimal digit maps to a whole number of bits (4₁₀). This example converts 1111₂ to base ten. Since each position in a binary numeral can contain either a 1 or a 0, its value may be easily determined by its position from the right: 0001₂ = 1₁₀, 0010₂ = 2₁₀, 0100₂ = 4₁₀, 1000₂ = 8₁₀. Therefore: 1111₂ = 8₁₀ + 4₁₀ + 2₁₀ + 1₁₀ = 15₁₀. With little practice, mapping 1111₂ to F₁₆ in one step becomes easy (see the table in the written representation section). The advantage of using hexadecimal rather than decimal increases rapidly with the size of the number. When the number becomes large, conversion to decimal is very tedious. However, when mapping to hexadecimal, it is trivial to regard the binary string as 4-digit groups and map each to a single hexadecimal digit. This example shows the conversion of a binary number to decimal, mapping each digit to the decimal value, and adding the results. Compare this to the conversion to hexadecimal, where each group of four digits can be considered independently and converted directly. The conversion from hexadecimal to binary is equally direct.
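The 4-bit grouping just described is mechanical enough to write down directly; here is a minimal sketch (added for illustration, not from the source) that pads a binary string to a multiple of four bits and maps each group to one hexadecimal digit:

// Convert a binary string to hexadecimal by mapping each group of four bits to one hex digit.
function binaryToHex(bin) {
  const padded = bin.padStart(Math.ceil(bin.length / 4) * 4, "0"); // pad on the left to a multiple of 4 bits
  let hex = "";
  for (let i = 0; i < padded.length; i += 4) {
    hex += parseInt(padded.slice(i, i + 4), 2).toString(16).toUpperCase();
  }
  return hex;
}

console.log(binaryToHex("1111"));         // "F"
console.log(binaryToHex("101111010110")); // "BD6"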
Other simple conversions Although quaternary (base 4) is little used, it can easily be converted to and from hexadecimal or binary. Each hexadecimal digit corresponds to a pair of quaternary digits, and each quaternary digit corresponds to a pair of binary digits. In the above example 2 5 C₁₆ = 02 11 30₄. The octal (base 8) system can also be converted with relative ease, although not quite as trivially as with bases 2 and 4. Each octal digit corresponds to three binary digits, rather than four. Therefore, we can convert between octal and hexadecimal via an intermediate conversion to binary followed by regrouping the binary digits in groups of either three or four. Division-remainder in source base As with all bases there is a simple algorithm for converting a representation of a number to hexadecimal by doing integer division and remainder operations in the source base. In theory, this is possible from any base, but for most humans only decimal, and for most computers only binary (which can be converted by far more efficient methods), can be easily handled with this method. Let d be the number to represent in hexadecimal, and the series hᵢhᵢ₋₁...h₂h₁ be the hexadecimal digits representing the number.
1. i ← 1
2. hᵢ ← d mod 16
3. d ← (d − hᵢ) / 16
4. If d = 0, return the series hᵢ...h₁; else, increment i and go to step 2.
"16" may be replaced with any other base that may be desired. The following is a JavaScript implementation of the above algorithm for converting any number to a hexadecimal in String representation. Its purpose is to illustrate the above algorithm. To work with data seriously, however, it is much more advisable to work with bitwise operators.
function toHex(d) {
  var r = d % 16;
  if (d - r == 0) {
    return toChar(r);
  }
  return toHex((d - r) / 16) + toChar(r);
}

function toChar(n) {
  const alpha = "0123456789ABCDEF";
  return alpha.charAt(n);
}
Conversion through addition and multiplication It is also possible to make the conversion by assigning each place in the source base the hexadecimal representation of its place value, before carrying out multiplication and addition to get the final representation. For example, to convert the number B3AD to decimal, one can split the hexadecimal number into its digits: B (11₁₀), 3 (3₁₀), A (10₁₀) and D (13₁₀), and then get the final result by multiplying each decimal representation by 16^p (p being the corresponding hex digit position, counting from right to left, beginning with 0). In this case, we have that B3AD = (11 × 16³) + (3 × 16²) + (10 × 16¹) + (13 × 16⁰) = 45056 + 768 + 160 + 13, which is 45997 in base 10. Tools for conversion Many computer systems provide a calculator utility capable of performing conversions between the various radices, frequently including hexadecimal. In Microsoft Windows, the Calculator utility can be set to Programmer mode, which allows conversions between radix 16 (hexadecimal), 10 (decimal), 8 (octal), and 2 (binary), the bases most commonly used by programmers. In Programmer Mode, the on-screen numeric keypad includes the hexadecimal digits A through F, which are active when "Hex" is selected. In hex mode, however, the Windows Calculator supports only integers. Elementary arithmetic Elementary operations such as division can be carried out indirectly through conversion to an alternate numeral system, such as the commonly used decimal system or the binary system where each hex digit corresponds to four binary digits.
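The place-value method just described can also be written out mechanically. This sketch (an added illustration, not part of the source) is the inverse of the toHex function above, accumulating result = result * 16 + digit as it scans the hex string from left to right, which is equivalent to multiplying each digit by 16^p:

// Convert a hexadecimal string to a number by accumulating place values.
function fromHex(s) {
  const alpha = "0123456789ABCDEF";
  let result = 0;
  for (const ch of s.toUpperCase()) {
    result = result * 16 + alpha.indexOf(ch); // shift previous digits one hex place, then add the new digit
  }
  return result;
}

console.log(fromHex("B3AD")); // 45997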
Alternatively, one can also perform elementary operations directly within the hex system itself — by relying on its addition/multiplication tables and its corresponding standard algorithms such as long division and the traditional subtraction algorithm. Real numbers Rational numbers As with other numeral systems, the hexadecimal system can be used to represent rational numbers, although repeating expansions are common since sixteen (1016) has only a single prime factor: two. For any base, 0.1 (or "1/10") is always equivalent to one divided by the representation of that base value in its own number system. Thus, whether dividing one by two for binary or dividing one by sixteen for hexadecimal, both of these fractions are written as 0.1. Because the radix 16 is a perfect square (42), fractions expressed in hexadecimal have an odd period much more often than decimal ones, and there are no cyclic numbers (other than trivial single digits). Recurring digits are exhibited when the denominator in lowest terms has a prime factor not found in the radix; thus, when using hexadecimal notation, all fractions with denominators that are not a power of two result in an infinite string of recurring digits (such as thirds and fifths). This makes hexadecimal (and binary) less convenient than decimal for representing rational numbers since a larger proportion lies outside its range of finite representation. All rational numbers finitely representable in hexadecimal are also finitely representable in decimal, duodecimal and sexagesimal: that is, any hexadecimal number with a finite number of digits also has a finite number of digits when expressed in those other bases. Conversely, only a fraction of those finitely representable in the latter bases are finitely representable in hexadecimal. For example, decimal 0.1 corresponds to the infinite recurring representation 0.1 in hexadecimal. However, hexadecimal is more efficient than duodecimal and sexagesimal for representing fractions with powers of two in the denominator. For example, 0.062510 (one-sixteenth) is equivalent to 0.116, 0.0912, and 0;3,4560. Irrational numbers The table below gives the expansions of some common irrational numbers in decimal and hexadecimal. Powers Powers of two have very simple expansions in hexadecimal. The first sixteen powers of two are shown below. Cultural history The traditional Chinese units of measurement were base-16. For example, one jīn (斤) in the old system equals sixteen taels. The suanpan (Chinese abacus) can be used to perform hexadecimal calculations such as additions and subtractions. As with the duodecimal system, there have been occasional attempts to promote hexadecimal as the preferred numeral system. These attempts often propose specific pronunciation and symbols for the individual numerals. Some proposals unify standard measures so that they are multiples of 16. An early such proposal was put forward by John W. Nystrom in Project of a New System of Arithmetic, Weight, Measure and Coins: Proposed to be called the Tonal System, with Sixteen to the Base, published in 1862. Nystrom among other things suggested hexadecimal time, which subdivides a day by 16, so that there are 16 "hours" (or "10 tims", pronounced tontim) in a day. The word hexadecimal is first recorded in 1952. It is macaronic in the sense that it combines Greek ἕξ (hex) "six" with Latinate -decimal. 
The all-Latin alternative sexadecimal (compare the word sexagesimal for base 60) is older, and saw at least occasional use from the late 19th century. It was still in use in the 1950s in Bendix documentation. Schwartzman (1994) argues that use of sexadecimal may have been avoided because of its suggestive abbreviation to sex. Many western languages since the 1960s have adopted terms equivalent in formation to hexadecimal (e.g. French hexadécimal, Italian esadecimale, Romanian hexazecimal, Serbian хексадецимални, etc.), but others have introduced terms which substitute native words for "sixteen" (e.g. Greek δεκαεξαδικός, Icelandic sextándakerfi, Russian шестнадцатеричной, etc.). Terminology and notation did not become settled until the end of the 1960s. In 1969, Donald Knuth argued that the etymologically correct term would be senidenary, or possibly sedenary, a Latinate term intended to convey "grouped by 16", modelled on binary, ternary, quaternary, etc. According to Knuth's argument, the correct terms for decimal and octal arithmetic would be denary and octonary, respectively. Alfred B. Taylor used senidenary in his mid-1800s work on alternative number bases, although he rejected base 16 because of its "incommodious number of digits". The now-current notation using the letters A to F established itself as the de facto standard beginning in 1966, in the wake of the publication of the Fortran IV manual for IBM System/360, which (unlike earlier variants of Fortran) recognized a standard for entering hexadecimal constants. As noted above, alternative notations were used by NEC (1960) and The Pacific Data Systems 1020 (1964). The standard adopted by IBM seems to have become widely adopted by 1968, when Bruce Alan Martin, in his letter to the editor of the CACM, complained about it. Martin's argument was that use of numerals 0 to 9 in nondecimal numbers "imply to us a base-ten place-value scheme": "Why not use entirely new symbols (and names) for the seven or fifteen nonzero digits needed in octal or hex. Even use of the letters A through P would be an improvement, but entirely new symbols could reflect the binary nature of the system". He also argued that "re-using alphabetic letters for numerical digits represents a gigantic backward step from the invention of distinct, non-alphabetic glyphs for numerals sixteen centuries ago" (as Brahmi numerals, and later in a Hindu–Arabic numeral system), and that the recent ASCII standards (ASA X3.4-1963 and USAS X3.4-1968) "should have preserved six code table positions following the ten decimal digits -- rather than needlessly filling these with punctuation characters" (":;<=>?") that might have been placed elsewhere among the 128 available positions.
Base16 (transfer encoding)
Base16 (as a proper name without a space) can also refer to a binary-to-text encoding belonging to the same family as Base32, Base58, and Base64. In this case, data is broken into 4-bit sequences, and each value (between 0 and 15 inclusive) is encoded using one of 16 symbols from the ASCII character set. Although any 16 symbols from the ASCII character set can be used, in practice the ASCII digits "0"–"9" and the letters "A"–"F" (or the lowercase "a"–"f") are always chosen in order to align with standard written notation for hexadecimal numbers.
There are several advantages of Base16 encoding: Most programming languages already have facilities to parse ASCII-encoded hexadecimal Being exactly half a byte, 4-bits is easier to process than the 5 or 6 bits of Base32 and Base64 respectively The symbols 0–9 and A–F are universal in hexadecimal notation, so it is easily understood at a glance without needing to rely on a symbol lookup table. Many CPU architectures have dedicated instructions that allow access to a half-byte (otherwise known as a "nibble"), making it more efficient in hardware than Base32 and Base64 The main disadvantages of Base16 encoding are: Space efficiency is only 50%, since each 4-bit value from the original data will be encoded as an 8-bit byte. In contrast, Base32 and Base64 encodings have a space efficiency of 63% and 75% respectively. Possible added complexity of having to accept both uppercase and lowercase letters Support for Base16 encoding is ubiquitous in modern computing. It is the basis for the W3C standard for URL percent encoding, where a character is replaced with a percent sign "%" and its Base16-encoded form. Most modern programming languages directly include support for formatting and parsing Base16-encoded numbers.
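A minimal sketch of a Base16 encoder and decoder in JavaScript follows; the function names hexEncode and hexDecode are illustrative only, not taken from any particular library. Each byte is split into its high and low 4-bit halves, and each half is mapped to one of the sixteen symbols.
function hexEncode(bytes) {
  const alpha = "0123456789ABCDEF";
  let out = "";
  for (const b of bytes) {
    // High nibble first, then low nibble: one byte becomes exactly two symbols,
    // which is why the space efficiency is 50%.
    out += alpha.charAt((b >> 4) & 0x0F) + alpha.charAt(b & 0x0F);
  }
  return out;
}
function hexDecode(text) {
  const bytes = [];
  for (let i = 0; i < text.length; i += 2) {
    bytes.push(parseInt(text.substring(i, i + 2), 16)); // accepts upper or lower case
  }
  return bytes;
}
// hexEncode([72, 105]) returns "4869" (the ASCII bytes of "Hi");
// prefixing each two-symbol pair with "%" gives the form used by URL percent encoding.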
Mathematics
Basics
null
13266
https://en.wikipedia.org/wiki/Histogram
Histogram
A histogram is a visual representation of the distribution of quantitative data. To construct a histogram, the first step is to "bin" (or "bucket") the range of values— divide the entire range of values into a series of intervals—and then count how many values fall into each interval. The bins are usually specified as consecutive, non-overlapping intervals of a variable. The bins (intervals) are adjacent and are typically (but not required to be) of equal size. Histograms give a rough sense of the density of the underlying distribution of the data, and often for density estimation: estimating the probability density function of the underlying variable. The total area of a histogram used for probability density is always normalized to 1. If the length of the intervals on the x-axis are all 1, then a histogram is identical to a relative frequency plot. Histograms are sometimes confused with bar charts. In a histogram, each bin is for a different range of values, so altogether the histogram illustrates the distribution of values. But in a bar chart, each bar is for a different category of observations (e.g., each bar might be for a different population), so altogether the bar chart can be used to compare different categories. Some authors recommend that bar charts always have gaps between the bars to clarify that they are not histograms. Etymology The term "histogram" was first introduced by Karl Pearson, the founder of mathematical statistics, in lectures delivered in 1892 at University College London. Pearson's term is sometimes incorrectly said to combine the Greek root γραμμα (gramma) = "figure" or "drawing" with the root ἱστορία (historia) = "inquiry" or "history". Alternatively the root ἱστίον (histion) is also proposed, meaning "web" or "tissue" (as in histology, the study of biological tissue). Both of these etymologies are incorrect, and in fact Pearson, who knew Ancient Greek well, derived the term from a different if homophonous Greek root, ἱστός = "something set upright", referring to the vertical bars in the graph. Pearson's new term was embedded in a series of other analogous neologisms, such as "stigmogram" and "radiogram". Pearson himself noted in 1895 that although the term "histogram" was new, the type of graph it designates was "a common form of graphical representation". In fact the technique of using a bar graph to represent statistical measurements was devised by the Scottish economist, William Playfair, in his Commercial and political atlas (1786). Examples This is the data for the histogram to the right, using 500 items: The words used to describe the patterns in a histogram are: "symmetric", "skewed left" or "right", "unimodal", "bimodal" or "multimodal". It is a good idea to plot the data using several different bin widths to learn more about it. Here is an example on tips given in a restaurant. The U.S. Census Bureau found that there were 124 million people who work outside of their homes. Using their data on the time occupied by travel to work, the table below shows the absolute number of people who responded with travel times "at least 30 but less than 35 minutes" is higher than the numbers for the categories above and below it. This is likely due to people rounding their reported journey time. The problem of reporting values as somewhat arbitrarily rounded numbers is a common phenomenon when collecting data from people. {| class="wikitable" style="text-align:center" |+Data by absolute numbers |- ! Interval !! Width !! Quantity !! 
Quantity/width |- | 0 || 5 || 4180 || 836 |- | 5 || 5 || 13687 || 2737 |- | 10 || 5 || 18618 || 3723 |- | 15 || 5 || 19634 || 3926 |- | 20 || 5 || 17981 || 3596 |- | 25 || 5 || 7190 || 1438 |- | 30 || 5 || 16369 || 3273 |- | 35 || 5 || 3212 || 642 |- | 40 || 5 || 4122 || 824 |- | 45 || 15 || 9200 || 613 |- | 60 || 30 || 6461 || 215 |- | 90 || 60 || 3435 || 57 |} This histogram shows the number of cases per unit interval as the height of each block, so that the area of each block is equal to the number of people in the survey who fall into its category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers, with Q in thousands. {| class="wikitable" style="text-align:center" |+Data by proportion |- ! Interval !! Width !! Quantity (Q) !! Q/total/width |- | 0 || 5 || 4180 || 0.0067 |- | 5 || 5 || 13687 || 0.0221 |- | 10 || 5 || 18618 || 0.0300 |- | 15 || 5 || 19634 || 0.0316 |- | 20 || 5 || 17981 || 0.0290 |- | 25 || 5 || 7190 || 0.0116 |- | 30 || 5 || 16369 || 0.0264 |- | 35 || 5 || 3212 || 0.0052 |- | 40 || 5 || 4122 || 0.0066 |- | 45 || 15 || 9200 || 0.0049 |- | 60 || 30 || 6461 || 0.0017 |- | 90 || 60 || 3435 || 0.0005 |} This histogram differs from the first only in the vertical scale. The area of each block is the fraction of the total that each category represents, and the total area of all the bars is equal to 1 (the fraction meaning "all"). The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram. In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies: the height of each is the average frequency density for the interval. The intervals are placed together in order to show that the data represented by the histogram, while exclusive, is also contiguous. (E.g., in a histogram it is possible to have two connecting intervals of 10.5–20.5 and 20.5–33.5, but not two connecting intervals of 10.5–20.5 and 22.5–32.5. Empty intervals are represented as empty and not skipped.) Mathematical definitions The data used to construct a histogram are generated via a function mi that counts the number of observations that fall into each of the disjoint categories (known as bins). Thus, if we let n be the total number of observations and k be the total number of bins, the histogram data mi meet the following conditions: A histogram can be thought of as a simplistic kernel density estimation, which uses a kernel to smooth frequencies over the bins. This yields a smoother probability density function, which will in general more accurately reflect distribution of the underlying variable. The density estimate could be plotted as an alternative to the histogram, and is usually drawn as a curve rather than a set of boxes. Histograms are nevertheless preferred in applications, when their statistical properties need to be modeled. The correlated variation of a kernel density estimate is very difficult to describe mathematically, while it is simple for a histogram where each bin varies independently. An alternative to kernel density estimation is the average shifted histogram, which is fast to compute and gives a smooth curve estimate of the density without using kernels. Cumulative histogram A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin. 
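To make the relationship between the two tables explicit, the short sketch below reuses the quantities from the tables (in thousands of people) and computes both the absolute frequency density (quantity per unit width) and the unit-area height (quantity divided by the grand total and by the width).
const travelTime = [
  { start: 0, width: 5, quantity: 4180 },
  { start: 5, width: 5, quantity: 13687 },
  { start: 10, width: 5, quantity: 18618 },
  { start: 15, width: 5, quantity: 19634 },
  { start: 20, width: 5, quantity: 17981 },
  { start: 25, width: 5, quantity: 7190 },
  { start: 30, width: 5, quantity: 16369 },
  { start: 35, width: 5, quantity: 3212 },
  { start: 40, width: 5, quantity: 4122 },
  { start: 45, width: 15, quantity: 9200 },
  { start: 60, width: 30, quantity: 6461 },
  { start: 90, width: 60, quantity: 3435 },
];
const total = travelTime.reduce((sum, bin) => sum + bin.quantity, 0); // roughly 124 million, since quantities are in thousands
const heights = travelTime.map((bin) => ({
  interval: bin.start,
  density: bin.quantity / bin.width,           // height in the absolute-number histogram
  unitArea: bin.quantity / total / bin.width,  // height in the unit-area histogram
}));
// heights[0] gives density 836 and unitArea of about 0.0067, matching the first rows of the tables;
// summing density * width recovers the total, and summing unitArea * width gives 1.
The cumulative histogram mentioned just above simply accumulates these counts from the left.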
That is, the cumulative histogram Mi of a histogram mj is defined as: Number of bins and width There is no "best" number of bins, and different bin sizes can reveal different features of the data. Grouping data is at least as old as Graunt's work in the 17th century, but no systematic guidelines were given until Sturges's work in 1926. Using wider bins where the density of the underlying data points is low reduces noise due to sampling randomness; using narrower bins where the density is high (so the signal drowns the noise) gives greater precision to the density estimation. Thus varying the bin-width within a histogram can be beneficial. Nonetheless, equal-width bins are widely used. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. Depending on the actual data distribution and the goals of the analysis, different bin widths may be appropriate, so experimentation is usually needed to determine an appropriate width. There are, however, various useful guidelines and rules of thumb. The number of bins k can be assigned directly or can be calculated from a suggested bin width h as: The braces indicate the ceiling function. Square-root choice which takes the square root of the number of data points in the sample and rounds to the next integer. This rule is suggested by a number of elementary statistics textbooks and widely implemented in many software packages. Sturges's formula Sturges's rule is derived from a binomial distribution and implicitly assumes an approximately normal distribution. Sturges's formula implicitly bases bin sizes on the range of the data, and can perform poorly if , because the number of bins will be small—less than seven—and unlikely to show trends in the data well. On the other extreme, Sturges's formula may overestimate bin width for very large datasets, resulting in oversmoothed histograms. It may also perform poorly if the data are not normally distributed. When compared to Scott's rule and the Terrell-Scott rule, two other widely accepted formulas for histogram bins, the output of Sturges's formula is closest when . Rice rule The Rice rule is presented as a simple alternative to Sturges's rule. Doane's formula Doane's formula is a modification of Sturges's formula which attempts to improve its performance with non-normal data. where is the estimated 3rd-moment-skewness of the distribution and Scott's normal reference rule Bin width is given by where is the sample standard deviation. Scott's normal reference rule is optimal for random samples of normally distributed data, in the sense that it minimizes the integrated mean squared error of the density estimate. This is the default rule used in Microsoft Excel. Terrell–Scott rule The Terrell–Scott rule is not a normal reference rule. It gives the minimum number of bins required for an asymptotically optimal histogram, where optimality is measured by the integrated mean squared error. The bound is derived by finding the 'smoothest' possible density, which turns out to be . Any other density will require more bins, hence the above estimate is also referred to as the 'oversmoothed' rule. The similarity of the formulas and the fact that Terrell and Scott were at Rice University when the proposed it suggests that this is also the origin of the Rice rule. Freedman–Diaconis rule The Freedman–Diaconis rule gives bin width as: which is based on the interquartile range, denoted by IQR. 
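As a compact restatement of several of these rules in their commonly quoted forms (square-root, Sturges, Rice, and Scott), the following JavaScript sketch computes the suggested bin counts for a sample; the constants are the standard textbook values for these rules, not something derived here, and the function name is illustrative.
function suggestedBins(values) {
  const n = values.length;
  const mean = values.reduce((s, v) => s + v, 0) / n;
  const sd = Math.sqrt(values.reduce((s, v) => s + (v - mean) * (v - mean), 0) / (n - 1));
  const range = Math.max(...values) - Math.min(...values);
  const scottWidth = 3.49 * sd * Math.pow(n, -1 / 3); // Scott's normal reference rule: h = 3.49 * sigma * n^(-1/3)
  return {
    squareRoot: Math.ceil(Math.sqrt(n)),        // k = ceil(sqrt(n))
    sturges: Math.ceil(Math.log2(n)) + 1,       // k = ceil(log2(n)) + 1
    rice: Math.ceil(2 * Math.cbrt(n)),          // k = ceil(2 * n^(1/3))
    scott: Math.ceil(range / scottWidth),       // number of bins implied by Scott's width
  };
}
// For n = 500 (as in the first example above): squareRoot gives 23, sturges 10, rice 16,
// while Scott's rule depends on the sample's standard deviation and range.
The Freedman–Diaconis width introduced just above follows the same n^(-1/3) pattern, as discussed next.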
It replaces 3.5σ of Scott's rule with 2 IQR, which is less sensitive than the standard deviation to outliers in data. Minimizing cross-validation estimated squared error This approach of minimizing integrated mean squared error from Scott's rule can be generalized beyond normal distributions, by using leave-one out cross validation: Here, is the number of datapoints in the kth bin, and choosing the value of h that minimizes J will minimize integrated mean squared error. Shimazaki and Shinomoto's choice The choice is based on minimization of an estimated L2 risk function where and are mean and biased variance of a histogram with bin-width , and . Variable bin widths Rather than choosing evenly spaced bins, for some applications it is preferable to vary the bin width. This avoids bins with low counts. A common case is to choose equiprobable bins, where the number of samples in each bin is expected to be approximately equal. The bins may be chosen according to some known distribution or may be chosen based on the data so that each bin has samples. When plotting the histogram, the frequency density is used for the dependent axis. While all bins have approximately equal area, the heights of the histogram approximate the density distribution. For equiprobable bins, the following rule for the number of bins is suggested: This choice of bins is motivated by maximizing the power of a Pearson chi-squared test testing whether the bins do contain equal numbers of samples. More specifically, for a given confidence interval it is recommended to choose between 1/2 and 1 times the following equation: Where is the probit function. Following this rule for would give between and ; the coefficient of 2 is chosen as an easy-to-remember value from this broad optimum. Remark A good reason why the number of bins should be proportional to is the following: suppose that the data are obtained as independent realizations of a bounded probability distribution with smooth density. Then the histogram remains equally "rugged" as tends to infinity. If is the "width" of the distribution (e. g., the standard deviation or the inter-quartile range), then the number of units in a bin (the frequency) is of order and the relative standard error is of order . Compared to the next bin, the relative change of the frequency is of order provided that the derivative of the density is non-zero. These two are of the same order if is of order , so that is of order . This simple cubic root choice can also be applied to bins with non-constant widths. Applications In hydrology the histogram and estimated density function of rainfall and river discharge data, analysed with a probability distribution, are used to gain insight in their behaviour and frequency of occurrence. An example is shown in the blue figure. In many Digital image processing programs there is an histogram tool, which show you the distribution of the contrast / brightness of the pixels.
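As a sketch of the equiprobable, variable-width construction described above (the quantile helper below is a crude empirical quantile, adequate for illustration only), each bin is chosen so that it holds roughly the same number of samples, and the frequency density rather than the raw count is used as the height.
function equiprobableBins(values, binCount) {
  const sorted = [...values].sort((a, b) => a - b);
  const n = sorted.length;
  const bins = [];
  for (let i = 0; i < binCount; i++) {
    // Edges at consecutive empirical quantiles, so each bin holds about n / binCount samples.
    // (Ties in the data would need special handling; they are ignored in this sketch.)
    const lo = sorted[Math.round((i / binCount) * (n - 1))];
    const hi = sorted[Math.round(((i + 1) / binCount) * (n - 1))];
    bins.push({ from: lo, to: hi, density: (n / binCount) / (hi - lo) });
  }
  return bins;
}
// With skewed data the bins near the mode come out narrow and the tail bins wide,
// but each holds roughly the same number of samples, so plotting density (not count)
// as the height still traces the shape of the distribution.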
Mathematics
Statistics
null
13292
https://en.wikipedia.org/wiki/Hypoxia%20%28medicine%29
Hypoxia (medicine)
Hypoxia is a condition in which the body or a region of the body is deprived of adequate oxygen supply at the tissue level. Hypoxia may be classified as either generalized, affecting the whole body, or local, affecting a region of the body. Although hypoxia is often a pathological condition, variations in arterial oxygen concentrations can be part of the normal physiology, for example, during strenuous physical exercise. Hypoxia differs from hypoxemia and anoxemia, in that hypoxia refers to a state in which oxygen present in a tissue or the whole body is insufficient, whereas hypoxemia and anoxemia refer specifically to states that have low or no oxygen in the blood. Hypoxia in which there is complete absence of oxygen supply is referred to as anoxia. Hypoxia can be due to external causes, when the breathing gas is hypoxic, or internal causes, such as reduced effectiveness of gas transfer in the lungs, reduced capacity of the blood to carry oxygen, compromised general or local perfusion, or inability of the affected tissues to extract oxygen from, or metabolically process, an adequate supply of oxygen from an adequately oxygenated blood supply. Generalized hypoxia occurs in healthy people when they ascend to high altitude, where it causes altitude sickness leading to potentially fatal complications: high altitude pulmonary edema (HAPE) and high altitude cerebral edema (HACE). Hypoxia also occurs in healthy individuals when breathing inappropriate mixtures of gases with a low oxygen content, e.g., while diving underwater, especially when using malfunctioning closed-circuit rebreather systems that control the amount of oxygen in the supplied air. Mild, non-damaging intermittent hypoxia is used intentionally during altitude training to develop an athletic performance adaptation at both the systemic and cellular level. Hypoxia is a common complication of preterm birth in newborn infants. Because the lungs develop late in pregnancy, premature infants frequently possess underdeveloped lungs. To improve blood oxygenation, infants at risk of hypoxia may be placed inside incubators that provide warmth, humidity, and supplemental oxygen. More serious cases are treated with continuous positive airway pressure (CPAP). Classification Hypoxia exists when there is a reduced amount of oxygen in the tissues of the body. Hypoxemia refers to a reduction in arterial oxygenation below the normal range, regardless of whether gas exchange is impaired in the lung, arterial oxygen content (CaO2 – which represents the amount of oxygen delivered to the tissues) is adequate, or tissue hypoxia exists. The classification categories are not always mutually exclusive, and hypoxia can be a consequence of a wide variety of causes. By cause Hypoxic hypoxia, also referred to as generalised hypoxia, may be caused by: Hypoventilation, which is insufficient ventilation of the lungs due to any cause (fatigue, excessive work of breathing, barbiturate poisoning, pneumothorax, sleep apnea, etc.). Low-inspired oxygen partial pressure, which may be caused by breathing normal air at low ambient pressures due to altitude, by breathing hypoxic breathing gas at an unsuitable depth, by breathing inadequately re-oxygenated recycled breathing gas from a rebreather, life support system, or anesthetic machine. Hypoxia of ascent (latent hypoxia) in freediving and rebreather diving. Airway obstruction, choking, drowning. 
Chronic obstructive pulmonary disease (COPD) Neuromuscular diseases or interstitial lung disease Malformed vascular system such as an anomalous coronary artery. Hypoxemic hypoxia is a lack of oxygen caused by low oxygen tension in the arterial blood, due to the inability of the lungs to sufficiently oxygenate the blood. Causes include hypoventilation, impaired alveolar diffusion, and pulmonary shunting. This definition overlaps considerably with that of hypoxic hypoxia. is hypoxia from hypoxemia due to abnormal pulmonary function, and occurs when the lungs receive adequately oxygenated gas which does not oxygenate the blood sufficiently. It may be caused by: Ventilation perfusion mismatch (V/Q mismatch), which can be either low or high. A reduced V/Q ratio can be caused by impaired ventilation, which may be a consequence of conditions such as bronchitis, obstructive airway disease, mucus plugs, or pulmonary edema, which limit or obstruct the ventilation. In this situation there is not enough oxygen in the alveolar gas to fully oxygenate the blood volume passing through, and PaO2 will be low. Conversely, an increased V/Q ratio tends to be a consequence of impaired perfusion, in which circumstances the blood supply is insufficient to carry the available oxygen, PaO2 will be normal, but tissues will be insufficiently perfused to meet the oxygen demand. A V/Q mismatch can also occur when the surface area available for gas exchange in the lungs is decreased. Pulmonary shunt, in which blood passes from the right to the left side of the heart without being oxygenated. This may be due to anatomical shunts, in which the blood bypasses the alveoli, via intracardiac shunts, pulmonary arteriovenous malformations, fistulas, and hepatopulmonary syndrome, or physiological shunting, in which blood passes through non-ventilated alveoli. Impaired diffusion, a reduced capacity for gas molecules to move between the air in the alveoli and the blood, which occurs when alveolar–capillary membranes thicken. This can happen in interstitial lung diseases such as pulmonary fibrosis, sarcoidosis, hypersensitivity pneumonitis, and connective tissue disorders. , also known as ischemic hypoxia or stagnant hypoxia, is caused by abnormally low blood flow to the lungs, which can occur during shock, cardiac arrest, severe congestive heart failure, or abdominal compartment syndrome, where the main dysfunction is in the cardiovascular system, causing a major reduction in perfusion. Arterial gas is adequately oygenated in the lungs, and the tissues are able to accept the oxygen available, but the flow rate to the tissues is insufficient. Venous oxygenation is particularly low. Anemic hypoxia or hypemic hypoxia is the lack of capacity of the blood to carry the normal level of oxygen. It can be caused by anemia or: Carbon monoxide poisoning, in which carbon monoxide combines with the hemoglobin, to form carboxyhemoglobin (HbCO) preventing it from transporting oxygen. Methemoglobinemia, a change in the hemoglobin molecule from a ferrous ion (Fe2+) to a ferric ion (Fe3+), which has a lesser capacity to bind free oxygen molecules, and a greater affinity for bound oxygen. This causes a left shift in the O2–Hb curve. It can be congenital or caused by medications, food additives or toxins, including chloroquine, benzene, nitrites, benzocaine. Histotoxic hypoxia (Dysoxia) or occurs when the cells of the affected tissues are unable to use oxygen provided by normally oxygenated hemoglobin. 
Examples include cyanide poisoning, which inhibits cytochrome c oxidase, an enzyme required for cellular respiration in mitochondria. Methanol poisoning has a similar effect, as the metabolism of methanol produces formic acid, which inhibits mitochondrial cytochrome oxidase. Intermittent hypoxic training induces mild generalized hypoxia for short periods as a training method to improve sporting performance. This is not considered a medical condition. Acute cerebral hypoxia leading to blackout can occur during freediving. This is a consequence of prolonged voluntary apnea underwater, and generally occurs in trained athletes in good health and good physical condition.
By extent
Hypoxia may affect the whole body, or just some parts.
Generalized hypoxia
The term generalized hypoxia may refer to hypoxia affecting the whole body, or may be used as a synonym for hypoxic hypoxia, which occurs when there is insufficient oxygen in the breathing gas to oxygenate the blood to a level that will adequately support normal metabolic processes, and which will inherently affect all perfused tissues. The symptoms of generalized hypoxia depend on its severity and acceleration of onset. In the case of altitude sickness, where hypoxia develops gradually, the symptoms include fatigue, numbness / tingling of extremities, nausea, and cerebral hypoxia. These symptoms are often difficult to identify, but early detection of symptoms can be critical. In severe hypoxia, or hypoxia of very rapid onset, there may be ataxia, confusion, disorientation, hallucinations, behavioral change, severe headaches, reduced level of consciousness, papilloedema, breathlessness, pallor, tachycardia, and pulmonary hypertension, eventually leading to the late signs of cyanosis, slow heart rate, cor pulmonale, and low blood pressure, followed by heart failure, eventually leading to shock and death. Because hemoglobin is a darker red when it is not bound to oxygen (deoxyhemoglobin), as opposed to the rich red color that it has when bound to oxygen (oxyhemoglobin), when seen through the skin it has an increased tendency to reflect blue light back to the eye. In cases where the oxygen is displaced by another molecule, such as carbon monoxide, the skin may appear 'cherry red' instead of cyanotic. Hypoxia can cause premature birth, and injure the liver, among other deleterious effects.
Localized hypoxia
Hypoxia that is localized to a region of the body, such as an organ or a limb, is usually the consequence of ischemia, the reduced perfusion to that organ or limb, and may not necessarily be associated with general hypoxemia. A locally reduced perfusion is generally caused by an increased resistance to flow through the blood vessels of the affected area. Ischemia is a restriction in blood supply to any tissue, muscle group, or organ, causing a shortage of oxygen. Ischemia is generally caused by problems with blood vessels, with resultant damage to or dysfunction of tissue, i.e. hypoxia and microvascular dysfunction. It also means local hypoxia in a given part of a body, sometimes resulting from vascular occlusion such as vasoconstriction, thrombosis, or embolism. Ischemia comprises not only insufficiency of oxygen, but also reduced availability of nutrients and inadequate removal of metabolic wastes. Ischemia can be partial (poor perfusion) or a total blockage. Compartment syndrome is a condition in which increased pressure within one of the body's anatomical compartments results in insufficient blood supply to tissue within that space.
There are two main types: acute and chronic. Compartments of the leg or arm are most commonly involved. If tissue is not being perfused properly, it may feel cold and appear pale; if severe, hypoxia can result in cyanosis, a blue discoloration of the skin. If hypoxia is very severe, a tissue may eventually become gangrenous. By affected tissues and organs Any living tissue can be affected by hypoxia, but some are particularly sensitive, or have more noticeable or notable consequences. Cerebral hypoxia Cerebral hypoxia is hypoxia specifically involving the brain. The four categories of cerebral hypoxia in order of increasing severity are: diffuse cerebral hypoxia (DCH), focal cerebral ischemia, cerebral infarction, and global cerebral ischemia. Prolonged hypoxia induces neuronal cell death via apoptosis, resulting in a hypoxic brain injury. Oxygen deprivation can be hypoxic (reduced general oxygen availability) or ischemic (oxygen deprivation due to a disruption in blood flow) in origin. Brain injury as a result of oxygen deprivation is generally termed hypoxic injury. Hypoxic ischemic encephalopathy (HIE) is a condition that occurs when the entire brain is deprived of an adequate oxygen supply, but the deprivation is not total. While HIE is associated in most cases with oxygen deprivation in the neonate due to birth asphyxia, it can occur in all age groups, and is often a complication of cardiac arrest. Corneal hypoxia Although corneal hypoxia can arise from any of several causes, it is primarily attributable to the prolonged use of contact lenses. The corneas are not perfused and get their oxygen from the atmosphere by diffusion. Impermeable contact lenses form a barrier to this diffusion, and therefore can cause damage to the corneas. Symptoms may include irritation, excessive tearing and blurred vision. The sequelae of corneal hypoxia include punctate keratitis, corneal neovascularization and epithelial microcysts. Intrauterine hypoxia Intrauterine hypoxia, also known as fetal hypoxia, occurs when the fetus is deprived of an adequate supply of oxygen. It may be due to a variety of reasons such as prolapse or occlusion of the umbilical cord, placental infarction, maternal diabetes (prepregnancy or gestational diabetes) and maternal smoking. Intrauterine growth restriction may cause or be the result of hypoxia. Intrauterine hypoxia can cause cellular damage that occurs within the central nervous system (the brain and spinal cord). This results in an increased mortality rate, including an increased risk of sudden infant death syndrome (SIDS). Oxygen deprivation in the fetus and neonate have been implicated as either a primary or as a contributing risk factor in numerous neurological and neuropsychiatric disorders such as epilepsy, attention deficit hyperactivity disorder, eating disorders and cerebral palsy. Tumor hypoxia Tumor hypoxia is the situation where tumor cells have been deprived of oxygen. As a tumor grows, it rapidly outgrows its blood supply, leaving portions of the tumor with regions where the oxygen concentration is significantly lower than in healthy tissues. Hypoxic microenvironements in solid tumors are a result of available oxygen being consumed within 70 to 150 μm of tumour vasculature by rapidly proliferating tumor cells thus limiting the amount of oxygen available to diffuse further into the tumor tissue. The severity of hypoxia is related to tumor types and varies between different types. 
Research has shown that the level of oxygenation in hypoxic tumor tissues is poorer than normal tissues and it is reported somewhere between 1%–2% O2. In order to support continuous growth and proliferation in challenging hypoxic environments, cancer cells are found to alter their metabolism. Furthermore, hypoxia is known to change cell behavior and is associated with extracellular matrix remodeling and increased migratory and metastatic behavior. Tumour hypoxia is usually associated with highly malignant tumours, which frequently do not respond well to treatment. Vestibular system In acute exposure to hypoxic hypoxia on the vestibular system and the visuo-vestibular interactions, the gain of the vestibulo-ocular reflex (VOR) decreases under mild hypoxia at altitude. Postural control is also disturbed by hypoxia at altitude, postural sway is increased, and there is a correlation between hypoxic stress and adaptive tracking performance. Signs and symptoms Arterial oxygen tension can be measured by blood gas analysis of an arterial blood sample, and less reliably by pulse oximetry, which is not a complete measure of circulatory oxygen sufficiency. If there is insufficient blood flow or insufficient hemoglobin in the blood (anemia), tissues can be hypoxic even when there is high arterial oxygen saturation. Cyanosis Headache Increased reaction time, disorientation, and uncoordinated movement. Impaired judgment, confusion, memory loss and cognitive problems. Euphoria or dissociation Visual impairment A moderate level of hypoxia can cause a generalized partial loss of color vision affecting both red-green and blue-yellow discrimination at an altitude of . Lightheaded or dizzy sensation, vertigo Fatigue, drowsiness, or tiredness Shortness of breath Palpitations may occur in the initial phases. Later, the heart rate may reduce significantly degree. In severe cases, abnormal heart rhythms may develop. Nausea and vomiting Initially raised blood pressure followed by lowered blood pressure as the condition progresses. Severe hypoxia can cause loss of consciousness, seizures or convulsions, coma and eventually death. Breathing rate may slow down and become shallow and the pupils may not respond to light. Tingling in fingers and toes Numbness Complications Local tissue death and gangrene is a relatively common complication of ischaemic hypoxia. (diabetes, etc.) Brain damage – cortical blindness is a known but uncommon complication of acute hypoxic damage to the cerebral cortex. Obstructive sleep apnea syndrome is a risk factor for cerebrovascular disease and cognitive dysfunction. Causes Oxygen passively diffuses in the lung alveoli according to a concentration gradient, also referred to as a partial pressure gradient. Inhaled air rapidly reaches saturation with water vapour, which slightly reduces the partial pressures of the other components. Oxygen diffuses from the inhaled air to arterial blood, where its partial pressure is around 100 mmHg (13.3 kPa). In the blood, oxygen is bound to hemoglobin, a protein in red blood cells. The binding capacity of hemoglobin is influenced by the partial pressure of oxygen in the environment, as described by the oxygen–hemoglobin dissociation curve. A smaller amount of oxygen is transported in solution in the blood. In systemic tissues, oxygen again diffuses down a concentration gradient into cells and their mitochondria, where it is used to produce energy in conjunction with the breakdown of glucose, fats, and some amino acids. 
Hypoxia can result from a failure at any stage in the delivery of oxygen to cells. This can include low partial pressures of oxygen in the breathing gas, problems with diffusion of oxygen in the lungs through the interface between air and blood, insufficient available hemoglobin, problems with blood flow to the end user tissue, problems with the breathing cycle regarding rate and volume, and physiological and mechanical dead space. Experimentally, oxygen diffusion becomes rate limiting when arterial oxygen partial pressure falls to 60 mmHg (5.3 kPa) or below. Almost all the oxygen in the blood is bound to hemoglobin, so interfering with this carrier molecule limits oxygen delivery to the perfused tissues. Hemoglobin increases the oxygen-carrying capacity of blood by about 40-fold, with the ability of hemoglobin to carry oxygen influenced by the partial pressure of oxygen in the local environment, a relationship described in the oxygen–hemoglobin dissociation curve. When the ability of hemoglobin to carry oxygen is degraded, a hypoxic state can result. Ischemia Ischemia, meaning insufficient blood flow to a tissue, can also result in hypoxia in the affected tissues. This is called 'ischemic hypoxia'. Ischemia can be caused by an embolism, a heart attack that decreases overall blood flow, trauma to a tissue that results in damage reducing perfusion, and a variety of other causes. A consequence of insufficient blood flow causing local hypoxia is gangrene that occurs in diabetes. Diseases such as peripheral vascular disease can also result in local hypoxia. Symptoms are worse when a limb is used, increasing the oxygen demand in the active muscles. Pain may also be felt as a result of increased hydrogen ions leading to a decrease in blood pH (acidosis) created as a result of anaerobic metabolism. G-LOC, or g-force induced loss of consciousness, is a special case of ischemic hypoxia which occurs when the body is subjected to high enough acceleration sustained for long enough to lower cerebral blood pressure and circulation to the point where loss of consciousness occurs due to cerebral hypoxia. The human body is most sensitive to longitudinal acceleration towards the head, as this causes the largest hydrostatic pressure deficit in the head. Hypoxemic hypoxia This refers specifically to hypoxic states where the arterial content of oxygen is insufficient. This can be caused by alterations in respiratory drive, such as in respiratory alkalosis, physiological or pathological shunting of blood, diseases interfering in lung function resulting in a ventilation-perfusion mismatch, such as a pulmonary embolus, or alterations in the partial pressure of oxygen in the environment or lung alveoli, such as may occur at altitude or when diving. Common disorders that can cause respiratory dysfunction include trauma to the head and spinal cord, nontraumatic acute myelopathies, demyelinating disorders, stroke, Guillain–Barré syndrome, and myasthenia gravis. These dysfunctions may necessitate mechanical ventilation. Some chronic neuromuscular disorders such as motor neuron disease and muscular dystrophy may require ventilatory support in advanced stages. Carbon monoxide poisoning Carbon monoxide competes with oxygen for binding sites on hemoglobin molecules. As carbon monoxide binds with hemoglobin hundreds of times tighter than oxygen, it can prevent the carriage of oxygen. Carbon monoxide poisoning can occur acutely, as with smoke intoxication, or over a period of time, as with cigarette smoking. 
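One rough way to put numbers on this, offered here as a conventional textbook estimate rather than anything stated in this article, is the usual arterial oxygen content calculation, with the carbon-monoxide-bound fraction of hemoglobin treated as unavailable for oxygen carriage. The constants 1.34 mL O2 per gram of hemoglobin and 0.003 mL O2 per dL per mmHg, and the 20% carboxyhemoglobin figure in the comments, are commonly quoted illustrative values.
function arterialO2Content(hbGramsPerDl, saO2Fraction, paO2mmHg, coHbFraction) {
  const functionalHb = hbGramsPerDl * (1 - coHbFraction); // hemoglobin still free to bind oxygen
  const carried = 1.34 * functionalHb * saO2Fraction;     // mL O2 per dL bound to hemoglobin
  const dissolved = 0.003 * paO2mmHg;                     // mL O2 per dL in physical solution
  return carried + dissolved;
}
// arterialO2Content(15, 0.98, 100, 0)   is about 20 mL O2/dL (normal);
// arterialO2Content(15, 0.98, 100, 0.2) is about 16 mL O2/dL: 20% COHb removes roughly a fifth
// of the carrying capacity, before the leftward shift of the dissociation curve (described below)
// makes the remaining oxygen harder to unload.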
Due to physiological processes, carbon monoxide is maintained at a resting level of 4–6 ppm. This is increased in urban areas (7–13 ppm) and in smokers (20–40 ppm). A carbon monoxide level of 40 ppm is equivalent to a reduction in hemoglobin levels of 10 g/L. Carbon monoxide has a second toxic effect, namely removing the allosteric shift of the oxygen dissociation curve and shifting the foot of the curve to the left. In so doing, the hemoglobin is less likely to release its oxygen at the peripheral tissues. Certain abnormal hemoglobin variants also have higher than normal affinity for oxygen, and so are also poor at delivering oxygen to the periphery. Altitude Atmospheric pressure reduces with altitude and proportionally, so does the oxygen content of the air. The reduction in the partial pressure of inspired oxygen at higher altitudes lowers the oxygen saturation of the blood, ultimately leading to hypoxia. The clinical features of altitude sickness include: sleep problems, dizziness, headache and oedema. Hypoxic breathing gases The breathing gas may contain an insufficient partial pressure of oxygen. Such situations may lead to unconsciousness without symptoms since carbon dioxide levels remain normal and the human body senses pure hypoxia poorly. Hypoxic breathing gases can be defined as mixtures with a lower oxygen fraction than air, though gases containing sufficient oxygen to reliably maintain consciousness at normal sea level atmospheric pressure may be described as normoxic even when the oxygen fraction is slightly below normoxic. Hypoxic breathing gas mixtures in this context are those which will not reliably maintain consciousness at sea level pressure. One of the most widespread circumstances of exposure to hypoxic breathing gas is ascent to altitudes where the ambient pressure drops sufficiently to reduce the partial pressure of oxygen to hypoxic levels. Gases with as little as 2% oxygen by volume in a helium diluent are used for deep diving operations. The ambient pressure at 190 msw is sufficient to provide a partial pressure of about 0.4 bar, which is suitable for saturation diving. As the divers are decompressed, the breathing gas must be oxygenated to maintain a breathable atmosphere. It is also possible for the breathing gas for diving to have a dynamically controlled oxygen partial pressure, known as a set point, which is maintained in the breathing gas circuit of a diving rebreather by addition of oxygen and diluent gas to maintain the desired oxygen partial pressure at a safe level between hypoxic and hyperoxic at the ambient pressure due to the current depth. A malfunction of the control system may lead to the gas mixture becoming hypoxic at the current depth. A special case of hypoxic breathing gas is encountered in deep freediving where the partial pressure of the oxygen in the lung gas is depleted during the dive, but remains sufficient at depth, and when it drops during ascent, it becomes too hypoxic to maintain consciousness, and the diver loses consciousness before reaching the surface. Hypoxic gases may also occur in industrial, mining, and firefighting environments. Some of these may also be toxic or narcotic, others are just asphyxiant. Some are recognisable by smell, others are odourless. Inert gas asphyxiation may be deliberate with use of a suicide bag. Accidental death has occurred in cases where concentrations of nitrogen in controlled atmospheres, or methane in mines, has not been detected or appreciated. 
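The 2% oxygen figure above is just partial-pressure arithmetic: the inspired oxygen partial pressure is the oxygen fraction multiplied by the ambient pressure, and in sea water the ambient pressure rises by roughly one bar for every 10 m of depth. A minimal sketch, assuming a 1 bar surface pressure:
function inspiredPpO2(oxygenFraction, depthMetresSeaWater) {
  const ambientBar = 1 + depthMetresSeaWater / 10; // ~1 bar at the surface plus ~1 bar per 10 msw
  return oxygenFraction * ambientBar;              // oxygen partial pressure in bar
}
// inspiredPpO2(0.21, 0) is about 0.21 bar: normal air at sea level.
// inspiredPpO2(0.02, 190) is 0.40 bar: the saturation-diving mixture mentioned above is breathable at depth.
// inspiredPpO2(0.02, 0) is 0.02 bar: the same mixture is severely hypoxic at the surface,
// which is why the gas must be re-oxygenated as the divers are decompressed.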
Other Hemoglobin's function can also be lost by chemically oxidizing its iron atom to its ferric form. This form of inactive hemoglobin is called methemoglobin and can be made by ingesting sodium nitrite as well as certain drugs and other chemicals. Anemia Hemoglobin plays a substantial role in carrying oxygen throughout the body, and when it is deficient, anemia can result, causing 'anaemic hypoxia' if tissue oxygenation is decreased. Iron deficiency is the most common cause of anemia. As iron is used in the synthesis of hemoglobin, less hemoglobin will be synthesised when there is less iron, due to insufficient intake, or poor absorption. Anemia is typically a chronic process that is compensated over time by increased levels of red blood cells via upregulated erythropoetin. A chronic hypoxic state can result from a poorly compensated anaemia. Histotoxic hypoxia Histotoxic hypoxia (also called histoxic hypoxia) is the inability of cells to take up or use oxygen from the bloodstream, despite physiologically normal delivery of oxygen to such cells and tissues. Histotoxic hypoxia results from tissue poisoning, such as that caused by cyanide (which acts by inhibiting cytochrome oxidase) and certain other poisons like hydrogen sulfide (byproduct of sewage and used in leather tanning). Mechanism Tissue hypoxia from low oxygen delivery may be due to low haemoglobin concentration (anaemic hypoxia), low cardiac output (stagnant hypoxia) or low haemoglobin saturation (hypoxic hypoxia). The consequence of oxygen deprivation in tissues is a switch to anaerobic metabolism at the cellular level. As such, reduced systemic blood flow may result in increased serum lactate. Serum lactate levels have been correlated with illness severity and mortality in critically ill adults and in ventilated neonates with respiratory distress. Physiological responses All vertebrates must maintain oxygen homeostasis to survive, and have evolved physiological systems to ensure adequate oxygenation of all tissues. In air breathing vertebrates this is based on lungs to acquire the oxygen, hemoglobin in red corpuscles to transport it, a vasculature to distribute, and a heart to deliver. Short term variations in the levels of oxygenation are sensed by chemoreceptor cells which respond by activating existing proteins, and over longer terms by regulation of gene transcription. Hypoxia is also involved in the pathogenesis of some common and severe pathologies. The most common causes of death in an aging population include myocardial infarction, stroke and cancer. These diseases share a common feature that limitation of oxygen availability contributes to the development of the pathology. Cells and organisms are also able to respond adaptively to hypoxic conditions, in ways that help them to cope with these adverse conditions. Several systems can sense oxygen concentration and may respond with adaptations to acute and long-term hypoxia. The systems activated by hypoxia usually help cells to survive and overcome the hypoxic conditions. Erythropoietin, which is produced in larger quantities by the kidneys under hypoxic conditions, is an essential hormone that stimulates production of red blood cells, which are the primary transporter of blood oxygen, and glycolytic enzymes are involved in anaerobic ATP formation. Hypoxia-inducible factors (HIFs) are transcription factors that respond to decreases in available oxygen in the cellular environment, or hypoxia. The HIF signaling cascade mediates the effects of hypoxia on the cell. 
Hypoxia often keeps cells from differentiating. However, hypoxia promotes the formation of blood vessels, and is important for the formation of a vascular system in embryos and tumors. The hypoxia in wounds also promotes the migration of keratinocytes and the restoration of the epithelium. It is therefore not surprising that HIF-1 modulation was identified as a promising treatment paradigm in wound healing. Exposure of a tissue to repeated short periods of hypoxia, between periods of normal oxygen levels, influences the tissue's later response to prolonged ischaemic exposure. Thus is known as ischaemic preconditioning, and it is known to occur in many tissues. Acute If oxygen delivery to cells is insufficient for the demand (hypoxia), electrons will be shifted to pyruvic acid in the process of lactic acid fermentation. This temporary measure (anaerobic metabolism) allows small amounts of energy to be released. Lactic acid build up (in tissues and blood) is a sign of inadequate mitochondrial oxygenation, which may be due to hypoxemia, poor blood flow (e.g., shock) or a combination of both. If severe or prolonged it could lead to cell death. In humans, hypoxia is detected by the peripheral chemoreceptors in the carotid body and aortic body, with the carotid body chemoreceptors being the major mediators of reflex responses to hypoxia. This response does not control ventilation rate at normal PO2, but below normal the activity of neurons innervating these receptors increases dramatically, so much as to override the signals from central chemoreceptors in the hypothalamus, increasing PO2 despite a falling PCO2 In most tissues of the body, the response to hypoxia is vasodilation. By widening the blood vessels, the tissue allows greater perfusion. By contrast, in the lungs, the response to hypoxia is vasoconstriction. This is known as hypoxic pulmonary vasoconstriction, or "HPV", and has the effect of redirecting blood away from poorly ventilated regions, which helps match perfusion to ventilation, giving a more even oxygenation of blood from different parts of the lungs. In conditions of hypoxic breathing gas, such as at high altitude, HPV is generalized over the entire lung, but with sustained exposure to generalized hypoxia, HPV is suppressed. Hypoxic ventilatory response (HVR) is the increase in ventilation induced by hypoxia that allows the body to take in and transport lower concentrations of oxygen at higher rates. It is initially elevated in lowlanders who travel to high altitude, but reduces significantly over time as people acclimatize. Chronic When the pulmonary capillary pressure remains elevated chronically (for at least 2 weeks), the lungs become even more resistant to pulmonary edema because the lymph vessels expand greatly, increasing their capability of carrying fluid away from the interstitial spaces perhaps as much as 10-fold. Therefore, in patients with chronic mitral stenosis, pulmonary capillary pressures of 40 to 45 mm Hg have been measured without the development of lethal pulmonary edema. There are several potential physiologic mechanisms for hypoxemia, but in patients with chronic obstructive pulmonary disease (COPD), ventilation/perfusion (V/Q) mismatching is most common, with or without alveolar hypoventilation, as indicated by arterial carbon dioxide concentration. 
Hypoxemia caused by V/Q mismatching in COPD is relatively easy to correct, and relatively small flow rates of supplemental oxygen (less than 3 L/min for the majority of patients) are required for long term oxygen therapy (LTOT). Hypoxemia normally stimulates ventilation and produces dyspnea, but these and the other signs and symptoms of hypoxia are sufficiently variable in COPD to limit their value in patient assessment. Chronic alveolar hypoxia is the main factor leading to development of cor pulmonale — right ventricular hypertrophy with or without overt right ventricular failure — in patients with COPD. Pulmonary hypertension adversely affects survival in COPD, proportional to resting mean pulmonary artery pressure elevation. Although the severity of airflow obstruction as measured by forced expiratory volume tests FEV1 correlates best with overall prognosis in COPD, chronic hypoxemia increases mortality and morbidity for any severity of disease. Large-scale studies of long term oxygen therapy in patients with COPD show a dose–response relationship between daily hours of supplemental oxygen use and survival. Continuous, 24-hours-per-day oxygen use in appropriately selected patients may produce a significant survival benefit. Pathological responses Cerebral ischemia The brain has relatively high energy requirements, using about 20% of the oxygen under resting conditions, but low reserves, which make it specially vulnerable to hypoxia. In normal conditions, an increased demand for oxygen is easily compensated by an increased cerebral blood flow. but under conditions when there is insufficient oxygen available, increased blood flow may not be sufficient to compensate, and hypoxia can result in brain injury. A longer duration of cerebral hypoxia will generally result in larger areas of the brain being affected. The brainstem, hippocampus and cerebral cortex seem to be the most vulnerable regions. Injury becomes irreversible if oxygenation is not soon restored. Most cell death is by necrosis but delayed apoptosis also occurs. In addition, presynaptic neurons release large amounts of glutamate which further increases Ca2+ influx and causes catastrophic collapse in postsynaptic cells. Although it is the only way to save the tissue, reperfusion also produces reactive oxygen species and inflammatory cell infiltration, which induces further cell death. If the hypoxia is not too severe, cells can suppress some of their functions, such as protein synthesis and spontaneous electrical activity, in a process called penumbra, which is reversible if the oxygen supply is resumed soon enough. Myocardial ischemia Parts of the heart are exposed to ischemic hypoxia in the event of occlusion of a coronary artery. Short periods of ischaemia are reversible if reperfused within about 20 minutes, without development of necrosis, but the phenomenon known as stunning is generally evident. If hypoxia continues beyond this period, necrosis propagates through the myocardial tissue. Energy metabolism in the affected area shifts from mitochondrial respiration to anaerobic glycolysis almost immediately, with concurrent reduction of effectiveness of contractions, which soon cease. Anaerobic products accumulate in the muscle cells, which develop acidosis and osmotic load leading to cellular edema. Intracellular Ca2+ increases and eventually leads to cell necrosis. 
Arterial flow must be restored to return to aerobic metabolism and prevent necrosis of the affected muscle cells, but this also causes further damage by reperfusion injury. Myocadial stunning has been described as "prolonged postischaemic dysfunction of viable tissue salvaged by reperfusion", which manifests as temporary contractile failure in oxygenated muscle tissue. This may be caused by a release of reactive oxygen species during the early stages of reperfusion. Tumor angiogenesis As tumors grow, regions of relative hypoxia develop as the oxygen supply is unevenly utilized by the tumor cells. The formation of new blood vessels is necessary for continued tumor growth, and is also an important factor in metastasis, as the route by which cancerous cells are transported to other sites. Diagnosis Physical examination and history Hypoxia can present as acute or chronic. Acute presentation may include dyspnea (shortness of breath) and tachypnea (rapid, often shallow, breathing). Severity of symptom presentation is commonly an indication of severity of hypoxia. Tachycardia (rapid pulse) may develop to compensate for low arterial oxygen tension. Stridor may be heard in upper airway obstruction, and cyanosis may indicate severe hypoxia. Neurological symptoms and organ function deterioration occur when the oxygen delivery is severely compromised. In moderate hypoxia, restlessness, headache and confusion may occur, with coma and eventual death possible in severe cases. In chronic presentation, dyspnea following exertion is most commonly mentioned. Symptoms of the underlying condition that caused the hypoxia may be apparent, and can help with differential diagnosis. A productive cough and fever may be present with lung infection, and leg edema may suggest heart failure. Lung auscultation can provide useful information. Tests An arterial blood gas test (ABG) may be done, which usually includes measurements of oxygen content, hemoglobin, oxygen saturation (how much of the hemoglobin is carrying oxygen), arterial partial pressure of oxygen (PaO2), partial pressure of carbon dioxide (PaCO2), blood pH level, and bicarbonate (HCO3) An arterial oxygen tension (PaO2) less than 80 mmHg is considered abnormal, but must be considered in context of the clinical situation. In addition to diagnosis of hypoxemia, the ABG may provide additional information, such as PCO2, which can help identify the etiology. The arterial partial pressure of carbon dioxide is an indirect measure of exchange of carbon dioxide with the air in the lungs, and is related to minute ventilation. PCO2 is raised in hypoventilation. The normal range of PaO2:FiO2 ratio is 300 to 500 mmHg, if this ratio is lower than 300 it may indicate a deficit in gas exchange, which is particularly relevant for identifying acute respiratory distress syndrome (ARDS). A ratio of less than 200 indicates severe hypoxemia. The alveolar–arterial gradient (A-aO2, or A–a gradient), is the difference between the alveolar (A) concentration of oxygen and the arterial (a) concentration of oxygen. It is a useful parameter for narrowing the differential diagnosis of hypoxemia. The A–a gradient helps to assess the integrity of the alveolar capillary unit. For example, at high altitude, the arterial oxygen PaO2 is low, but only because the alveolar oxygen PAO2 is also low. 
However, in states of ventilation perfusion mismatch, such as pulmonary embolism or right-to-left shunt, oxygen is not effectively transferred from the alveoli to the blood which results in an elevated A-a gradient. PaO2 can be obtained from the arterial blood gas analysis and PAO2 is calculated using the alveolar gas equation. An abnormally low hematocrit (volume percentage of red blood cells) may indicate anemia. X-rays or CT scans of the chest and airways can reveal abnormalities that may affect ventilation or perfusion. A ventilation/perfusion scan, also called a V/Q lung scan, is a type of medical imaging using scintigraphy and medical isotopes to evaluate the circulation of air and blood within a patient's lungs, in order to determine the ventilation/perfusion ratio. The ventilation part of the test looks at the ability of air to reach all parts of the lungs, while the perfusion part evaluates how well blood circulates within the lungs. Pulmonary function testing may include: Tests that measure oxygen levels during the night The six-minute walk test, which measures how far a person can walk on a flat surface in six minutes to test exercise capacity by measuring oxygen levels in response to exercise. Diagnostic measurements that may be relevant include: Lung volumes, including lung capacity, airway resistance, respiratory muscle strength, diffusing capacity Other pulmonary function tests which may be relevant include: Spirometry, body plethysmography, forced oscillation technique for calculating the volume, pressure, and air flow in the lungs, bronchodilator responsiveness, carbon monoxide diffusion test (DLCO), oxygen titration studies, cardiopulmonary stress test, bronchoscopy, and thoracentesis Differential diagnosis Treatment will depend on severity and may also depend on the cause, as some cases are due to external causes and removing them and treating acute symptoms may be sufficient, but where the symptoms are due to underlying pathology, treatment of the obvious symptoms may only provide temporary or partial relief, so differential diagnosis can be important in selecting definitive treatment. Hypoxemic hypoxia: Low oxygen tension in the arterial blood (PaO2) is generally an indication of inability of the lungs to properly oxygenate the blood. Internal causes include hypoventilation, impaired alveolar diffusion, and pulmonary shunting. External causes include hypoxic environment, which could be caused by low ambient pressure or unsuitable breathing gas. Both acute and chronic hypoxia and hypercapnia caused by respiratory dysfunction can produce neurological symptoms such as encephalopathy, seizures, headache, papilledema, and asterixis. Obstructive sleep apnea syndrome may cause morning headaches Circulatory Hypoxia: Caused by insufficient perfusion of the affected tissues by blood which is adequately oxygenated. This may be generalised, due to cardiac failure or hypovolemia, or localised, due to infarction or localised injury. Anemic Hypoxia is caused by a deficit in oxygen-carrying capacity, usually due to low hemoglobin levels, leading to generalised inadequate oxygen delivery. Histotoxic Hypoxia (Dysoxia) is a consequence of cells being unable to utilize oxygen effectively. A classic example is cyanide poisoning which inhibits the enzyme cytochrome C oxidase in the mitochondria, blocking the use of oxygen to make ATP. Critical illness polyneuropathy or myopathy should be considered in the intensive care unit when patients have difficulty coming off the ventilator. 
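The alveolar gas equation mentioned above is simple enough to evaluate directly. The sketch below uses the usual simplified form, with an assumed water vapour pressure of 47 mmHg at body temperature and a respiratory quotient of 0.8; these constants, and the worked numbers in the comments, are standard textbook values rather than figures taken from this article.
function alveolarPO2(fiO2, paCO2mmHg, barometricmmHg) {
  const waterVapour = 47;          // mmHg at body temperature (assumed)
  const respiratoryQuotient = 0.8; // assumed
  return fiO2 * (barometricmmHg - waterVapour) - paCO2mmHg / respiratoryQuotient;
}
function aAGradient(paO2mmHg, fiO2, paCO2mmHg, barometricmmHg) {
  return alveolarPO2(fiO2, paCO2mmHg, barometricmmHg) - paO2mmHg;
}
// Breathing air at sea level (FiO2 0.21, barometric 760 mmHg) with PaCO2 = 40 mmHg gives
// PAO2 of about 0.21 * (760 - 47) - 40 / 0.8, roughly 100 mmHg. A measured PaO2 of 95 mmHg then
// gives a small, normal A-a gradient, whereas a PaO2 of 60 mmHg gives a gradient of about 40 mmHg,
// pointing to a gas exchange problem rather than pure hypoventilation. The PaO2:FiO2 ratio for that
// second case is 60 / 0.21, about 286, just below the 300 to 500 mmHg range quoted above.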
Prevention Prevention can be as simple as risk management of occupational exposure to hypoxic environments, and commonly involves the use of environmental monitoring and personal protective equipment. Prevention of hypoxia as a predictable consequence of medical conditions requires prevention of those conditions. Screening of demographics known to be at risk for specific disorders may be useful. Prevention of altitude-induced hypoxia To counter the effects of high-altitude diseases, the body must return arterial PaO2 toward normal. Acclimatization, the means by which the body adapts to higher altitudes, only partially restores PO2 to standard levels. Hyperventilation, the body's most common response to high-altitude conditions, increases alveolar PO2 by raising the depth and rate of breathing. However, while PO2 does improve with hyperventilation, it does not return to normal. Studies of miners and astronomers working at 3000 meters and above show improved alveolar PO2 with full acclimatization, yet the PO2 level remains equal to or even below the threshold for continuous oxygen therapy for patients with chronic obstructive pulmonary disease (COPD). In addition, there are complications involved with acclimatization. Polycythemia, in which the body increases the number of red blood cells in circulation, thickens the blood, raising the risk of blood clots. In high-altitude situations, only oxygen enrichment or compartment pressurisation can counteract the effects of hypoxia. Pressurisation is practicable in vehicles, and for emergencies in ground installations. By increasing the concentration of oxygen in the air at ambient pressure, the effects of lower barometric pressure are countered and the level of arterial PO2 is restored toward normal capacity. A small amount of supplemental oxygen reduces the equivalent altitude in climate-controlled rooms. At 4000 m, raising the oxygen concentration level by 5% via an oxygen concentrator and an existing ventilation system provides an altitude equivalent of 3000 m, which is much more tolerable for the increasing number of lowlanders who work at high altitude. In a study of astronomers working in Chile at 5050 m, oxygen concentrators increased the level of oxygen concentration by almost 30 percent (that is, from 21 percent to 27 percent). This resulted in increased worker productivity, less fatigue, and improved sleep. Oxygen concentrators are suited for high-altitude oxygen enrichment of climate-controlled environments. They require little maintenance and electricity, utilise a locally available source of oxygen, and eliminate the expensive task of transporting oxygen cylinders to remote areas. Offices and housing often already have climate-controlled rooms, in which temperature and humidity are kept at a constant level. Treatment and management Treatment and management depend on circumstances. For most high altitude situations the risk is known, and prevention is appropriate. At low altitudes hypoxia is more likely to be associated with a medical problem or an unexpected contingency, and treatment is more likely to be provided to suit the specific case. It is necessary to identify persons who need oxygen therapy, as supplemental oxygen is required to treat most causes of hypoxia, but different oxygen concentrations may be appropriate. Treatment of acute and chronic cases Treatment will depend on the cause of hypoxia.
If it is determined that there is an external cause, and it can be removed, then treatment may be limited to support and returning the system to normal oxygenation. In other cases a longer course of treatment may be necessary, and this may require supplemental oxygen over a fairly long term or indefinitely. There are three main aspects of oxygenation treatment: maintaining patent airways, providing sufficient oxygen content of the inspired air, and improving the diffusion in the lungs. In some cases treatment may extend to improving the oxygen capacity of the blood, which may include volumetric and circulatory intervention and support, hyperbaric oxygen therapy and treatment of intoxication. Invasive ventilation may be necessary or an elective option in surgery. This generally involves a positive pressure ventilator connected to an endotracheal tube, and allows precise delivery of ventilation, accurate monitoring of FiO2, and positive end-expiratory pressure, and can be combined with anaesthetic gas delivery. In some cases a tracheotomy may be necessary. Decreasing metabolic rate by reducing body temperature lowers oxygen demand and consumption, and can minimise the effects of tissue hypoxia, especially in the brain, and therapeutic hypothermia based on this principle may be useful. Where the problem is due to respiratory failure, it is desirable to treat the underlying cause. In cases of pulmonary edema, diuretics can be used to reduce the edema. Steroids may be effective in some cases of interstitial lung disease, and in extreme cases, extracorporeal membrane oxygenation (ECMO) can be used. Hyperbaric oxygen has been found useful for treating some forms of localized hypoxia, including poorly perfused trauma injuries such as crush injury, compartment syndrome, and other acute traumatic ischemias. It is the definitive treatment for severe decompression sickness, which is largely a condition involving localized hypoxia initially caused by inert gas embolism and inflammatory reactions to extravascular bubble growth. It is also effective in carbon monoxide poisoning and diabetic foot. A prescription renewal for home oxygen following hospitalization requires an assessment of the patient for ongoing hypoxemia. Outcomes Prognosis is strongly affected by cause, severity, treatment, and underlying pathology. Hypoxia leading to reduced capacity to respond appropriately, or to loss of consciousness, has been implicated in incidents where the direct cause of death was not hypoxia. This is recorded in underwater diving incidents, where drowning has often been given as the cause of death, high altitude mountaineering, where exposure, hypothermia and falls have been consequences, flying in unpressurized aircraft, and aerobatic maneuvers, where loss of control leading to a crash is possible. Epidemiology Hypoxia is a common disorder but there are many possible causes. Prevalence is variable. Some of the causes are very common, like pneumonia or chronic obstructive pulmonary disease; some are quite rare, like hypoxia due to cyanide poisoning. Others, like reduced oxygen tension at high altitude, may be regionally distributed or associated with a specific demographic. Generalized hypoxia is an occupational hazard in several high-risk occupations, including firefighting, professional diving, mining and underground rescue, and flying at high altitudes in unpressurised aircraft. Potentially life-threatening hypoxemia is common in critically ill patients.
Localized hypoxia may be a complication of diabetes, decompression sickness, and of trauma that affects blood supply to the extremities. Hypoxia due to underdeveloped lung function is a common complication of premature birth. In the United States, intrauterine hypoxia and birth asphyxia were listed together as the tenth leading cause of neonatal death. Silent hypoxia Silent hypoxia (also known as happy hypoxia) is generalised hypoxia that does not coincide with shortness of breath. This presentation is known to be a complication of COVID-19, and is also known in atypical pneumonia, altitude sickness, and rebreather malfunction accidents. History The 2019 Nobel Prize in Physiology or Medicine was awarded to William G. Kaelin Jr., Sir Peter J. Ratcliffe, and Gregg L. Semenza in recognition of their discovery of cellular mechanisms to sense and adapt to different oxygen concentrations, establishing a basis for how oxygen levels affect physiological function. The use of the term hypoxia appears to be relatively recent, with the first recorded use in a scientific publication dating from 1945. Prior to this, the term anoxia was used extensively for all levels of oxygen deprivation. Investigation into the effects of lack of oxygen dates from the mid-19th century. Etymology Hypoxia is formed from the Greek roots υπo (hypo), meaning under, below, and less than, and oξυ (oxy), meaning acute or acid, which is the root for oxygen.
Biology and health sciences
Injury
null
13311
https://en.wikipedia.org/wiki/Hormone
Hormone
A hormone (from a Greek participle meaning "setting in motion") is a class of signaling molecules in multicellular organisms that are sent to distant organs or tissues by complex biological processes to regulate physiology and behavior. Hormones are required for the correct development of animals, plants and fungi. Due to the broad definition of a hormone (as a signaling molecule that exerts its effects far from its site of production), numerous kinds of molecules can be classified as hormones. Among the substances that can be considered hormones are eicosanoids (e.g. prostaglandins and thromboxanes), steroids (e.g. oestrogen and brassinosteroid), amino acid derivatives (e.g. epinephrine and auxin), proteins or peptides (e.g. insulin and CLE peptides), and gases (e.g. ethylene and nitric oxide). Hormones are used to communicate between organs and tissues. In vertebrates, hormones are responsible for regulating a wide range of processes including both physiological processes and behavioral activities such as digestion, metabolism, respiration, sensory perception, sleep, excretion, lactation, stress induction, growth and development, movement, reproduction, and mood manipulation. In plants, hormones modulate almost all aspects of development, from germination to senescence. Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. When a hormone binds to the receptor, it results in the activation of a signal transduction pathway that typically activates gene transcription, resulting in increased expression of target proteins. Hormones can also act in non-genomic pathways that synergize with genomic effects. Water-soluble hormones (such as peptides and amines) generally act on the surface of target cells via second messengers. Lipid-soluble hormones (such as steroids) generally pass through the plasma membranes of target cells (both cytoplasmic and nuclear) to act within their nuclei. Brassinosteroids, a type of polyhydroxysteroids, are a sixth class of plant hormones and may be useful as an anticancer drug for endocrine-responsive tumors to cause apoptosis and limit plant growth. Despite being lipid soluble, they nevertheless attach to their receptor at the cell surface. In vertebrates, endocrine glands are specialized organs that secrete hormones into the endocrine signaling system. Hormone secretion occurs in response to specific biochemical signals and is often subject to negative feedback regulation. For instance, high blood sugar (serum glucose concentration) promotes insulin synthesis. Insulin then acts to reduce glucose levels and maintain homeostasis, leading to reduced insulin levels. Upon secretion, water-soluble hormones are readily transported through the circulatory system. Lipid-soluble hormones must bind to carrier plasma glycoproteins (e.g., thyroxine-binding globulin (TBG)) to form ligand-protein complexes. Some hormones, such as insulin and growth hormones, can be released into the bloodstream already fully active. Other hormones, called prohormones, must be activated in certain cells through a series of steps that are usually tightly controlled. The endocrine system secretes hormones directly into the bloodstream, typically via fenestrated capillaries, whereas the exocrine system secretes its hormones indirectly using ducts. Hormones with paracrine function diffuse through the interstitial spaces to nearby target tissue.
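As a purely illustrative aid to the insulin example of negative feedback above, the toy model below treats hormone secretion and hormone action as two coupled update rules. Every constant in it is an arbitrary assumption chosen only to make the feedback visible, not a physiological value.

```python
# A deliberately simplified toy model (not physiology software) of negative feedback:
# glucose above a set point stimulates insulin secretion, and insulin lowers glucose.
# All constants are illustrative assumptions.

def simulate(steps=10, glucose=180.0, insulin=0.0,
             secretion_gain=0.05, clearance_per_unit_insulin=8.0,
             insulin_decay=0.3, glucose_setpoint=90.0):
    for t in range(steps):
        # secretion rises only while glucose is above the set point
        insulin += secretion_gain * max(glucose - glucose_setpoint, 0.0)
        insulin *= (1.0 - insulin_decay)                  # hormone breakdown
        glucose -= clearance_per_unit_insulin * insulin   # hormone action on the target
        glucose = max(glucose, glucose_setpoint)          # crude floor at the set point
        print(f"t={t} glucose={glucose:.1f} insulin={insulin:.2f}")

simulate()
```

Running it shows the characteristic pattern of a negative feedback loop: the controlled variable is driven back toward its set point, and the hormone level then decays once the stimulus is gone.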
Plants lack specialized organs for the secretion of hormones, although there is spatial distribution of hormone production. For example, the hormone auxin is produced mainly at the tips of young leaves and in the shoot apical meristem. The lack of specialised glands means that the main site of hormone production can change throughout the life of a plant, and the site of production is dependent on the plant's age and environment. Introduction and overview Hormone producing cells are found in the endocrine glands, such as the thyroid gland, ovaries, and testes. Hormonal signaling involves the following steps: Biosynthesis of a particular hormone in a particular tissue. Storage and secretion of the hormone. Transport of the hormone to the target cell(s). Recognition of the hormone by an associated cell membrane or intracellular receptor protein. Relay and amplification of the received hormonal signal via a signal transduction process: This then leads to a cellular response. The reaction of the target cells may then be recognized by the original hormone-producing cells, leading to a downregulation in hormone production. This is an example of a homeostatic negative feedback loop. Breakdown of the hormone. Exocytosis and other methods of membrane transport are used to secrete hormones when the endocrine glands are signaled. The hierarchical model is an oversimplification of the hormonal signaling process. Cellular recipients of a particular hormonal signal may be one of several cell types that reside within a number of different tissues, as is the case for insulin, which triggers a diverse range of systemic physiological effects. Different tissue types may also respond differently to the same hormonal signal. Discovery Arnold Adolph Berthold (1849) Arnold Adolph Berthold was a German physiologist and zoologist, who, in 1849, had a question about the function of the testes. He noticed in castrated roosters that they did not have the same sexual behaviors as roosters with their testes intact. He decided to run an experiment on male roosters to examine this phenomenon. He kept a group of roosters with their testes intact, and saw that they had normal sized wattles and combs (secondary sexual organs), a normal crow, and normal sexual and aggressive behaviors. He also had a group with their testes surgically removed, and noticed that their secondary sexual organs were decreased in size, had a weak crow, did not have sexual attraction towards females, and were not aggressive. He realized that this organ was essential for these behaviors, but he did not know how. To test this further, he removed one testis and placed it in the abdominal cavity. The roosters acted and had normal physical anatomy. He was able to see that location of the testes does not matter. He then wanted to see if it was a genetic factor that was involved in the testes that provided these functions. He transplanted a testis from another rooster to a rooster with one testis removed, and saw that they had normal behavior and physical anatomy as well. Berthold determined that the location or genetic factors of the testes do not matter in relation to sexual organs and behaviors, but that some chemical in the testes being secreted is causing this phenomenon. It was later identified that this factor was the hormone testosterone. Charles and Francis Darwin (1880) Although known primarily for his work on the Theory of Evolution, Charles Darwin was also keenly interested in plants. 
Through the 1870s, he and his son Francis studied the movement of plants towards light. They were able to show that light is perceived at the tip of a young stem (the coleoptile), whereas the bending occurs lower down the stem. They proposed that a 'transmissible substance' communicated the direction of light from the tip down to the stem. The idea of a 'transmissible substance' was initially dismissed by other plant biologists, but their work later led to the discovery of the first plant hormone. In the 1920s Dutch scientist Frits Warmolt Went and Russian scientist Nikolai Cholodny (working independently of each other) conclusively showed that asymmetric accumulation of a growth hormone was responsible for this bending. In 1933 this hormone was finally isolated by Kögl, Haagen-Smit and Erxleben and given the name 'auxin'. Oliver and Schäfer (1894) British physician George Oliver and physiologist Edward Albert Schäfer, professor at University College London, collaborated on the physiological effects of adrenal extracts. They first published their findings in two reports in 1894; a full publication followed in 1895. Though the discovery is frequently, and incorrectly, attributed to secretin, found in 1902 by Bayliss and Starling, the adrenaline contained in Oliver and Schäfer's adrenal extract, the substance causing the physiological changes they observed, was the first hormone to be discovered. The term hormone would later be coined by Starling. Bayliss and Starling (1902) William Bayliss and Ernest Starling, a physiologist and biologist, respectively, wanted to see if the nervous system had an impact on the digestive system. They knew that the pancreas was involved in the secretion of digestive fluids after the passage of food from the stomach to the intestines, which they believed to be due to the nervous system. They cut the nerves to the pancreas in an animal model and discovered that it was not nerve impulses that controlled secretion from the pancreas. It was determined that a factor secreted from the intestines into the bloodstream was stimulating the pancreas to secrete digestive fluids. This was named secretin: a hormone. Types of signaling Hormonal effects are dependent on where they are released, as they can be released in different manners. Not all hormones are released from a cell into the bloodstream before binding to a receptor on a target cell. The major types of hormone signaling are: Chemical classes As hormones are defined functionally, not structurally, they may have diverse chemical structures. Hormones occur in multicellular organisms (plants, animals, fungi, brown algae, and red algae). These compounds also occur in unicellular organisms, and may act as signaling molecules; however, there is no agreement that these molecules can be called hormones. Vertebrates Invertebrates Compared with vertebrates, insects and crustaceans possess a number of structurally unusual hormones such as the juvenile hormone, a sesquiterpenoid. Plants Examples include abscisic acid, auxin, cytokinin, ethylene, and gibberellin. Receptors Most hormones initiate a cellular response by initially binding to either cell surface receptors or intracellular receptors. A cell may have several different receptors that recognize the same hormone but activate different signal transduction pathways, or a cell may have several different receptors that recognize different hormones and activate the same biochemical pathway.
Receptors for most peptide as well as many eicosanoid hormones are embedded in the cell membrane as cell surface receptors, and the majority of these belong to the G protein-coupled receptor (GPCR) class of seven alpha helix transmembrane proteins. The interaction of hormone and receptor typically triggers a cascade of secondary effects within the cytoplasm of the cell, described as signal transduction, often involving phosphorylation or dephosphorylation of various other cytoplasmic proteins, changes in ion channel permeability, or increased concentrations of intracellular molecules that may act as secondary messengers (e.g., cyclic AMP). Some protein hormones also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism. For steroid or thyroid hormones, their receptors are located inside the cell within the cytoplasm of the target cell. These receptors belong to the nuclear receptor family of ligand-activated transcription factors. To bind their receptors, these hormones must first cross the cell membrane. They can do so because they are lipid-soluble. The combined hormone-receptor complex then moves across the nuclear membrane into the nucleus of the cell, where it binds to specific DNA sequences, regulating the expression of certain genes, and thereby increasing the levels of the proteins encoded by these genes. However, it has been shown that not all steroid receptors are located inside the cell. Some are associated with the plasma membrane. Effects in humans Hormones have the following effects on the body: stimulation or inhibition of growth; wake-sleep cycle and other circadian rhythms; mood swings; induction or suppression of apoptosis (programmed cell death); activation or inhibition of the immune system; regulation of metabolism; preparation of the body for mating, fighting, fleeing, and other activity; preparation of the body for a new phase of life, such as puberty, parenting, and menopause; control of the reproductive cycle; and hunger cravings. A hormone may also regulate the production and release of other hormones. Hormone signals control the internal environment of the body through homeostasis. Regulation The rate of hormone biosynthesis and secretion is often regulated by a homeostatic negative feedback control mechanism. Such a mechanism depends on factors that influence the metabolism and excretion of hormones. Thus, higher hormone concentration alone cannot trigger the negative feedback mechanism. Negative feedback must be triggered by overproduction of an "effect" of the hormone. Hormone secretion can be stimulated and inhibited by: other hormones (stimulating or releasing hormones); plasma concentrations of ions or nutrients, as well as binding globulins; neurons and mental activity; and environmental changes, e.g., of light or temperature. One special group of hormones is the tropic hormones that stimulate the hormone production of other endocrine glands. For example, thyroid-stimulating hormone (TSH) causes growth and increased activity of another endocrine gland, the thyroid, which increases output of thyroid hormones. To release active hormones quickly into the circulation, hormone biosynthetic cells may produce and store biologically inactive hormones in the form of pre- or prohormones. These can then be quickly converted into their active hormone form in response to a particular stimulus. Eicosanoids are considered to act as local hormones.
They are considered to be "local" because they possess specific effects on target cells close to their site of formation. They also have a rapid degradation cycle, making sure they do not reach distant sites within the body. Hormones are also regulated by receptor agonists. Hormones are ligands, which are any kinds of molecules that produce a signal by binding to a receptor site on a protein. Hormone effects can be inhibited, and thus regulated, by competing ligands that bind to the same target receptor as the hormone in question. When a competing ligand is bound to the receptor site, the hormone is unable to bind to that site and is unable to elicit a response from the target cell. These competing ligands are called antagonists of the hormone. Therapeutic use Many hormones and their structural and functional analogs are used as medication. The most commonly prescribed hormones are estrogens and progestogens (as methods of hormonal contraception and as HRT), thyroxine (as levothyroxine, for hypothyroidism) and steroids (for autoimmune diseases and several respiratory disorders). Insulin is used by many diabetics. Local preparations for use in otolaryngology often contain pharmacologic equivalents of adrenaline, while steroid and vitamin D creams are used extensively in dermatological practice. A "pharmacologic dose" or "supraphysiological dose" of a hormone is a medical usage referring to an amount of a hormone far greater than naturally occurs in a healthy body. The effects of pharmacologic doses of hormones may be different from responses to naturally occurring amounts and may be therapeutically useful, though not without potentially adverse side effects. An example is the ability of pharmacologic doses of glucocorticoids to suppress inflammation. Hormone-behavior interactions At the neurological level, behavior can be inferred based on hormone concentration, which in turn is influenced by hormone-release patterns; the numbers and locations of hormone receptors; and the efficiency of hormone receptors for those involved in gene transcription. Hormone concentration does not incite behavior, as that would undermine other external stimuli; however, it influences the system by increasing the probability that a certain event will occur. Not only can hormones influence behavior, but also behavior and the environment can influence hormone concentration. Thus, a feedback loop is formed, meaning behavior can affect hormone concentration, which in turn can affect behavior, which in turn can affect hormone concentration, and so on. For example, hormone-behavior feedback loops are essential in providing constancy to episodic hormone secretion, as the behaviors affected by episodically secreted hormones directly prevent the continuous release of said hormones. Three broad stages of reasoning may be used to determine if a specific hormone-behavior interaction is present within a system: The frequency of occurrence of a hormonally dependent behavior should correspond to that of its hormonal source. A hormonally dependent behavior is not expected if the hormonal source (or its types of action) is non-existent. The reintroduction of a missing behaviorally dependent hormonal source (or its types of action) is expected to bring back the absent behavior.
Comparison with neurotransmitters Though colloquially oftentimes used interchangeably, there are various clear distinctions between hormones and neurotransmitters: A hormone can perform functions over a larger spatial and temporal scale than can a neurotransmitter, which often acts in micrometer-scale distances. Hormonal signals can travel virtually anywhere in the circulatory system, whereas neural signals are restricted to pre-existing nerve tracts. Assuming the travel distance is equivalent, neural signals can be transmitted much more quickly (in the range of milliseconds) than can hormonal signals (in the range of seconds, minutes, or hours). Neural signals can be sent at speeds up to 100 meters per second. Neural signalling is an all-or-nothing (digital) action, whereas hormonal signalling is an action that can be continuously variable as it is dependent upon hormone concentration. Neurohormones are a type of hormone that share a commonality with neurotransmitters. They are produced by endocrine cells that receive input from neurons, or neuroendocrine cells. Both classic hormones and neurohormones are secreted by endocrine tissue; however, neurohormones are the result of a combination between endocrine reflexes and neural reflexes, creating a neuroendocrine pathway. While endocrine pathways produce chemical signals in the form of hormones, the neuroendocrine pathway involves the electrical signals of neurons. In this pathway, the result of the electrical signal produced by a neuron is the release of a chemical, which is the neurohormone. Finally, like a classic hormone, the neurohormone is released into the bloodstream to reach its target. Binding proteins Hormone transport and the involvement of binding proteins is an essential aspect when considering the function of hormones. The formation of a complex with a binding protein has several benefits: the effective half-life of the bound hormone is increased, and a reservoir of bound hormones is created, which evens the variations in concentration of unbound hormones (bound hormones will replace the unbound hormones when these are eliminated). An example of the usage of hormone-binding proteins is in the thyroxine-binding protein which carries up to 80% of all thyroxine in the body, a crucial element in regulating the metabolic rate.
Biology and health sciences
Chemistry
null
13435
https://en.wikipedia.org/wiki/Hydrology
Hydrology
Hydrology () is the scientific study of the movement, distribution, and management of water on Earth and other planets, including the water cycle, water resources, and drainage basin sustainability. A practitioner of hydrology is called a hydrologist. Hydrologists are scientists studying earth or environmental science, civil or environmental engineering, and physical geography. Using various analytical methods and scientific techniques, they collect and analyze data to help solve water-related problems such as environmental preservation, natural disasters, and water management. Hydrology subdivides into surface water hydrology, groundwater hydrology (hydrogeology), and marine hydrology. Domains of hydrology include hydrometeorology, surface hydrology, hydrogeology, drainage-basin management, and water quality. Oceanography and meteorology are not included because water is only one of many important aspects within those fields. Hydrological research can inform environmental engineering, policy, and planning. Branches Chemical hydrology is the study of the chemical characteristics of water. Ecohydrology is the study of interactions between organisms and the hydrologic cycle. Hydrogeology is the study of the presence and movement of groundwater. Hydrogeochemistry is the study of how terrestrial water dissolves minerals (weathering) and the effect of this on water chemistry. Hydroinformatics is the adaptation of information technology to hydrology and water resources applications. Hydrometeorology is the study of the transfer of water and energy between land and water body surfaces and the lower atmosphere. Isotope hydrology is the study of the isotopic signatures of water. Surface hydrology is the study of hydrologic processes that operate at or near Earth's surface. Drainage basin management covers water storage, in the form of reservoirs, and flood protection. Water quality includes the chemistry of water in rivers and lakes, both of pollutants and natural solutes. Applications Calculation of rainfall. Calculation of evapotranspiration. Calculating surface runoff and precipitation. Determining the water balance of a region. Determining the agricultural water balance. Designing riparian-zone restoration projects. Mitigating and predicting flood, landslide and drought risk. Real-time flood forecasting, flood warning, and flood frequency analysis. Designing irrigation schemes and managing agricultural productivity. Part of the hazard module in catastrophe modeling. Providing drinking water. Designing dams for water supply or hydroelectric power generation. Designing bridges. Designing sewers and urban drainage systems. Analyzing the impacts of antecedent moisture on sanitary sewer systems. Predicting geomorphologic changes, such as erosion or sedimentation. Assessing the impacts of natural and anthropogenic environmental change on water resources. Assessing contaminant transport risk and establishing environmental policy guidelines. Estimating the water resource potential of river basins. Water resources management. Water resources engineering - the application of hydrological and hydraulic principles to the planning, development, and management of water resources for beneficial human use. It involves assessing water availability, quality, and demand; designing and operating water infrastructure; and implementing strategies for sustainable water management. History Hydrology has been subject to investigation and engineering for millennia.
Ancient Egyptians were one of the first to employ hydrology in their engineering and agriculture, inventing a form of water management known as basin irrigation. Mesopotamian towns were protected from flooding with high earthen walls. Aqueducts were built by the Greeks and Romans, while history shows that the Chinese built irrigation and flood control works. The ancient Sinhalese used hydrology to build complex irrigation works in Sri Lanka, and are also known for the invention of the valve pit, which allowed the construction of large reservoirs, anicuts and canals which still function. Marcus Vitruvius, in the first century BC, described a philosophical theory of the hydrologic cycle, in which precipitation falling in the mountains infiltrated the Earth's surface and led to streams and springs in the lowlands. With the adoption of a more scientific approach, Leonardo da Vinci and Bernard Palissy independently reached an accurate representation of the hydrologic cycle. It was not until the 17th century that hydrologic variables began to be quantified. Pioneers of the modern science of hydrology include Pierre Perrault, Edme Mariotte and Edmund Halley. By measuring rainfall, runoff, and drainage area, Perrault showed that rainfall was sufficient to account for the flow of the Seine. Mariotte combined velocity and river cross-section measurements to obtain a discharge value, again in the Seine. Halley showed that the evaporation from the Mediterranean Sea was sufficient to account for the outflow of rivers flowing into the sea. Advances in the 18th century included the Bernoulli piezometer and Bernoulli's equation, by Daniel Bernoulli, and the Pitot tube, by Henri Pitot. The 19th century saw development in groundwater hydrology, including Darcy's law, the Dupuit-Thiem well formula, and the Hagen-Poiseuille capillary flow equation. Rational analyses began to replace empiricism in the 20th century, while governmental agencies began their own hydrological research programs. Of particular importance were Leroy Sherman's unit hydrograph, the infiltration theory of Robert E. Horton, and C.V. Theis' aquifer test/equation describing well hydraulics. Since the 1950s, hydrology has been approached with a more theoretical basis than in the past, facilitated by advances in the physical understanding of hydrological processes and by the advent of computers and especially geographic information systems (GIS).
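The quantitative arguments credited to Mariotte and Perrault above amount to two very simple calculations, sketched below with made-up numbers: discharge as velocity times cross-sectional area, and a catchment water balance in which precipitation is partitioned into runoff, evapotranspiration and storage change.

```python
# A minimal sketch of two basic hydrological calculations; all numbers are
# illustrative assumptions, not measurements.

def discharge(mean_velocity_m_s, cross_section_m2):
    """Volumetric discharge Q = v * A, in cubic metres per second."""
    return mean_velocity_m_s * cross_section_m2

def runoff_depth_mm(precip_mm, evapotranspiration_mm, storage_change_mm=0.0):
    """Simple water balance P = Q + ET + dS, rearranged for runoff Q (mm per year)."""
    return precip_mm - evapotranspiration_mm - storage_change_mm

# Example: a river reach 50 m2 in cross-section flowing at 0.8 m/s, and a
# catchment receiving 650 mm/year of rain with 420 mm/year of evapotranspiration.
print(discharge(0.8, 50))          # 40.0 m^3/s
print(runoff_depth_mm(650, 420))   # 230 mm/year available as runoff
```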
Physical sciences
Hydrology
null
13443
https://en.wikipedia.org/wiki/HTTP
HTTP
HTTP (Hypertext Transfer Protocol) is an application layer protocol in the Internet protocol suite model for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser. Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989 and summarized in a simple document describing the behavior of a client and a server using the first HTTP version, named 0.9. That version was subsequently developed, eventually becoming the public 1.0. Development of early HTTP Requests for Comments (RFCs) started a few years later in a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF. HTTP/1 was finalized and fully documented (as version 1.0) in 1996. It evolved (as version 1.1) in 1997 and then its specifications were updated in 1999, 2014, and 2022. Its secure variant named HTTPS is used by more than 85% of websites. HTTP/2, published in 2015, provides a more efficient expression of HTTP's semantics "on the wire". It is supported by 66.2% of websites (35.3% HTTP/2 + 30.9% HTTP/3 with backwards compatibility) and by almost all web browsers (over 98% of users). It is also supported by major web servers over Transport Layer Security (TLS) using an Application-Layer Protocol Negotiation (ALPN) extension where TLS 1.2 or newer is required. HTTP/3, the successor to HTTP/2, was published in 2022. It is now used on 30.9% of websites and is supported by most web browsers, i.e. (at least partially) supported by 97% of users. HTTP/3 uses QUIC instead of TCP for the underlying transport protocol. Like HTTP/2, it does not obsolete previous major versions of the protocol. Support for HTTP/3 was added to Cloudflare and Google Chrome first, and is also enabled in Firefox. HTTP/3 has lower latency for real-world web pages, if enabled on the server, and loads faster than with HTTP/2, in some cases over three times faster than HTTP/1.1 (which is still commonly the only version enabled). Technical overview HTTP functions as a request–response protocol in the client–server model. A web browser, for example, may be the client whereas a process, named web server, running on a computer hosting one or more websites may be the server. The client submits an HTTP request message to the server. The server, which provides resources such as HTML files and other content or performs other functions on behalf of the client, returns a response message to the client. The response contains completion status information about the request and may also contain requested content in its message body. A web browser is an example of a user agent (UA). Other types of user agent include the indexing software used by search providers (web crawlers), voice browsers, mobile apps, and other software that accesses, consumes, or displays web content. HTTP is designed to permit intermediate network elements to improve or enable communications between clients and servers. High-traffic websites often benefit from web cache servers that deliver content on behalf of upstream servers to improve response time. Web browsers cache previously accessed web resources and reuse them, whenever possible, to reduce network traffic.
HTTP proxy servers at private network boundaries can facilitate communication for clients without a globally routable address, by relaying messages with external servers. To allow intermediate HTTP nodes (proxy servers, web caches, etc.) to accomplish their functions, some of the HTTP headers (found in HTTP requests/responses) are managed hop-by-hop whereas other HTTP headers are managed end-to-end (managed only by the source client and by the target web server). HTTP is an application layer protocol designed within the framework of the Internet protocol suite. Its definition presumes an underlying and reliable transport layer protocol. In HTTP/3, the Transmission Control Protocol (TCP) is no longer used, but the older versions are still more widely used, and they most commonly use TCP. They have also been adapted to use unreliable protocols such as the User Datagram Protocol (UDP), which HTTP/3 also (indirectly) always builds on, for example in HTTPU and Simple Service Discovery Protocol (SSDP). HTTP resources are identified and located on the network by Uniform Resource Locators (URLs), using the Uniform Resource Identifiers (URIs) schemes http and https. As defined in , URIs are encoded as hyperlinks in HTML documents, so as to form interlinked hypertext documents. In HTTP/1.0 a separate TCP connection to the same server is made for every resource request. In HTTP/1.1 instead a TCP connection can be reused to make multiple resource requests (i.e. of HTML pages, frames, images, scripts, stylesheets, etc.). HTTP/1.1 communications therefore experience less latency as the establishment of TCP connections presents considerable overhead, especially under high traffic conditions. HTTP/2 is a revision of the previous HTTP/1.1 that maintains the same client–server model and the same protocol methods but with these differences, in order: to use a compressed binary representation of metadata (HTTP headers) instead of a textual one, so that headers require much less space; to use a single TCP/IP (usually encrypted) connection per accessed server domain instead of 2 to 8 TCP/IP connections; to use one or more bidirectional streams per TCP/IP connection in which HTTP requests and responses are broken down and transmitted in small packets, largely solving the problem of head-of-line blocking (HOLB); and to add a push capability allowing the server application to send data to clients whenever new data is available (without forcing clients to periodically request new data from the server by using polling methods). HTTP/2 communications therefore experience much less latency and, in most cases, even higher speeds than HTTP/1.1 communications. HTTP/3 is a revision of the previous HTTP/2 that uses QUIC + UDP transport protocols instead of TCP. Before that version, TCP/IP connections were used; but now, only the IP layer is used (which UDP, like TCP, builds on). This slightly improves the average speed of communications and avoids the occasional (very rare) problem of TCP connection congestion that can temporarily block or slow down the data flow of all its streams (another form of "head of line blocking"). History The term hypertext was coined by Ted Nelson in 1965 in the Xanadu Project, which was in turn inspired by Vannevar Bush's 1930s vision of the microfilm-based information retrieval and management "memex" system described in his 1945 essay "As We May Think".
Tim Berners-Lee and his team at CERN are credited with inventing the original HTTP, along with HTML and the associated technology for a web server and a client user interface called web browser. Berners-Lee designed HTTP in order to help with the adoption of his other idea: the "WorldWideWeb" project, which was first proposed in 1989, now known as the World Wide Web. The first web server went live in 1990. The protocol used had only one method, namely GET, which would request a page from a server. The response from the server was always an HTML page. Summary of HTTP milestone versions HTTP/0.9 In 1991, the first documented official version of HTTP was written as a plain document, less than 700 words long, and this version was named HTTP/0.9, which supported only the GET method, allowing clients to only retrieve HTML documents from the server, but not supporting any other file formats or information upload. HTTP/1.0-draft Starting in 1992, a new document was written to specify the evolution of the basic protocol towards its next full version. It supported both the simple request method of the 0.9 version and the full GET request that included the client HTTP version. This was the first of the many unofficial HTTP/1.0 drafts that preceded the final work on HTTP/1.0. W3C HTTP Working Group After having decided that new features of the HTTP protocol were required and that they had to be fully documented as official RFCs, in early 1995 the HTTP Working Group (HTTP WG, led by Dave Raggett) was constituted with the aim to standardize and expand the protocol with extended operations, extended negotiation, richer meta-information, tied with a security protocol which became more efficient by adding additional methods and header fields. The HTTP WG planned to revise and publish new versions of the protocol as HTTP/1.0 and HTTP/1.1 within 1995, but, because of the many revisions, that work took much longer than one year. The HTTP WG also planned to specify a far-future version of HTTP called HTTP-NG (HTTP Next Generation) that would have solved all remaining problems of previous versions related to performance, low-latency responses, etc., but this work started only a few years later and it was never completed. HTTP/1.0 In May 1996, was published as a final HTTP/1.0 revision of what had been used in the previous 4 years as a pre-standard HTTP/1.0-draft which was already used by many web browsers and web servers. In early 1996 developers started to even include unofficial extensions of the HTTP/1.0 protocol (i.e. keep-alive connections, etc.) into their products by using drafts of the upcoming HTTP/1.1 specifications. HTTP/1.1 Since early 1996, major web browser and web server developers also started to implement new features specified by pre-standard HTTP/1.1 draft specifications. End-user adoption of the new versions of browsers and servers was rapid. In March 1996, one web hosting company reported that over 40% of browsers in use on the Internet used the new HTTP/1.1 header "Host" to enable virtual hosting, and that by June 1996, 65% of all browsers accessing their servers were pre-standard HTTP/1.1 compliant. In January 1997, was officially released as the HTTP/1.1 specification. In June 1999, was released to include all improvements and updates based on the previous (obsolete) HTTP/1.1 specifications. W3C HTTP-NG Working Group Resuming the old 1995 plan of the previous HTTP Working Group, in 1997 an HTTP-NG Working Group was formed to develop a new HTTP protocol named HTTP-NG (HTTP New Generation).
A few proposals / drafts were produced for the new protocol to use multiplexing of HTTP transactions inside a single TCP/IP connection, but in 1999, the group stopped its activity, passing the technical problems to the IETF. IETF HTTP Working Group restarted In 2007, the IETF HTTP Working Group (HTTP WG bis or HTTPbis) was restarted firstly to revise and clarify previous HTTP/1.1 specifications and secondly to write and refine future HTTP/2 specifications (named httpbis). SPDY: an unofficial HTTP protocol developed by Google In 2009, Google, a private company, announced that it had developed and tested a new HTTP binary protocol named SPDY. The implicit aim was to greatly speed up web traffic (especially between future web browsers and its servers). SPDY was indeed much faster than HTTP/1.1 in many tests and so it was quickly adopted by Chromium and then by other major web browsers. Some of the ideas about multiplexing HTTP streams over a single TCP/IP connection were taken from various sources, including the work of the W3C HTTP-NG Working Group. HTTP/2 In January–March 2012, the HTTP Working Group (HTTPbis) announced the need to start to focus on a new HTTP/2 protocol (while finishing the revision of HTTP/1.1 specifications), maybe taking into consideration ideas and work done for SPDY. After a few months of discussion about what to do to develop a new version of HTTP, it was decided to derive it from SPDY. In May 2015, HTTP/2 was published as and quickly adopted by all web browsers already supporting SPDY and more slowly by web servers. 2014 updates to HTTP/1.1 In June 2014, the HTTP Working Group released an updated six-part HTTP/1.1 specification obsoleting : , HTTP/1.1: Message Syntax and Routing , HTTP/1.1: Semantics and Content , HTTP/1.1: Conditional Requests , HTTP/1.1: Range Requests , HTTP/1.1: Caching , HTTP/1.1: Authentication HTTP/0.9 Deprecation In Appendix-A, HTTP/0.9 was deprecated for servers supporting HTTP/1.1 version (and higher): Since 2016 many product managers and developers of user agents (browsers, etc.) and web servers have begun planning to gradually deprecate and dismiss support for the HTTP/0.9 protocol, mainly for the following reasons: it is so simple that an RFC document was never written (there is only the original document); it has no HTTP headers and lacks many other features that nowadays are required for minimal security reasons; it has not been widespread since 1999–2000 (because of HTTP/1.0 and HTTP/1.1) and is commonly used only by some very old network hardware, i.e. routers, etc. HTTP/3 In 2020, the first drafts of HTTP/3 were published and major web browsers and web servers started to adopt it. On 6 June 2022, IETF standardized HTTP/3 as . Updates and refactoring in 2022 In June 2022, a batch of RFCs was published, deprecating many of the previous documents and introducing a few minor changes and a refactoring of HTTP semantics description into a separate document. , HTTP Semantics , HTTP Caching , HTTP/1.1 , HTTP/2 , HTTP/3 (see also the section above) , QPACK: Field Compression for HTTP/3 , Extensible Prioritization Scheme for HTTP HTTP data exchange HTTP is a stateless application-level protocol and it requires a reliable network transport connection to exchange data between client and server. In HTTP implementations, TCP/IP connections are used on well-known ports (typically port 80 if the connection is unencrypted or port 443 if the connection is encrypted, see also List of TCP and UDP port numbers). In HTTP/2, a TCP/IP connection plus multiple protocol channels are used.
In HTTP/3, the application transport protocol QUIC over UDP is used. Request and response messages through connections Data is exchanged through a sequence of request–response messages which are exchanged by a session layer transport connection. An HTTP client initially tries to connect to a server establishing a connection (real or virtual). An HTTP(S) server listening on that port accepts the connection and then waits for a client's request message. The client sends its HTTP request message. Upon receiving the request the server sends back an HTTP response message, which includes header(s) plus a body if it is required. The body of this response message is typically the requested resource, although an error message or other information may also be returned. At any time (for many reasons) client or server can close the connection. Closing a connection is usually advertised in advance by using one or more HTTP headers in the last request/response message sent to server or client. Persistent connections In HTTP/0.9, the TCP/IP connection is always closed after server response has been sent, so it is never persistent. In HTTP/1.0, as stated in RFC 1945, the TCP/IP connection should always be closed by server after a response has been sent. In HTTP/1.1 a keep-alive-mechanism was officially introduced so that a connection could be reused for more than one request/response. Such persistent connections reduce request latency perceptibly because the client does not need to re-negotiate the TCP 3-Way-Handshake connection after the first request has been sent. Another positive side effect is that, in general, the connection becomes faster with time due to TCP's slow-start-mechanism. HTTP/1.1 added also HTTP pipelining in order to further reduce lag time when using persistent connections by allowing clients to send multiple requests before waiting for each response. This optimization was never considered really safe because a few web servers and many proxy servers, specially transparent proxy servers placed in Internet / Intranets between clients and servers, did not handle pipelined requests properly (they served only the first request discarding the others, they closed the connection because they saw more data after the first request or some proxies even returned responses out of order etc.). Because of this, only HEAD and some GET requests (i.e. limited to real file requests and so with URLs without query string used as a command, etc.) could be pipelined in a safe and idempotent mode. After many years of struggling with the problems introduced by enabling pipelining, this feature was first disabled and then removed from most browsers also because of the announced adoption of HTTP/2. HTTP/2 extended the usage of persistent connections by multiplexing many concurrent requests/responses through a single TCP/IP connection. HTTP/3 does not use TCP/IP connections but QUIC + UDP (see also: technical overview). Content retrieval optimizations HTTP/0.9 A requested resource was always sent in its entirety. HTTP/1.0 HTTP/1.0 added headers to manage resources cached by client in order to allow conditional GET requests; in practice a server has to return the entire content of the requested resource only if its last modified time is not known by client or if it changed since last full response to GET request. One of these headers, "Content-Encoding", was added to specify whether the returned content of a resource was or was not compressed. 
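As a concrete illustration of the persistent connections and conditional GET requests described above, the following sketch uses Python's standard http.client module. The host name example.com is only a placeholder, and a real server may or may not return 304 for the second request.

```python
# A minimal sketch (Python standard library) of two ideas described above:
# reusing one HTTP/1.1 connection for several requests (persistent connections)
# and a conditional GET using If-Modified-Since. The host name is illustrative.
import http.client

conn = http.client.HTTPConnection("example.com", 80)    # one TCP connection

# First request: plain GET.
conn.request("GET", "/")
resp = conn.getresponse()
body = resp.read()                                       # drain the body so the connection can be reused
last_modified = resp.getheader("Last-Modified")

# Second request on the same connection: conditional GET.
headers = {"If-Modified-Since": last_modified} if last_modified else {}
conn.request("GET", "/", headers=headers)
resp2 = conn.getresponse()
print(resp.status, len(body), "bytes;", resp2.status)    # 304 would mean "not modified", no body re-sent
resp2.read()
conn.close()
```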
If the total length of the content of a resource was not known in advance (i.e. because it was dynamically generated, etc.) then the header "Content-Length: number" was not present in HTTP headers and the client assumed that when server closed the connection, the content had been sent in its entirety. This mechanism could not distinguish between a resource transfer successfully completed and an interrupted one (because of a server / network error or something else). HTTP/1.1 HTTP/1.1 introduced: new headers to better manage the conditional retrieval of cached resources. chunked transfer encoding to allow content to be streamed in chunks in order to reliably send it even when the server does not know its length in advance (i.e. because it is dynamically generated, etc.). byte range serving, where a client can request only one or more portions (ranges of bytes) of a resource (i.e. the first part, a part in the middle or in the end of the entire content, etc.) and the server usually sends only the requested part(s). This is useful to resume an interrupted download (when a file is very large), when only a part of a content has to be shown or dynamically added to the already visible part by a browser (i.e. only the first or the following n comments of a web page) in order to spare time, bandwidth and system resources, etc. HTTP/2, HTTP/3 Both HTTP/2 and HTTP/3 have kept the above mentioned features of HTTP/1.1. HTTP authentication HTTP provides multiple authentication schemes such as basic access authentication and digest access authentication which operate via a challenge–response mechanism whereby the server identifies and issues a challenge before serving the requested content. HTTP provides a general framework for access control and authentication, via an extensible set of challenge–response authentication schemes, which can be used by a server to challenge a client request and by a client to provide authentication information. The authentication mechanisms described above belong to the HTTP protocol and are managed by client and server HTTP software (if configured to require authentication before allowing client access to one or more web resources), and not by the web applications using a web application session. Authentication realms The HTTP Authentication specification also provides an arbitrary, implementation-specific construct for further dividing resources common to a given root URI. The realm value string, if present, is combined with the canonical root URI to form the protection space component of the challenge. This in effect allows the server to define separate authentication scopes under one root URI. HTTP application session HTTP is a stateless protocol. A stateless protocol does not require the web server to retain information or status about each user for the duration of multiple requests. Some web applications need to manage user sessions, so they implement states, or server side sessions, using for instance HTTP cookies or hidden variables within web forms. To start an application user session, an interactive authentication via web application login must be performed. To stop a user session a logout operation must be requested by user. These kind of operations do not use HTTP authentication but a custom managed web application authentication. HTTP/1.1 request messages Request messages are sent by a client to a target server. 
Request syntax A client sends request messages to the server, which consist of: a request line, consisting of the case-sensitive request method, a space, the requested URI, another space, the protocol version, a carriage return, and a line feed, e.g.: zero or more request header fields (at least 1 or more headers in case of HTTP/1.1), each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, an optional trailing whitespace and ending with a carriage return and a line feed, e.g.: Host: www.example.com Accept-Language: en an empty line, consisting of a carriage return and a line feed; an optional message body. In the HTTP/1.1 protocol, all header fields except Host: hostname are optional. A request line containing only the path name is accepted by servers to maintain compatibility with HTTP clients before the HTTP/1.0 specification in . Request methods HTTP defines methods (sometimes referred to as verbs, but nowhere in the specification does it mention verb) to indicate the desired action to be performed on the identified resource. What this resource represents, whether pre-existing data or data that is generated dynamically, depends on the implementation of the server. Often, the resource corresponds to a file or the output of an executable residing on the server. The HTTP/1.0 specification defined the GET, HEAD, and POST methods as well as listing the PUT, DELETE, LINK and UNLINK methods under additional methods. However, the HTTP/1.1 specification formally defined and added five new methods: PUT, DELETE, CONNECT, OPTIONS, and TRACE. Any client can use any method and the server can be configured to support any combination of methods. If a method is unknown to an intermediate, it will be treated as an unsafe and non-idempotent method. There is no limit to the number of methods that can be defined, which allows for future methods to be specified without breaking existing infrastructure. For example, WebDAV defined seven new methods and specified the PATCH method. Method names are case sensitive. This is in contrast to HTTP header field names which are case-insensitive. GET The GET method requests that the target resource transfer a representation of its state. GET requests should only retrieve data and should have no other effect. (This is also true of some other HTTP methods.) For retrieving resources without making changes, GET is preferred over POST, as they can be addressed through a URL. This enables bookmarking and sharing and makes GET responses eligible for caching, which can save bandwidth. The W3C has published guidance principles on this distinction, saying, "Web application design should be informed by the above principles, but also by the relevant limitations." See safe methods below. HEAD The HEAD method requests that the target resource transfer a representation of its state, as for a GET request, but without the representation data enclosed in the response body. This is useful for retrieving the representation metadata in the response header, without having to transfer the entire representation. Uses include checking whether a page is available through the status code and quickly finding the size of a file (Content-Length). POST The POST method requests that the target resource process the representation enclosed in the request according to the semantics of the target resource. For example, it is used for posting a message to an Internet forum, subscribing to a mailing list, or completing an online shopping transaction. 
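A complete request message following the syntax just described might look like the sketch below, written here over a raw socket in Python so that the request line, the header fields, and the empty line that terminates them are visible byte for byte; example.com and the path are placeholders.

```python
# A sketch of the request syntax described above, sent over a raw socket so the
# request line, header fields, and terminating empty line appear exactly as they
# do on the wire. "example.com" and "/index.html" are illustrative.
import socket

request = (
    b"GET /index.html HTTP/1.1\r\n"   # request line: method, URI, protocol version
    b"Host: example.com\r\n"          # the only mandatory header field in HTTP/1.1
    b"Accept-Language: en\r\n"
    b"Connection: close\r\n"          # ask the server to close after responding
    b"\r\n"                           # empty line ends the header section
)

with socket.create_connection(("example.com", 80)) as sock:
    sock.sendall(request)
    reply = b""
    while chunk := sock.recv(4096):   # read until the server closes the connection
        reply += chunk

print(reply.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"
```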
PUT The PUT method requests that the target resource create or update its state with the state defined by the representation enclosed in the request. A distinction from POST is that the client specifies the target location on the server. DELETE The DELETE method requests that the target resource delete its state. CONNECT The CONNECT method requests that the intermediary establish a TCP/IP tunnel to the origin server identified by the request target. It is often used to secure connections through one or more HTTP proxies with TLS. See HTTP CONNECT method. OPTIONS The OPTIONS method requests that the target resource transfer the HTTP methods that it supports. This can be used to check the functionality of a web server by requesting '*' instead of a specific resource. TRACE The TRACE method requests that the target resource transfer the received request in the response body. That way a client can see what (if any) changes or additions have been made by intermediaries. PATCH The PATCH method requests that the target resource modify its state according to the partial update defined in the representation enclosed in the request. This can save bandwidth by updating a part of a file or document without having to transfer it entirely. All general-purpose web servers are required to implement at least the GET and HEAD methods, and all other methods are considered optional by the specification. Safe methods A request method is safe if a request with that method has no intended effect on the server. The methods GET, HEAD, OPTIONS, and TRACE are defined as safe. In other words, safe methods are intended to be read-only. Safe methods can still have side effects not seen by the client, such as appending request information to a log file or charging an advertising account. In contrast, the methods POST, PUT, DELETE, CONNECT, and PATCH are not safe. They may modify the state of the server or have other effects such as sending an email. Such methods are therefore not usually used by conforming web robots or web crawlers; some that do not conform tend to make requests without regard to context or consequences. Despite the prescribed safety of GET requests, in practice their handling by the server is not technically limited in any way. Careless or deliberately irregular programming can allow GET requests to cause non-trivial changes on the server. This is discouraged because of the problems which can occur when web caching, search engines, and other automated agents make unintended changes on the server. For example, a website might allow deletion of a resource through a URL such as https://example.com/article/1234/delete, which, if arbitrarily fetched, even using GET, would simply delete the article. A properly coded website would require a DELETE or POST method for this action, which non-malicious bots would not make. One example of this occurring in practice was during the short-lived Google Web Accelerator beta, which prefetched arbitrary URLs on the page a user was viewing, causing records to be automatically altered or deleted en masse. The beta was suspended only weeks after its first release, following widespread criticism. Idempotent methods A request method is idempotent if multiple identical requests with that method have the same effect as a single such request. The methods PUT and DELETE, and safe methods are defined as idempotent. 
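A brief sketch of the GET/HEAD distinction described above, assuming Python's standard http.client and the placeholder host used elsewhere in this article: a HEAD request returns the status code and headers (including Content-Length) that the corresponding GET response would carry, but no body:

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("HEAD", "/")                  # same semantics as GET, but no body
resp = conn.getresponse()
print(resp.status)                         # e.g. 200 if the page is available
print(resp.getheader("Content-Length"))    # size the GET body would have
print(resp.read())                         # b'' (HEAD responses carry no body)
conn.close()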
Safe methods are trivially idempotent, since they are intended to have no effect on the server whatsoever; the PUT and DELETE methods, meanwhile, are idempotent since successive identical requests will be ignored. A website might, for instance, set up a PUT endpoint to modify a user's recorded email address. If this endpoint is configured correctly, any requests which ask to change a user's email address to the same email address which is already recorded—e.g. duplicate requests following a successful request—will have no effect. Similarly, a request to DELETE a certain user will have no effect if that user has already been deleted. In contrast, the methods POST, CONNECT, and PATCH are not necessarily idempotent, and therefore sending an identical POST request multiple times may further modify the state of the server or have further effects, such as sending multiple emails. In some cases this is the desired effect, but in other cases it may occur accidentally. A user might, for example, inadvertently send multiple POST requests by clicking a button again if they were not given clear feedback that the first click was being processed. While web browsers may show alert dialog boxes to warn users in some cases where reloading a page may re-submit a POST request, it is generally up to the web application to handle cases where a POST request should not be submitted more than once. Note that whether or not a method is idempotent is not enforced by the protocol or web server. It is perfectly possible to write a web application in which (for example) a database insert or other non-idempotent action is triggered by a GET or other request. To do so against recommendations, however, may result in undesirable consequences, if a user agent assumes that repeating the same request is safe when it is not. Cacheable methods A request method is cacheable if responses to requests with that method may be stored for future reuse. The methods GET, HEAD, and POST are defined as cacheable. In contrast, the methods PUT, DELETE, CONNECT, OPTIONS, TRACE, and PATCH are not cacheable. Request header fields Request header fields allow the client to pass additional information beyond the request line, acting as request modifiers (similarly to the parameters of a procedure). They give information about the client, about the target resource, or about the expected handling of the request. HTTP/1.1 response messages A response message is sent by a server to a client as a reply to its former request message. Response syntax A server sends response messages to the client, which consist of: a status line, consisting of the protocol version, a space, the response status code, another space, a possibly empty reason phrase, a carriage return and a line feed, e.g.: HTTP/1.1 200 OK zero or more response header fields, each consisting of the case-insensitive field name, a colon, optional leading whitespace, the field value, an optional trailing whitespace and ending with a carriage return and a line feed, e.g.: Content-Type: text/html an empty line, consisting of a carriage return and a line feed; an optional message body. Response status codes In HTTP/1.0 and since, the first line of the HTTP response is called the status line and includes a numeric status code (such as "404") and a textual reason phrase (such as "Not Found"). The response status code is a three-digit integer code representing the result of the server's attempt to understand and satisfy the client's corresponding request. 
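The idempotency contrast can be shown with a toy sketch (illustrative only, not real server code; the names users, put_email and post_comment are invented for this example): the PUT-style handler leaves the server state unchanged when repeated, while the POST-style handler adds a new entry on every call:

# Toy in-memory "server state" used only to illustrate idempotency.
users = {"alice": {"email": "old@example.com"}}
comments = []

def put_email(user, email):
    # Idempotent: repeating the identical request leaves the state unchanged.
    users[user]["email"] = email

def post_comment(user, text):
    # Not idempotent: each repetition appends another entry.
    comments.append((user, text))

put_email("alice", "new@example.com")
put_email("alice", "new@example.com")   # duplicate request, no further change
post_comment("alice", "hello")
post_comment("alice", "hello")          # duplicate request, second comment added
print(users["alice"]["email"], len(comments))   # new@example.com 2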
The way the client handles the response depends primarily on the status code, and secondarily on the other response header fields. Clients may not understand all registered status codes but they must understand their class (given by the first digit of the status code) and treat an unrecognized status code as being equivalent to the x00 status code of that class. The standard reason phrases are only recommendations, and can be replaced with "local equivalents" at the web developer's discretion. If the status code indicated a problem, the user agent might display the reason phrase to the user to provide further information about the nature of the problem. The standard also allows the user agent to attempt to interpret the reason phrase, though this might be unwise since the standard explicitly specifies that status codes are machine-readable and reason phrases are human-readable. The first digit of the status code defines its class: 1XX (informational) The request was received, continuing process. 2XX (successful) The request was successfully received, understood, and accepted. 3XX (redirection) Further action needs to be taken in order to complete the request. 4XX (client error) The request contains bad syntax or cannot be fulfilled. 5XX (server error) The server failed to fulfill an apparently valid request. Response header fields The response header fields allow the server to pass additional information beyond the status line, acting as response modifiers. They give information about the server or about further access to the target resource or related resources. Each response header field has a defined meaning which can be further refined by the semantics of the request method or response status code. HTTP/1.1 example of request / response transaction Below is a sample HTTP transaction between an HTTP/1.1 client and an HTTP/1.1 server running on www.example.com, port 80. Client request GET / HTTP/1.1 Host: www.example.com User-Agent: Mozilla/5.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8 Accept-Language: en-GB,en;q=0.5 Accept-Encoding: gzip, deflate, br Connection: keep-alive A client request (consisting in this case of the request line and a few headers that can be reduced to only the "Host: hostname" header) is followed by a blank line, so that the request ends with a double end of line, each in the form of a carriage return followed by a line feed. The "Host: hostname" header value distinguishes between various DNS names sharing a single IP address, allowing name-based virtual hosting. While optional in HTTP/1.0, it is mandatory in HTTP/1.1. (A "/" (slash) will usually fetch a /index.html file if there is one.) Server response HTTP/1.1 200 OK Date: Mon, 23 May 2005 22:38:34 GMT Content-Type: text/html; charset=UTF-8 Content-Length: 155 Last-Modified: Wed, 08 Jan 2003 23:11:55 GMT Server: Apache/1.3.3.7 (Unix) (Red-Hat/Linux) ETag: "3f80f-1b6-3e1cb03b" Accept-Ranges: bytes Connection: close <html> <head> <title>An Example Page</title> </head> <body> <p>Hello World, this is a very simple HTML document.</p> </body> </html> The ETag (entity tag) header field is used to determine if a cached version of the requested resource is identical to the current version of the resource on the server. "Content-Type" specifies the Internet media type of the data conveyed by the HTTP message, while "Content-Length" indicates its length in bytes. 
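The ETag header shown in the server response above is what makes conditional revalidation possible. The following sketch (again assuming Python's http.client; the ETag value is copied from the example response) asks the server to resend the resource only if it has changed; a server still holding the same representation can reply 304 Not Modified with no body:

import http.client

etag = '"3f80f-1b6-3e1cb03b"'   # value taken from the example response above

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/", headers={"If-None-Match": etag})
resp = conn.getresponse()

if resp.status == 304:
    print("Cached copy is still current; no body was transferred")
else:
    body = resp.read()          # the resource changed (or the ETag was ignored)
    print("New representation, ETag:", resp.getheader("ETag"))
conn.close()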
The HTTP/1.1 web server publishes its ability to respond to requests for certain byte ranges of the document by setting the field "Accept-Ranges: bytes". This is useful if the client needs only certain portions of a resource sent by the server, which is called byte serving. When "Connection: close" is sent, it means that the web server will close the TCP connection immediately after the end of the transfer of this response. Most of the header lines are optional, but some are mandatory. When the header "Content-Length: number" is missing in a response with an entity body, this should be considered an error in HTTP/1.0, but it may not be an error in HTTP/1.1 if the header "Transfer-Encoding: chunked" is present. Chunked transfer encoding uses a chunk size of 0 to mark the end of the content, as sketched below. Some old implementations of HTTP/1.0 omitted the header "Content-Length" when the length of the body entity was not known at the beginning of the response, and so the transfer of data to the client continued until the server closed the socket. A "Content-Encoding: gzip" header can be used to inform the client that the entity body of the transmitted data is compressed by the gzip algorithm. Encrypted connections The most popular way of establishing an encrypted HTTP connection is HTTPS. Two other methods for establishing an encrypted HTTP connection also exist: Secure Hypertext Transfer Protocol, and using the HTTP/1.1 Upgrade header to specify an upgrade to TLS. Browser support for these two is, however, nearly non-existent. Similar protocols The Gopher protocol is a content delivery protocol that was displaced by HTTP in the early 1990s. The SPDY protocol is an alternative to HTTP developed at Google, superseded by HTTP/2. The Gemini protocol is a Gopher-inspired protocol which mandates privacy-related features.
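As a rough sketch of the chunked transfer encoding mentioned above (simplified: chunk extensions and trailer fields are ignored), the following Python function decodes a chunked message body, stopping at the zero-size chunk that marks the end of the content:

def decode_chunked(raw: bytes) -> bytes:
    """Decode a chunked HTTP message body (no extensions or trailers)."""
    body = b""
    pos = 0
    while True:
        line_end = raw.index(b"\r\n", pos)
        size = int(raw[pos:line_end], 16)   # the chunk size is hexadecimal
        if size == 0:                       # zero-size chunk ends the content
            break
        start = line_end + 2
        body += raw[start:start + size]
        pos = start + size + 2              # skip the CRLF after the chunk data
    return body

# Two chunks ("Wiki" and "pedia") followed by the terminating zero-size chunk.
example = b"4\r\nWiki\r\n5\r\npedia\r\n0\r\n\r\n"
print(decode_chunked(example))              # b'Wikipedia'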
Technology
Internet
null
13457
https://en.wikipedia.org/wiki/Heredity
Heredity
Heredity, also called inheritance or biological inheritance, is the passing on of traits from parents to their offspring; either through asexual reproduction or sexual reproduction, the offspring cells or organisms acquire the genetic information of their parents. Through heredity, variations between individuals can accumulate and cause species to evolve by natural selection. The study of heredity in biology is genetics. Overview In humans, eye color is an example of an inherited characteristic: an individual might inherit the "brown-eye trait" from one of the parents. Inherited traits are controlled by genes and the complete set of genes within an organism's genome is called its genotype. The complete set of observable traits of the structure and behavior of an organism is called its phenotype. These traits arise from the interaction of the organism's genotype with the environment. As a result, many aspects of an organism's phenotype are not inherited. For example, suntanned skin derives from the interaction between a person's genotype and sunlight; thus, suntans are not passed on to people's children. However, some people tan more easily than others, due to differences in their genotype: a striking example is people with the inherited trait of albinism, who do not tan at all and are very sensitive to sunburn. Heritable traits are known to be passed from one generation to the next via DNA, a molecule that encodes genetic information. DNA is a long polymer that incorporates four types of bases, which are interchangeable. The Nucleic acid sequence (the sequence of bases along a particular DNA molecule) specifies the genetic information: this is comparable to a sequence of letters spelling out a passage of text. Before a cell divides through mitosis, the DNA is copied, so that each of the resulting two cells will inherit the DNA sequence. A portion of a DNA molecule that specifies a single functional unit is called a gene; different genes have different sequences of bases. Within cells, the long strands of DNA form condensed structures called chromosomes. Organisms inherit genetic material from their parents in the form of homologous chromosomes, containing a unique combination of DNA sequences that code for genes. The specific location of a DNA sequence within a chromosome is known as a locus. If the DNA sequence at a particular locus varies between individuals, the different forms of this sequence are called alleles. DNA sequences can change through mutations, producing new alleles. If a mutation occurs within a gene, the new allele may affect the trait that the gene controls, altering the phenotype of the organism. However, while this simple correspondence between an allele and a trait works in some cases, most traits are more complex and are controlled by multiple interacting genes within and among organisms. Developmental biologists suggest that complex interactions in genetic networks and communication among cells can lead to heritable variations that may underlie some of the mechanics in developmental plasticity and canalization. Recent findings have confirmed important examples of heritable changes that cannot be explained by direct agency of the DNA molecule. These phenomena are classed as epigenetic inheritance systems that are causally or independently evolving over genes. 
Research into modes and mechanisms of epigenetic inheritance is still in its scientific infancy, but this area of research has attracted much recent activity as it broadens the scope of heritability and evolutionary biology in general. DNA methylation marking chromatin, self-sustaining metabolic loops, gene silencing by RNA interference, and the three dimensional conformation of proteins (such as prions) are areas where epigenetic inheritance systems have been discovered at the organismic level. Heritability may also occur at even larger scales. For example, ecological inheritance through the process of niche construction is defined by the regular and repeated activities of organisms in their environment. This generates a legacy of effect that modifies and feeds back into the selection regime of subsequent generations. Descendants inherit genes plus environmental characteristics generated by the ecological actions of ancestors. Other examples of heritability in evolution that are not under the direct control of genes include the inheritance of cultural traits, group heritability, and symbiogenesis. These examples of heritability that operate above the gene are covered broadly under the title of multilevel or hierarchical selection, which has been a subject of intense debate in the history of evolutionary science. Relation to theory of evolution When Charles Darwin proposed his theory of evolution in 1859, one of its major problems was the lack of an underlying mechanism for heredity. Darwin believed in a mix of blending inheritance and the inheritance of acquired traits (pangenesis). Blending inheritance would lead to uniformity across populations in only a few generations and then would remove variation from a population on which natural selection could act. This led to Darwin adopting some Lamarckian ideas in later editions of On the Origin of Species and his later biological works. Darwin's primary approach to heredity was to outline how it appeared to work (noticing that traits that were not expressed explicitly in the parent at the time of reproduction could be inherited, that certain traits could be sex-linked, etc.) rather than suggesting mechanisms. Darwin's initial model of heredity was adopted by, and then heavily modified by, his cousin Francis Galton, who laid the framework for the biometric school of heredity. Galton found no evidence to support the aspects of Darwin's pangenesis model, which relied on acquired traits. The inheritance of acquired traits was shown to have little basis in the 1880s when August Weismann cut the tails off many generations of mice and found that their offspring continued to develop tails. History Scientists in Antiquity had a variety of ideas about heredity: Theophrastus proposed that male flowers caused female flowers to ripen; Hippocrates speculated that "seeds" were produced by various body parts and transmitted to offspring at the time of conception; and Aristotle thought that male and female fluids mixed at conception. Aeschylus, in 458 BC, proposed the male as the parent, with the female as a "nurse for the young life sown within her". Ancient understandings of heredity transitioned to two debated doctrines in the 18th century. The Doctrine of Epigenesis and the Doctrine of Preformation were two distinct views of the understanding of heredity. The Doctrine of Epigenesis, originated by Aristotle, claimed that an embryo continually develops. The modifications of the parent's traits are passed off to an embryo during its lifetime. 
The foundation of this doctrine was based on the theory of inheritance of acquired traits. In direct opposition, the Doctrine of Preformation claimed that "like generates like" where the germ would evolve to yield offspring similar to the parents. The Preformationist view believed procreation was an act of revealing what had been created long before. However, this was disputed by the creation of the cell theory in the 19th century, where the fundamental unit of life is the cell, and not some preformed parts of an organism. Various hereditary mechanisms, including blending inheritance were also envisaged without being properly tested or quantified, and were later disputed. Nevertheless, people were able to develop domestic breeds of animals as well as crops through artificial selection. The inheritance of acquired traits also formed a part of early Lamarckian ideas on evolution. During the 18th century, Dutch microscopist Antonie van Leeuwenhoek (1632–1723) discovered "animalcules" in the sperm of humans and other animals. Some scientists speculated they saw a "little man" (homunculus) inside each sperm. These scientists formed a school of thought known as the "spermists". They contended the only contributions of the female to the next generation were the womb in which the homunculus grew, and prenatal influences of the womb. An opposing school of thought, the ovists, believed that the future human was in the egg, and that sperm merely stimulated the growth of the egg. Ovists thought women carried eggs containing boy and girl children, and that the gender of the offspring was determined well before conception. An early research initiative emerged in 1878 when Alpheus Hyatt led an investigation to study the laws of heredity through compiling data on family phenotypes (nose size, ear shape, etc.) and expression of pathological conditions and abnormal characteristics, particularly with respect to the age of appearance. One of the projects aims was to tabulate data to better understand why certain traits are consistently expressed while others are highly irregular. Gregor Mendel: father of genetics The idea of particulate inheritance of genes can be attributed to the Moravian monk Gregor Mendel who published his work on pea plants in 1865. However, his work was not widely known and was rediscovered in 1901. It was initially assumed that Mendelian inheritance only accounted for large (qualitative) differences, such as those seen by Mendel in his pea plants – and the idea of additive effect of (quantitative) genes was not realised until R.A. Fisher's (1918) paper, "The Correlation Between Relatives on the Supposition of Mendelian Inheritance" Mendel's overall contribution gave scientists a useful overview that traits were inheritable. His pea plant demonstration became the foundation of the study of Mendelian Traits. These traits can be traced on a single locus. Modern development of genetics and heredity In the 1930s, work by Fisher and others resulted in a combination of Mendelian and biometric schools into the modern evolutionary synthesis. The modern synthesis bridged the gap between experimental geneticists and naturalists; and between both and palaeontologists, stating that: All evolutionary phenomena can be explained in a way consistent with known genetic mechanisms and the observational evidence of naturalists. Evolution is gradual: small genetic changes, recombination ordered by natural selection. 
Discontinuities amongst species (or other taxa) are explained as originating gradually through geographical separation and extinction (not saltation). Selection is overwhelmingly the main mechanism of change; even slight advantages are important when continued. The object of selection is the phenotype in its surrounding environment. The role of genetic drift is equivocal; though strongly supported initially by Dobzhansky, it was downgraded later as results from ecological genetics were obtained. The primacy of population thinking: the genetic diversity carried in natural populations is a key factor in evolution. The strength of natural selection in the wild was greater than expected; the effect of ecological factors such as niche occupation and the significance of barriers to gene flow are all important. The idea that speciation occurs after populations are reproductively isolated has been much debated. In plants, polyploidy must be included in any view of speciation. Formulations such as 'evolution consists primarily of changes in the frequencies of alleles between one generation and another' were proposed rather later. The traditional view is that developmental biology ('evo-devo') played little part in the synthesis, but an account of Gavin de Beer's work by Stephen Jay Gould suggests he may be an exception. Almost all aspects of the synthesis have been challenged at times, with varying degrees of success. There is no doubt, however, that the synthesis was a great landmark in evolutionary biology. It cleared up many confusions, and was directly responsible for stimulating a great deal of research in the post-World War II era. Trofim Lysenko however caused a backlash of what is now called Lysenkoism in the Soviet Union when he emphasised Lamarckian ideas on the inheritance of acquired traits. This movement affected agricultural research and led to food shortages in the 1960s and seriously affected the USSR. There is growing evidence that there is transgenerational inheritance of epigenetic changes in humans and other animals. Common genetic disorders Fragile X syndrome Sickle cell disease Phenylketonuria (PKU) Haemophilia Types The description of a mode of biological inheritance consists of three main categories: 1. Number of involved loci Monogenetic (also called "simple") – one locus Oligogenic – few loci Polygenetic – many loci 2. Involved chromosomes Autosomal – loci are not situated on a sex chromosome Gonosomal – loci are situated on a sex chromosome X-chromosomal – loci are situated on the X-chromosome (the more common case) Y-chromosomal – loci are situated on the Y-chromosome Mitochondrial – loci are situated on the mitochondrial DNA 3. Correlation genotype–phenotype Dominant Intermediate (also called "codominant") Recessive Overdominant Underdominant These three categories are part of every exact description of a mode of inheritance in the above order. In addition, more specifications may be added as follows: 4. Coincidental and environmental interactions Penetrance Complete Incomplete (percentual number) Expressivity Invariable Variable Heritability (in polygenetic and sometimes also in oligogenetic modes of inheritance) Maternal or paternal imprinting phenomena (also see epigenetics) 5. Sex-linked interactions Sex-linked inheritance (gonosomal loci) Sex-limited phenotype expression (e.g., cryptorchism) Inheritance through the maternal line (in case of mitochondrial DNA loci) Inheritance through the paternal line (in case of Y-chromosomal loci) 6. 
Locus–locus interactions Epistasis with other loci (e.g., overdominance) Gene coupling with other loci (also see crossing over) Homozygotous lethal factors Semi-lethal factors Determination and description of a mode of inheritance is also achieved primarily through statistical analysis of pedigree data. In case the involved loci are known, methods of molecular genetics can also be employed. Dominant and recessive alleles An allele is said to be dominant if it is always expressed in the appearance of an organism (phenotype) provided that at least one copy of it is present. For example, in peas the allele for green pods, G, is dominant to that for yellow pods, g. Thus pea plants with the pair of alleles either GG (homozygote) or Gg (heterozygote) will have green pods. The allele for yellow pods is recessive. The effects of this allele are only seen when it is present in both chromosomes, gg (homozygote). This derives from Zygosity, the degree to which both copies of a chromosome or gene have the same genetic sequence, in other words, the degree of similarity of the alleles in an organism.
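The dominance rule in the pea-pod example can be expressed as a tiny illustrative sketch (not part of the article's source material): with G dominant over g, any genotype containing at least one G copy yields the green-pod phenotype:

def pod_colour(genotype: str) -> str:
    # G (green pods) is dominant; g (yellow pods) is recessive,
    # so a single G is enough for the green phenotype.
    return "green" if "G" in genotype else "yellow"

for genotype in ("GG", "Gg", "gg"):
    print(genotype, "->", pod_colour(genotype))
# GG -> green, Gg -> green, gg -> yellow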
Biology and health sciences
Biology
null
13465
https://en.wikipedia.org/wiki/Holmium
Holmium
Holmium is a chemical element; it has symbol Ho and atomic number 67. It is a rare-earth element and the eleventh member of the lanthanide series. It is a relatively soft, silvery, fairly corrosion-resistant and malleable metal. Like many other lanthanides, holmium is too reactive to be found in native form, as pure holmium slowly forms a yellowish oxide coating when exposed to air. When isolated, holmium is relatively stable in dry air at room temperature. However, it reacts with water and corrodes readily, and also burns in air when heated. In nature, holmium occurs together with the other rare-earth metals (like thulium). It is a relatively rare lanthanide, making up 1.4 parts per million of the Earth's crust, an abundance similar to tungsten. Holmium was discovered through isolation by Swedish chemist Per Theodor Cleve. It was also independently discovered by Jacques-Louis Soret and Marc Delafontaine, who together observed it spectroscopically in 1878. Its oxide was first isolated from rare-earth ores by Cleve in 1878. The element's name comes from Holmia, the Latin name for the city of Stockholm. Like many other lanthanides, holmium is found in the minerals monazite and gadolinite and is usually commercially extracted from monazite using ion-exchange techniques. Its compounds in nature and in nearly all of its laboratory chemistry are trivalently oxidized, containing Ho(III) ions. Trivalent holmium ions have fluorescent properties similar to many other rare-earth ions (while yielding their own set of unique emission light lines), and thus are used in the same way as some other rare earths in certain laser and glass-colorant applications. Holmium has the highest magnetic permeability and magnetic saturation of any element and is thus used for the pole pieces of the strongest static magnets. Because holmium strongly absorbs neutrons, it is also used as a burnable poison in nuclear reactors. Properties Holmium is the eleventh member of the lanthanide series. In the periodic table, it appears in period 6, between the lanthanides dysprosium to its left and erbium to its right, and above the actinide einsteinium. Physical properties With a boiling point of , holmium is the sixth most volatile lanthanide after ytterbium, europium, samarium, thulium and dysprosium. At standard temperature and pressure, holmium, like many of the second half of the lanthanides, normally assumes a hexagonally close-packed (hcp) structure. Its 67 electrons are arranged in the configuration [Xe] 4f11 6s2, so that it has thirteen valence electrons filling the 4f and 6s subshells. Holmium, like all of the lanthanides, is paramagnetic at standard temperature and pressure. However, holmium is ferromagnetic at temperatures below . It has the highest magnetic moment () of any naturally occurring element and possesses other unusual magnetic properties. When combined with yttrium, it forms highly magnetic compounds. Chemical properties Holmium metal tarnishes slowly in air, forming a yellowish oxide layer that has an appearance similar to that of iron rust. It burns readily to form holmium(III) oxide: 4 Ho + 3 O2 → 2 Ho2O3 It is a relatively soft and malleable element that is fairly corrosion-resistant and chemically stable in dry air at standard temperature and pressure. In moist air and at higher temperatures, however, it quickly oxidizes, forming a yellowish oxide. In pure form, holmium possesses a metallic, bright silvery luster. 
Holmium is quite electropositive: on the Pauling electronegativity scale, it has an electronegativity of 1.23. It is generally trivalent. It reacts slowly with cold water and quickly with hot water to form holmium(III) hydroxide: 2 Ho (s) + 6 H2O (l) → 2 Ho(OH)3 (aq) + 3 H2 (g) Holmium metal reacts with all the stable halogens: 2 Ho (s) + 3 F2 (g) → 2 HoF3 (s) [pink] 2 Ho (s) + 3 Cl2 (g) → 2 HoCl3 (s) [yellow] 2 Ho (s) + 3 Br2 (g) → 2 HoBr3 (s) [yellow] 2 Ho (s) + 3 I2 (g) → 2 HoI3 (s) [yellow] Holmium dissolves readily in dilute sulfuric acid to form solutions containing the yellow Ho(III) ions, which exist as [Ho(OH2)9]3+ complexes: 2 Ho (s) + 3 H2SO4 (aq) → 2 Ho3+ (aq) + 3 SO42− (aq) + 3 H2 (g) Oxidation states As with many lanthanides, holmium is usually found in the +3 oxidation state, forming compounds such as holmium(III) fluoride (HoF3) and holmium(III) chloride (HoCl3). Holmium in solution is in the form of Ho3+ surrounded by nine molecules of water. Holmium dissolves in acids. However, holmium is also found to exist in the +2, +1 and 0 oxidation states. Isotopes The isotopes of holmium range from 140Ho to 175Ho. The primary decay mode before the most abundant stable isotope, 165Ho, is positron emission, and the primary mode after is beta minus decay. The primary decay products before 165Ho are terbium and dysprosium isotopes, and the primary products after are erbium isotopes. Natural holmium consists of one primordial isotope, holmium-165; it is the only isotope of holmium that is thought to be stable, although it is predicted to undergo alpha decay to terbium-161 with a very long half-life. Of the 35 synthetic radioactive isotopes that are known, the most stable one is holmium-163 (163Ho), with a half-life of 4570 years. All other radioisotopes have ground-state half-lives not greater than 1.117 days, with the longest, holmium-166 (166Ho), having a half-life of 26.83 hours, and most have half-lives under 3 hours. 166m1Ho has a half-life of around 1200 years. The high excitation energy, resulting in a particularly rich spectrum of decay gamma rays produced when the metastable state de-excites, makes this isotope useful as a means for calibrating gamma ray spectrometers. Compounds Oxides and chalcogenides Holmium(III) oxide is the only oxide of holmium. It changes its color depending on the lighting conditions. In daylight, it has a yellowish color. Under trichromatic light, it appears orange-red, almost indistinguishable from the appearance of erbium oxide under the same lighting conditions. The color change is related to the sharp emission lines of trivalent holmium ions acting as red phosphors. Holmium(III) oxide appears pink under a cold-cathode fluorescent lamp. Other chalcogenides are known for holmium. Holmium(III) sulfide has orange-yellow crystals in the monoclinic crystal system, with the space group P21/m (No. 11). Under high pressure, holmium(III) sulfide can form in the cubic and orthorhombic crystal systems. It can be obtained by the reaction of holmium(III) oxide and hydrogen sulfide at . Holmium(III) selenide is also known. It is antiferromagnetic below 6 K. Halides All four trihalides of holmium are known. Holmium(III) fluoride is a yellowish powder that can be produced by reacting holmium(III) oxide and ammonium fluoride, then crystallising it from the ammonium salt formed in solution. Holmium(III) chloride can be prepared in a similar way, with ammonium chloride instead of ammonium fluoride. It has the YCl3 layer structure in the solid state.
These compounds, as well as holmium(III) bromide and holmium(III) iodide, can be obtained by the direct reaction of the elements: 2 Ho + 3 X2 → 2 HoX3 In addition, holmium(III) iodide can be obtained by the direct reaction of holmium and mercury(II) iodide, then removing the mercury by distillation. Organoholmium compounds Organoholmium compounds are very similar to those of the other lanthanides, as they all share an inability to undergo π backbonding. They are thus mostly restricted to the mostly ionic cyclopentadienides (isostructural with those of lanthanum) and the σ-bonded simple alkyls and aryls, some of which may be polymeric. History Holmium (, Latin name for Stockholm) was discovered by the Swiss chemists Jacques-Louis Soret and Marc Delafontaine in 1878 who noticed the aberrant spectrographic emission spectrum of the then-unknown element (they called it "Element X"). The Swedish chemist Per Teodor Cleve also independently discovered the element while he was working on erbia earth (erbium oxide). He was the first to isolate the new element. Using the method developed by the Swedish chemist Carl Gustaf Mosander, Cleve first removed all of the known contaminants from erbia. The result of that effort was two new materials, one brown and one green. He named the brown substance holmia (after the Latin name for Cleve's home town, Stockholm) and the green one thulia. Holmia was later found to be the holmium oxide, and thulia was thulium oxide. In the English physicist Henry Moseley's classic paper on atomic numbers, holmium was assigned the value 66. The holmium preparation he had been given to investigate had been impure, dominated by neighboring (at the time undiscovered) dysprosium. He would have seen x-ray emission lines for both elements, but assumed that the dominant ones belonged to holmium, instead of the dysprosium impurity. Occurrence and production Like all the other rare-earth elements, holmium is not naturally found as a free element. It occurs combined with other elements in gadolinite, monazite and other rare-earth minerals. No holmium-dominant mineral has yet been found. The main mining areas are China, United States, Brazil, India, Sri Lanka, and Australia with reserves of holmium estimated as 400,000 tonnes. The annual production of holmium metal is of about 10 tonnes per year. Holmium makes up 1.3 parts per million of the Earth's crust by mass. Holmium makes up 1 part per million of the soils, 400 parts per quadrillion of seawater, and almost none of Earth's atmosphere, which is very rare for a lanthanide. It makes up 500 parts per trillion of the universe by mass. Holmium is commercially extracted by ion exchange from monazite sand (0.05% holmium), but is still difficult to separate from other rare earths. The element has been isolated through the reduction of its anhydrous chloride or fluoride with metallic calcium. Its estimated abundance in the Earth's crust is 1.3 mg/kg. Holmium obeys the Oddo–Harkins rule: as an odd-numbered element, it is less abundant than both dysprosium and erbium. However, it is the most abundant of the odd-numbered heavy lanthanides. Of the lanthanides, only promethium, thulium, lutetium and terbium are less abundant on Earth. The principal current source are some of the ion-adsorption clays of southern China. Some of these have a rare-earth composition similar to that found in xenotime or gadolinite. Yttrium makes up about two-thirds of the total by mass; holmium is around 1.5%. 
Holmium is relatively inexpensive for a rare-earth metal with the price about 1000 USD/kg. Applications Glass containing holmium oxide and holmium oxide solutions (usually in perchloric acid) have sharp optical absorption peaks in the spectral range 200 to 900 nm. They are therefore used as a calibration standard for optical spectrophotometers. The radioactive but long-lived 166m1Ho is used in calibration of gamma-ray spectrometers. Holmium is used to create the strongest artificially generated magnetic fields, when placed within high-strength magnets as a magnetic pole piece (also called a magnetic flux concentrator). Holmium is also used in the manufacture of some permanent magnets. Holmium-doped yttrium iron garnet (YIG) and yttrium lithium fluoride have applications in solid-state lasers, and Ho-YIG has applications in optical isolators and in microwave equipment (e.g., YIG spheres). Holmium lasers emit at 2.1 micrometres. They are used in medical, dental, and fiber-optical applications. It is also being considered for usage in the enucleation of the prostate. Since holmium can absorb nuclear fission-bred neutrons, it is used as a burnable poison to regulate nuclear reactors. It is used as a colorant for cubic zirconia, providing pink coloring, and for glass, providing yellow-orange coloring. In March 2017, IBM announced that they had developed a technique to store one bit of data on a single holmium atom set on a bed of magnesium oxide. With sufficient quantum and classical control techniques, holmium may be a good candidate to make quantum computers. Holmium is used in the medical field, particularly in laser surgery for procedures such as kidney stone removal and prostate treatment, due to its precision and minimal tissue damage. Its isotope, holmium-166, is applied in targeted cancer therapies, especially for liver cancer, and it also enhances MRI imaging as a contrast agent. Biological role and precautions Holmium plays no biological role in humans, but its salts are able to stimulate metabolism. Humans typically consume about a milligram of holmium a year. Plants do not readily take up holmium from the soil. Some vegetables have had their holmium content measured, and it amounted to 100 parts per trillion. Holmium and its soluble salts are slightly toxic if ingested, but insoluble holmium salts are nontoxic. Metallic holmium in dust form presents a fire and explosion hazard. Large amounts of holmium salts can cause severe damage if inhaled, consumed orally, or injected. The biological effects of holmium over a long period of time are not known. Holmium has a low level of acute toxicity.
Physical sciences
Chemical elements_2
null
13466
https://en.wikipedia.org/wiki/Hafnium
Hafnium
Hafnium is a chemical element; it has symbol Hf and atomic number 72. A lustrous, silvery gray, tetravalent transition metal, hafnium chemically resembles zirconium and is found in many zirconium minerals. Its existence was predicted by Dmitri Mendeleev in 1869, though it was not identified until 1922, by Dirk Coster and George de Hevesy. Hafnium is named after , the Latin name for Copenhagen, where it was discovered. Hafnium is used in filaments and electrodes. Some semiconductor fabrication processes use its oxide for integrated circuits at 45 nanometers and smaller feature lengths. Some superalloys used for special applications contain hafnium in combination with niobium, titanium, or tungsten. Hafnium's large neutron capture cross section makes it a good material for neutron absorption in control rods in nuclear power plants, but at the same time requires that it be removed from the neutron-transparent corrosion-resistant zirconium alloys used in nuclear reactors. Characteristics Physical characteristics Hafnium is a shiny, silvery, ductile metal that is corrosion-resistant and chemically similar to zirconium in that they have the same number of valence electrons and are in the same group. Also, their relativistic effects are similar: The expected expansion of atomic radii from period 5 to 6 is almost exactly canceled out by the lanthanide contraction. Hafnium changes from its alpha form, a hexagonal close-packed lattice, to its beta form, a body-centered cubic lattice, at . The physical properties of hafnium metal samples are markedly affected by zirconium impurities, especially the nuclear properties, as these two elements are among the most difficult to separate because of their chemical similarity. A notable physical difference between these metals is their density, with zirconium having about one-half the density of hafnium. The most notable nuclear properties of hafnium are its high thermal neutron capture cross section and that the nuclei of several different hafnium isotopes readily absorb two or more neutrons apiece. In contrast with this, zirconium is practically transparent to thermal neutrons, and it is commonly used for the metal components of nuclear reactors—especially the cladding of their nuclear fuel rods. Chemical characteristics Hafnium reacts in air to form a protective film that inhibits further corrosion. Despite this, the metal is attacked by hydrofluoric acid and concentrated sulfuric acid, and can be oxidized with halogens or burnt in air. Like its sister metal zirconium, finely divided hafnium can ignite spontaneously in air. The metal is resistant to concentrated alkalis. As a consequence of lanthanide contraction, the chemistry of hafnium and zirconium is so similar that the two cannot be separated based on differing chemical reactions. The melting and boiling points of the compounds and the solubility in solvents are the major differences in the chemistry of these twin elements. Isotopes At least 40 isotopes of hafnium have been observed, ranging in mass number from 153 to 192. The five stable isotopes have mass numbers ranging from 176 to 180 inclusive. The radioactive isotopes' half-lives range from 400 ms for 153Hf to years for the most stable one, the primordial 174Hf. The extinct radionuclide 182Hf has a half-life of , and is an important tracker isotope for the formation of planetary cores. The nuclear isomer 178m2Hf was at the center of a controversy for several years regarding its potential use as a weapon. 
Occurrence Hafnium is estimated to make up between 3.0 and 4.8 ppm of the Earth's upper crust by mass. It does not exist as a free element on Earth, but is found combined in solid solution with zirconium in natural zirconium compounds such as zircon, ZrSiO4, which usually has about 1–4% of the Zr replaced by Hf. Rarely, the Hf/Zr ratio increases during crystallization to give the isostructural mineral hafnon, with atomic Hf > Zr. An obsolete name for a variety of zircon containing unusually high Hf content is alvite. Major sources of zircon (and hence hafnium) ores are heavy mineral sands ore deposits, pegmatites (particularly in Brazil and Malawi), and carbonatite intrusions (particularly the Crown Polymetallic Deposit at Mount Weld, Western Australia). A potential source of hafnium is trachyte tuffs containing the rare zircon-hafnium silicates eudialyte or armstrongite, at Dubbo in New South Wales, Australia. Production The heavy mineral sands ore deposits of the titanium ores ilmenite and rutile yield most of the mined zirconium, and therefore also most of the hafnium. Zirconium is a good nuclear fuel-rod cladding metal, with the desirable properties of a very low neutron capture cross section and good chemical stability at high temperatures. However, because of hafnium's neutron-absorbing properties, hafnium impurities in zirconium would make it far less useful for nuclear reactor applications. Thus, a nearly complete separation of zirconium and hafnium is necessary for their use in nuclear power. The production of hafnium-free zirconium is the main source of hafnium. The chemical properties of hafnium and zirconium are nearly identical, which makes the two difficult to separate. The methods first used—fractional crystallization of ammonium fluoride salts or the fractional distillation of the chloride—have not proven suitable for industrial-scale production. After zirconium was chosen as a material for nuclear reactor programs in the 1940s, a separation method had to be developed. Liquid–liquid extraction processes with a wide variety of solvents were developed and are still used for producing hafnium. About half of all hafnium metal manufactured is produced as a by-product of zirconium refinement. The end product of the separation is hafnium(IV) chloride. The purified hafnium(IV) chloride is converted to the metal by reduction with magnesium or sodium, as in the Kroll process: HfCl4 + 2 Mg → Hf + 2 MgCl2 (at about 1100 °C) Further purification is effected by a chemical transport reaction developed by van Arkel and de Boer: in a closed vessel, hafnium reacts with iodine at temperatures of about 500 °C, forming hafnium(IV) iodide; at a tungsten filament of about 1700 °C the reverse reaction happens preferentially, and the chemically bound iodine and hafnium dissociate into the native elements. The hafnium forms a solid coating on the tungsten filament, and the iodine can react with additional hafnium, resulting in a steady iodine turnover and ensuring the chemical equilibrium remains in favor of hafnium production: Hf + 2 I2 → HfI4 (at about 500 °C) HfI4 → Hf + 2 I2 (at about 1700 °C) Chemical compounds Due to the lanthanide contraction, the ionic radius of hafnium(IV) (0.78 ångströms) is almost the same as that of zirconium(IV) (0.79 ångströms). Consequently, compounds of hafnium(IV) and zirconium(IV) have very similar chemical and physical properties.
Hafnium and zirconium tend to occur together in nature and the similarity of their ionic radii makes their chemical separation rather difficult. Hafnium tends to form inorganic compounds in the oxidation state of +4. Halogens react with it to form hafnium tetrahalides. At higher temperatures, hafnium reacts with oxygen, nitrogen, carbon, boron, sulfur, and silicon. Some hafnium compounds in lower oxidation states are known. Hafnium(IV) chloride and hafnium(IV) iodide have some applications in the production and purification of hafnium metal. They are volatile solids with polymeric structures. These tetrachlorides are precursors to various organohafnium compounds such as hafnocene dichloride and tetrabenzylhafnium. The white hafnium oxide (HfO2), with a melting point of and a boiling point of roughly , is very similar to zirconia, but slightly more basic. Hafnium carbide is the most refractory binary compound known, with a melting point over , and hafnium nitride is the most refractory of all known metal nitrides, with a melting point of . This has led to proposals that hafnium or its carbides might be useful as construction materials that are subjected to very high temperatures. The mixed carbide tantalum hafnium carbide () possesses the highest melting point of any currently known compound, . Recent supercomputer simulations suggest a hafnium alloy with a melting point of . History Hafnium's existence was predicted by Dmitri Mendeleev in 1869. In his report on The Periodic Law of the Chemical Elements, in 1869, Dmitri Mendeleev had implicitly predicted the existence of a heavier analog of titanium and zirconium. At the time of his formulation in 1871, Mendeleev believed that the elements were ordered by their atomic masses and placed lanthanum (element 57) in the spot below zirconium. The exact placement of the elements and the location of missing elements was done by determining the specific weight of the elements and comparing the chemical and physical properties. The X-ray spectroscopy done by Henry Moseley in 1914 showed a direct dependency between spectral line and effective nuclear charge. This led to the nuclear charge, or atomic number of an element, being used to ascertain its place within the periodic table. With this method, Moseley determined the number of lanthanides and showed the gaps in the atomic number sequence at numbers 43, 61, 72, and 75. The discovery of the gaps led to an extensive search for the missing elements. In 1914, several people claimed the discovery after Henry Moseley predicted the gap in the periodic table for the then-undiscovered element 72. Georges Urbain asserted that he found element 72 in the rare earth elements in 1907 and published his results on celtium in 1911. Neither the spectra nor the chemical behavior he claimed matched with the element found later, and therefore his claim was turned down after a long-standing controversy. The controversy was partly because the chemists favored the chemical techniques which led to the discovery of celtium, while the physicists relied on the use of the new X-ray spectroscopy method that proved that the substances discovered by Urbain did not contain element 72. In 1921, Charles R. Bury suggested that element 72 should resemble zirconium and therefore was not part of the rare earth elements group. By early 1923, Niels Bohr and others agreed with Bury. 
These suggestions were based on Bohr's theories of the atom which were identical to chemist Charles Bury, the X-ray spectroscopy of Moseley, and the chemical arguments of Friedrich Paneth. Encouraged by these suggestions and by the reappearance in 1922 of Urbain's claims that element 72 was a rare earth element discovered in 1911, Dirk Coster and Georg von Hevesy were motivated to search for the new element in zirconium ores. Hafnium was discovered by the two in 1923 in Copenhagen, Denmark, validating the original 1869 prediction of Mendeleev. It was ultimately found in zircon in Norway through X-ray spectroscopy analysis. The place where the discovery took place led to the element being named for the Latin name for "Copenhagen", Hafnia, the home town of Niels Bohr. Today, the Faculty of Science of the University of Copenhagen uses in its seal a stylized image of the hafnium atom. Hafnium was separated from zirconium through repeated recrystallization of the double ammonium or potassium fluorides by Valdemar Thal Jantzen and von Hevesey. Anton Eduard van Arkel and Jan Hendrik de Boer were the first to prepare metallic hafnium by passing hafnium tetraiodide vapor over a heated tungsten filament in 1924. This process for differential purification of zirconium and hafnium is still in use today. Hafnium was one of the last two stable elements to be discovered. The element rhenium was found in 1908 by Masataka Ogawa, though its atomic number was misidentified at the time, and it was not generally recognised by the scientific community until its rediscovery by Walter Noddack, Ida Noddack, and Otto Berg in 1925. This makes it somewhat difficult to say if hafnium or rhenium was discovered last. In 1923, six predicted elements were still missing from the periodic table: 43 (technetium), 61 (promethium), 85 (astatine), and 87 (francium) are radioactive elements and are only present in trace amounts in the environment, thus making elements 75 (rhenium) and 72 (hafnium) the last two unknown non-radioactive elements. Applications Most of the hafnium produced is used in the manufacture of control rods for nuclear reactors. Hafnium has limited technical applications due to a few factors. First, it's very similar to zirconium, a more abundant element that can be used in most cases. Second, pure hafnium wasn't widely available until the late 1950s, when it became a byproduct of the nuclear industry's need for hafnium-free zirconium. Additionally, hafnium is rare and difficult to separate from other elements, making it expensive. After the Fukushima disaster reduced the demand for hafnium-free zirconium, the price of hafnium increased significantly from around $500–$600/kg ($227-$272/lb) in 2014 to around $1000/kg ($454/lb) in 2015. Nuclear reactors The nuclei of several hafnium isotopes can each absorb multiple neutrons. This makes hafnium a good material for nuclear reactors' control rods. Its neutron capture cross section (Capture Resonance Integral Io ≈ 2000 barns) is about 600 times that of zirconium (other elements that are good neutron-absorbers for control rods are cadmium and boron). Excellent mechanical properties and exceptional corrosion-resistance properties allow its use in the harsh environment of pressurized water reactors. The German research reactor FRM II uses hafnium as a neutron absorber. It is also common in military reactors, particularly in US naval submarine reactors, to slow reactor rates that are too high. 
It is seldom found in civilian reactors, the first core of the Shippingport Atomic Power Station (a conversion of a naval reactor) being a notable exception. Alloys Hafnium is used in alloys with iron, titanium, niobium, tantalum, and other metals. An alloy used for liquid-rocket thruster nozzles, for example the main engine of the Apollo Lunar Modules, is C103 which consists of 89% niobium, 10% hafnium and 1% titanium. Small additions of hafnium increase the adherence of protective oxide scales on nickel-based alloys. It thereby improves the corrosion resistance, especially under cyclic temperature conditions that tend to break oxide scales, by inducing thermal stresses between the bulk material and the oxide layer. Microprocessors Hafnium-based compounds are employed in gates of transistors as insulators in the 45 nm (and below) generation of integrated circuits from Intel, IBM and others. Hafnium oxide-based compounds are practical high-k dielectrics, allowing reduction of the gate leakage current which improves performance at such scales. Isotope geochemistry Isotopes of hafnium and lutetium (along with ytterbium) are also used in isotope geochemistry and geochronological applications, in lutetium-hafnium dating. It is often used as a tracer of isotopic evolution of Earth's mantle through time. This is because 176Lu decays to 176Hf with a half-life of approximately 37 billion years. In most geologic materials, zircon is the dominant host of hafnium (>10,000 ppm) and is often the focus of hafnium studies in geology. Hafnium is readily substituted into the zircon crystal lattice, and is therefore very resistant to hafnium mobility and contamination. Zircon also has an extremely low Lu/Hf ratio, making any correction for initial lutetium minimal. Although the Lu/Hf system can be used to calculate a "model age", i.e. the time at which it was derived from a given isotopic reservoir such as the depleted mantle, these "ages" do not carry the same geologic significance as do other geochronological techniques as the results often yield isotopic mixtures and thus provide an average age of the material from which it was derived. Garnet is another mineral that contains appreciable amounts of hafnium to act as a geochronometer. The high and variable Lu/Hf ratios found in garnet make it useful for dating metamorphic events. Other uses Due to its heat resistance and its affinity to oxygen and nitrogen, hafnium is a good scavenger for oxygen and nitrogen in gas-filled and incandescent lamps. Hafnium is also used as the electrode in plasma cutting because of its ability to shed electrons into the air. The high energy content of 178m2Hf was the concern of a DARPA-funded program in the US. This program eventually concluded that using the above-mentioned 178m2Hf nuclear isomer of hafnium to construct high-yield weapons with X-ray triggering mechanisms—an application of induced gamma emission—was infeasible because of its expense. See hafnium controversy. Hafnium metallocene compounds can be prepared from hafnium tetrachloride and various cyclopentadiene-type ligand species. Perhaps the simplest hafnium metallocene is hafnocene dichloride. Hafnium metallocenes are part of a large collection of Group 4 transition metal metallocene catalysts that are used worldwide in the production of polyolefin resins like polyethylene and polypropylene. 
A pyridyl-amidohafnium catalyst can be used for the controlled iso-selective polymerization of propylene, which can then be combined with polyethylene to make a much tougher recycled plastic. Hafnium diselenide is studied in spintronics because of its charge density wave and superconductivity. Precautions Care needs to be taken when machining hafnium because it is pyrophoric—fine particles can spontaneously combust when exposed to air. Compounds that contain this metal are rarely encountered by most people. The pure metal is not considered toxic, but hafnium compounds should be handled as if they were toxic, because the ionic forms of metals normally pose the greatest risk of toxicity, and only limited animal testing has been done on hafnium compounds. People can be exposed to hafnium in the workplace by breathing it in, swallowing it, or through skin and eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for exposure to hafnium and hafnium compounds in the workplace as a time-weighted average (TWA) of 0.5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set the same recommended exposure limit (REL). At levels of 50 mg/m3, hafnium is immediately dangerous to life and health.
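The exposure limits above are expressed as an 8-hour time-weighted average (TWA). As a rough illustration of how such an average is computed and compared with the 0.5 mg/m3 permissible exposure limit, the following Python sketch works through the standard calculation; the sampling durations and concentrations in it are hypothetical values chosen purely for illustration, not measurements from any source.

# Minimal sketch: an 8-hour time-weighted average (TWA) exposure compared
# with the OSHA permissible exposure limit for hafnium (0.5 mg/m3).
# The sample durations and concentrations below are hypothetical.

HAFNIUM_PEL_MG_M3 = 0.5   # OSHA PEL, expressed as an 8-hour TWA
WORKDAY_HOURS = 8.0

# (duration in hours, measured concentration in mg/m3) for each sampling period
samples = [
    (2.0, 0.30),
    (3.0, 0.55),
    (3.0, 0.20),
]

def eight_hour_twa(samples, workday_hours=WORKDAY_HOURS):
    """Return the exposure averaged over the full workday."""
    exposure = sum(hours * concentration for hours, concentration in samples)
    return exposure / workday_hours

twa = eight_hour_twa(samples)
print(f"8-hour TWA: {twa:.3f} mg/m3")
print("within PEL" if twa <= HAFNIUM_PEL_MG_M3 else "exceeds PEL")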
Physical sciences
Chemical elements_2
13471
https://en.wikipedia.org/wiki/Holocene
Holocene
The Holocene is the current geological epoch, beginning approximately 11,700 years ago. It follows the Last Glacial Period, which concluded with the Holocene glacial retreat. The Holocene and the preceding Pleistocene together form the Quaternary period. The Holocene is an interglacial period within the ongoing glacial cycles of the Quaternary, and is equivalent to Marine Isotope Stage 1. The Holocene correlates with the last maximum axial tilt of the Earth towards the Sun, and corresponds with the rapid proliferation, growth, and impacts of the human species worldwide, including all of its written history, technological revolutions, development of major civilizations, and overall significant transition towards urban living in the present. The human impact on modern-era Earth and its ecosystems may be considered of global significance for the future evolution of living species, including approximately synchronous lithospheric evidence, or more recently hydrospheric and atmospheric evidence of the human impact. In July 2018, the International Union of Geological Sciences split the Holocene Epoch into three distinct ages based on the climate: Greenlandian (11,700 years ago to 8,200 years ago), Northgrippian (8,200 years ago to 4,200 years ago) and Meghalayan (4,200 years ago to the present), as proposed by the International Commission on Stratigraphy. The oldest age, the Greenlandian, was characterized by a warming following the preceding ice age. The Northgrippian Age is known for vast cooling due to a disruption in ocean circulations that was caused by the melting of glaciers. The most recent age of the Holocene is the present Meghalayan, which began with extreme drought that lasted around 200 years. Etymology The word Holocene was formed from two Ancient Greek words. Hólos is the Greek word for "whole". "Cene" comes from the Greek word kainós, meaning "new". The concept is that this epoch is "entirely new". The suffix '-cene' is used for all seven epochs of the Cenozoic Era. Overview The International Commission on Stratigraphy has defined the Holocene as starting approximately 11,700 years before 2000 CE (11,650 cal years BP, or 9,700 BCE). The Subcommission on Quaternary Stratigraphy (SQS) regards the term 'recent' as an incorrect way of referring to the Holocene, preferring the term 'modern' instead to describe current processes. It also observes that the term 'Flandrian' may be used as a synonym for Holocene, although it is becoming outdated. The International Commission on Stratigraphy, however, considers the Holocene to be an epoch following the Pleistocene and specifically following the last glacial period. Local names for the last glacial period include the Wisconsinan in North America, the Weichselian in Europe, the Devensian in Britain, the Llanquihue in Chile and the Otiran in New Zealand. The Holocene can be subdivided into five time intervals, or chronozones, based on climatic fluctuations: Preboreal (10 ka–9 ka BP), Boreal (9 ka–8 ka BP), Atlantic (8 ka–5 ka BP), Subboreal (5 ka–2.5 ka BP) and Subatlantic (2.5 ka BP–present). Note: "ka BP" means "kilo-annum Before Present", i.e. 1,000 years before 1950 (non-calibrated C14 dates). Geologists working in different regions are studying sea levels, peat bogs, and ice-core samples, using a variety of methods, with a view toward further verifying and refining the Blytt–Sernander sequence. This is a classification of climatic periods initially defined by plant remains in peat mosses. 
Though the method was once thought to be of little interest, based on 14C dating of peats that was inconsistent with the claimed chronozones, investigators have found a general correspondence across Eurasia and North America. The scheme was defined for Northern Europe, but the climate changes were claimed to occur more widely. The periods of the scheme include a few of the final pre-Holocene oscillations of the last glacial period and then classify climates of more recent prehistory. Paleontologists have not defined any faunal stages for the Holocene. If subdivision is necessary, periods of human technological development, such as the Mesolithic, Neolithic, and Bronze Age, are usually used. However, the time periods referenced by these terms vary with the emergence of those technologies in different parts of the world. Some scholars have argued that a third epoch of the Quaternary, the Anthropocene, has now begun. This term has been used to denote the present time-interval in which many geologically significant conditions and processes have been profoundly altered by human activities. The 'Anthropocene' (a term coined by Paul J. Crutzen and Eugene Stoermer in 2000) was never a formally defined geological unit. The Subcommission on Quaternary Stratigraphy (SQS) of the International Commission on Stratigraphy (ICS) had a working group to determine whether it should be. In May 2019, members of the working group voted in favour of recognizing the Anthropocene as a formal chrono-stratigraphic unit, with stratigraphic signals around the mid-twentieth century CE as its base. The exact criteria were still to be determined, after which the recommendation also had to be approved by the working group's parent bodies (ultimately the International Union of Geological Sciences). In March 2024, after 15 years of deliberation, the Anthropocene Epoch proposal of the working group was voted down by a wide margin by the SQS, owing largely to its shallow sedimentary record and extremely recent proposed start date. The ICS and the International Union of Geological Sciences later formally confirmed, by a near unanimous vote, the rejection of the working group's Anthropocene Epoch proposal for inclusion in the Geologic Time Scale. Geology The Holocene is a geologic epoch that follows directly after the Pleistocene. Continental motions due to plate tectonics are less than a kilometre over a span of only 10,000 years. However, ice melt caused world sea levels to rise considerably in the early part of the Holocene and by another 30 m in the later part of the Holocene. In addition, many areas above about 40 degrees north latitude had been depressed by the weight of the Pleistocene glaciers and rose markedly due to post-glacial rebound over the late Pleistocene and Holocene, and are still rising today. The sea-level rise and temporary land depression allowed temporary marine incursions into areas that are now far from the sea. For example, marine fossils from the Holocene epoch have been found in locations such as Vermont and Michigan. Other than higher-latitude temporary marine incursions associated with glacial depression, Holocene fossils are found primarily in lakebed, floodplain, and cave deposits. Holocene marine deposits along low-latitude coastlines are rare because the rise in sea levels during the period exceeds any likely tectonic uplift of non-glacial origin. Post-glacial rebound in the Scandinavia region resulted in a shrinking Baltic Sea. 
The region continues to rise, still causing weak earthquakes across Northern Europe. An equivalent event in North America was the rebound of Hudson Bay, as it shrank from its larger, immediate post-glacial Tyrrell Sea phase, to its present boundaries. Climate The climate throughout the Holocene has shown significant variability despite ice core records from Greenland suggesting a more stable climate following the preceding ice age. Marine chemical fluxes during the Holocene were lower than during the Younger Dryas, but were still considerable enough to imply notable changes in the climate. The temporal and spatial extent of climate change during the Holocene is an area of considerable uncertainty, with radiative forcing recently proposed to be the origin of cycles identified in the North Atlantic region. Climate cyclicity through the Holocene (Bond events) has been observed in or near marine settings and is strongly controlled by glacial input to the North Atlantic. Periodicities of ≈2500, ≈1500, and ≈1000 years are generally observed in the North Atlantic. At the same time spectral analyses of the continental record, which is remote from oceanic influence, reveal persistent periodicities of 1,000 and 500 years that may correspond to solar activity variations during the Holocene Epoch. A 1,500-year cycle corresponding to the North Atlantic oceanic circulation may have had widespread global distribution in the Late Holocene. From 8,500 BP to 6,700 BP, North Atlantic climate oscillations were highly irregular and erratic because of perturbations from substantial ice discharge into the ocean from the collapsing Laurentide Ice Sheet. The Greenland ice core records indicate that climate changes became more regional and had a larger effect on the mid-to-low latitudes and mid-to-high latitudes after ~5600 B.P. Human activity through land use changes already by the Mesolithic had major ecological impacts; it was an important influence on Holocene climatic changes, and is believed to be why the Holocene is an atypical interglacial that has not experienced significant cooling over its course. From the start of the Industrial Revolution onwards, large-scale anthropogenic greenhouse gas emissions caused the Earth to warm. Likewise, climatic changes have induced substantial changes in human civilisation over the course of the Holocene. During the transition from the last glacial to the Holocene, the Huelmo–Mascardi Cold Reversal in the Southern Hemisphere began before the Younger Dryas, and the maximum warmth flowed south to north from 11,000 to 7,000 years ago. It appears that this was influenced by the residual glacial ice remaining in the Northern Hemisphere until the later date. The first major phase of Holocene climate was the Preboreal. At the start of the Preboreal occurred the Preboreal Oscillation (PBO). The Holocene Climatic Optimum (HCO) was a period of warming throughout the globe but was not globally synchronous and uniform. Following the HCO, the global climate entered a broad trend of very gradual cooling known as Neoglaciation, which lasted from the end of the HCO to before the Industrial Revolution. From the 10th-14th century, the climate was similar to that of modern times during a period known as the Mediaeval Warm Period (MWP), also known as the Mediaeval Climatic Optimum (MCO). It was found that the warming that is taking place in current years is both more frequent and more spatially homogeneous than what was experienced during the MWP. 
A warming of +1 degree Celsius occurs 5–40 times more frequently in modern years than during the MWP. The major forcing during the MWP was due to greater solar activity, which led to heterogeneity compared to the greenhouse gas forcing of modern years that leads to more homogeneous warming. This was followed by the Little Ice Age (LIA) from the 13th or 14th century to the mid-19th century. The LIA was the coldest interval of time of the past two millennia. Following the Industrial Revolution, warm decadal intervals became more common relative to before as a consequence of anthropogenic greenhouse gases, resulting in progressive global warming. In the late 20th century, anthropogenic forcing superseded variations in solar activity as the dominant driver of climate change, though solar activity has continued to play a role. Europe Drangajökull, Iceland's northernmost glacier, melted shortly after 9,200 BP. In Northern Germany, the Middle Holocene saw a drastic increase in the amount of raised bogs, most likely related to sea level rise. Although human activity affected geomorphology and landscape evolution in Northern Germany throughout the Holocene, it only became a dominant influence in the last four centuries. In the French Alps, geochemistry and lithium isotope signatures in lake sediments have suggested gradual soil formation from the Last Glacial Period to the Holocene climatic optimum, and this soil development was altered by the settlement of human societies. Early anthropogenic activities such as deforestation and agriculture reinforced soil erosion, which peaked in the Middle Ages at an unprecedented level, marking human forcing as the most powerful factor affecting surface processes. The sedimentary record from Aitoliko Lagoon indicates that wet winters locally predominated from 210 to 160 BP, followed by dry winter dominance from 160 to 20 BP. Africa North Africa, dominated by the Sahara Desert in the present, was instead a savanna dotted with large lakes during the Early and Middle Holocene, regionally known as the African Humid Period (AHP). The northward migration of the Intertropical Convergence Zone (ITCZ) produced increased monsoon rainfall over North Africa. The lush vegetation of the Sahara brought an increase in pastoralism. The AHP ended around 5,500 BP, after which the Sahara began to dry and become the desert it is today. A stronger East African Monsoon during the Middle Holocene increased precipitation in East Africa and raised lake levels. Around 800 AD, or 1,150 BP, a marine transgression occurred in southeastern Africa; in the Lake Lungué basin, this sea level highstand occurred from 740 to 910 AD, or from 1,210 to 1,040 BP, as evidenced by the lake's connection to the Indian Ocean at this time. This transgression was followed by a period of transition that lasted until 590 BP, when the region experienced significant aridification and began to be extensively used by humans for livestock herding. In the Kalahari Desert, Holocene climate was overall very stable and environmental change was of low amplitude. Relatively cool conditions have prevailed since 4,000 BP. Middle East In the Middle East, the Holocene brought a warmer and wetter climate, in contrast to the preceding cold, dry Younger Dryas. The Early Holocene saw the advent and spread of agriculture in the Fertile Crescent—sheep, goat, cattle, and later pig were domesticated, as well as cereals, like wheat and barley, and legumes—which would later disperse into much of the world. 
This 'Neolithic Revolution', likely influenced by Holocene climatic changes, included an increase in sedentism and population, eventually resulting in the world's first large-scale state societies in Mesopotamia and Egypt. During the Middle Holocene, the Intertropical Convergence Zone, which governs the incursion of monsoon precipitation through the Arabian Peninsula, shifted southwards, resulting in increased aridity. In the Middle to Late Holocene, the coastline of the Levant and Persian Gulf receded, prompting a shift in human settlement patterns following this marine regression. Central Asia Central Asia experienced glacial-like temperatures until about 8,000 BP, when the Laurentide Ice Sheet collapsed. In Xinjiang, long-term Holocene warming increased meltwater supply during summers, creating large lakes and oases at low altitudes and inducing enhanced moisture recycling. In the Tien Shan, sedimentological evidence from Swan Lake suggests the period between 8,500 and 6,900 BP was relatively warm, with steppe meadow vegetation being predominant. An increase in Cyperaceae from 6,900 to 2,600 BP indicates cooling and humidification of the Tian Shan climate that was interrupted by a warm period between 5,500 and 4,500 BP. After 2,600 BP, an alpine steppe climate prevailed across the region. Sand dune evolution in the Bayanbulak Basin shows that the region was very dry from the Holocene's beginning until around 6,500 BP, when a wet interval began. In the Tibetan Plateau, the moisture optimum spanned from around 7,500 to 5,500 BP. The Tarim Basin records the onset of significant aridification around 3,000-2,000 BP. South Asia After 11,800 BP, and especially between 10,800 and 9,200 BP, Ladakh experienced tremendous moisture increase most likely related to the strengthening of the Indian Summer Monsoon (ISM). From 9,200 to 6,900 BP, relative aridity persisted in Ladakh. A second major humid phase occurred in Ladakh from 6,900 to 4,800 BP, after which the region was again arid. From 900 to 1,200 AD, during the MWP, the ISM was again strong as evidenced by low δ18O values from the Ganga Plain. The sediments of Lonar Lake in Maharashtra record dry conditions around 11,400 BP that transitioned into a much wetter climate from 11,400 to 11,100 BP due to intensification of the ISM. Over the Early Holocene, the region was very wet, but during the Middle Holocene from 6,200 to 3,900 BP, aridification occurred, with the subsequent Late Holocene being relatively arid as a whole. Coastal southwestern India experienced a stronger ISM from 9,690 to 7,560 BP, during the HCO. From 3,510 to 2,550 BP, during the Late Holocene, the ISM became weaker, although this weakening was interrupted by an interval of unusually high ISM strength from 3,400 to 3,200 BP. East Asia Southwestern China experienced long-term warming during the Early Holocene up until ~7,000 BP. Northern China experienced an abrupt aridification event approximately 4,000 BP. From around 3,500 to 3,000 BP, northeastern China underwent a prolonged cooling, manifesting itself with the disruption of Bronze Age civilisations in the region. Eastern and southern China, the monsoonal regions of China, were wetter than present in the Early and Middle Holocene. Lake Huguangyan's TOC, δ13Cwax, δ13Corg, δ15N values suggest the period of peak moisture lasted from 9,200 to 1,800 BP and was attributable to a strong East Asian Summer Monsoon (EASM). 
Late Holocene cooling events in the region were dominantly influenced by solar forcing, with many individual cold snaps linked to solar minima such as the Oort, Wolf, Spörer, and Maunder Minima. A notable cooling event in southeastern China occurred 3,200 BP. Strengthening of the winter monsoon occurred around 5,500, 4,000, and 2,500 BP. Monsoonal regions of China became more arid in the Late Holocene. In the Sea of Japan, the Middle Holocene was notable for its warmth, with rhythmic temperature fluctuations every 400–500 and 1,000 years. Southeast Asia Before 7,500 BP, the Gulf of Thailand was exposed above sea level and was very arid. A marine transgression occurred from 7,500 to 6,200 BP amidst global warming. North America During the Middle Holocene, western North America was drier than present, with wetter winters and drier summers. After the end of the thermal maximum of the HCO around 4,500 BP, the East Greenland Current underwent strengthening. A massive megadrought occurred from 2,800 to 1,850 BP in the Great Basin. Eastern North America underwent abrupt warming and humidification around 10,500 BP and then declined from 9,300 to 9,100 BP. The region has undergone a long term wettening since 5,500 BP occasionally interrupted by intervals of high aridity. A major cool event lasting from 5,500 to 4,700 BP was coeval with a major humidification before being terminated by a major drought and warming at the end of that interval. South America During the Early Holocene, relative sea level rose in the Bahia region, causing a landward expansion of mangroves. During the Late Holocene, the mangroves declined as sea level dropped and freshwater supply increased. In the Santa Catarina region, the maximum sea level highstand was around 2.1 metres above present and occurred about 5,800 to 5,000 BP. Sea levels at Rocas Atoll were likewise higher than present for much of the Late Holocene. Australia The Northwest Australian Summer Monsoon was in a strong phase from 8,500 to 6,400 BP, from 5,000 to 4,000 BP (possibly until 3,000 BP), and from 1,300 to 900 BP, with weak phases in between and the current weak phase beginning around 900 BP after the end of the last strong phase. New Zealand Ice core measurements imply that the sea surface temperature (SST) gradient east of New Zealand, across the subtropical front (STF), was around 2 degrees Celsius during the HCO. This temperature gradient is significantly less than modern times, which is around 6 degrees Celsius. A study utilizing five SST proxies from 37°S to 60°S latitude confirmed that the strong temperature gradient was confined to the area immediately south of the STF, and is correlated with reduced westerly winds near New Zealand. Since 7,100 BP, New Zealand experienced 53 cyclones similar in magnitude to Cyclone Bola. Pacific Evidence from the Galápagos Islands shows that the El Niño–Southern Oscillation (ENSO) was significantly weaker during the Middle Holocene, but that the strength of ENSO became moderate to high over the Late Holocene. Ecological developments Animal and plant life have not evolved much during the relatively short Holocene, but there have been major shifts in the richness and abundance of plants and animals. A number of large animals including mammoths and mastodons, saber-toothed cats like Smilodon and Homotherium, and giant sloths went extinct in the late Pleistocene and early Holocene. These extinctions can be mostly attributed to people. 
In the Americas, these extinctions coincided with the arrival of the Clovis people; this culture was known for "Clovis points", which were fashioned on spears for hunting animals. Shrubs, herbs, and mosses also changed in relative abundance from the Pleistocene to the Holocene, as identified from permafrost core samples. Throughout the world, ecosystems in cooler climates that were previously regional have been isolated in higher-altitude ecological "islands". The 8.2-ka event, an abrupt cold spell recorded as a negative excursion in proxy records lasting 400 years, is the most prominent climatic event occurring in the Holocene Epoch, and may have marked a resurgence of ice cover. It has been suggested that this event was caused by the final drainage of Lake Agassiz, which had been confined by the glaciers, disrupting the thermohaline circulation of the Atlantic. This disruption was the result of an ice dam over Hudson Bay collapsing and sending cold Lake Agassiz water into the North Atlantic Ocean. Furthermore, studies show that the melting of Lake Agassiz led to sea-level rise that flooded the North American coastal landscape. Basal peat was then used to determine the resulting local sea-level rise of 0.20–0.56 m in the Mississippi Delta. Subsequent research, however, suggested that the discharge was probably superimposed upon a longer episode of cooler climate lasting up to 600 years and observed that the extent of the area affected was unclear. Human developments The beginning of the Holocene corresponds with the beginning of the Mesolithic age in most of Europe. In regions such as the Middle East and Anatolia, the term Epipaleolithic is preferred in place of Mesolithic, as they refer to approximately the same time period. Cultures in this period include Hamburgian, Federmesser, and the Natufian culture, during which the oldest inhabited places still existing on Earth were first settled, such as Tell es-Sultan (Jericho) in the Middle East. There is also evolving archeological evidence of proto-religion at locations such as Göbekli Tepe, as long ago as the 9th millennium BC. The preceding period of the Late Pleistocene had already brought advancements such as the bow and arrow, creating more efficient forms of hunting and replacing spear throwers. In the Holocene, however, the domestication of plants and animals allowed humans to develop villages and towns in centralized locations. Archaeological data shows that between 10,000 and 7,000 BP rapid domestication of plants and animals took place in tropical and subtropical parts of Asia, Africa, and Central America. The development of farming allowed humans to transition away from hunter-gatherer nomadic cultures, which did not establish permanent settlements, to a more sustainable sedentary lifestyle. This form of lifestyle change allowed humans to develop towns and villages in centralized locations, which gave rise to the world known today. It is believed that the domestication of plants and animals began in the early part of the Holocene in the tropical areas of the planet. Because these areas had warm, moist conditions, the climate was well suited to effective farming. Cultural development and human population change, specifically in South America, have also been linked to spikes in hydroclimate variability in the mid-Holocene (8.2–4.2 ka cal BP). Climatic effects on seasonality and available moisture also allowed for favorable agricultural conditions, which promoted human development in the Maya and Tiwanaku regions. 
In the Korean Peninsula, climatic changes fostered a population boom during the Middle Chulmun period from 5,500 to 5,000 BP, but contributed to a subsequent bust during the Late and Final Chulmun periods, from 5,000 to 4,000 BP and from 4,000 to 3,500 BP respectively. Extinction event The Holocene extinction, otherwise referred to as the sixth mass extinction or Anthropocene extinction, is an ongoing extinction event of species during the present Holocene epoch (with the more recent time sometimes called the Anthropocene) as a result of human activity. The included extinctions span numerous families of fungi, plants, and animals, including mammals, birds, reptiles, amphibians, fish and invertebrates. With widespread degradation of highly biodiverse habitats such as coral reefs and rainforests, as well as other areas, the vast majority of these extinctions are thought to be undocumented, either because the species were still undiscovered at the time of their extinction or because their extinction has not yet been recorded. The current rate of extinction of species is estimated at 100 to 1,000 times higher than natural background extinction rates.
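Because the dates above are quoted variously in years BP (counted back from 1950), in calendar years BCE/CE, and in "ka BP", a small conversion helper can make them easier to compare. The Python sketch below assumes the conventional 1950 CE datum noted in the Overview section; the function names are illustrative, and the one-year subtlety of the missing year 0 is ignored, which is harmless for rounded geological dates.

# Minimal sketch: converting between "years BP" (Before Present, datum 1950 CE)
# and approximate calendar years BCE/CE, as used for Holocene dates above.

BP_DATUM_CE = 1950  # "Before Present" is conventionally counted back from 1950 CE

def bp_to_calendar(years_bp):
    """Convert a years-BP value to an approximate BCE/CE label.

    The one-year offset caused by the absence of a year 0 is ignored,
    which is negligible for the rounded geological dates in the text.
    """
    year = BP_DATUM_CE - years_bp
    return f"{year:.0f} CE" if year > 0 else f"{-year:.0f} BCE"

def calendar_to_bp(year_ce):
    """Convert a CE year (use negative values for BCE) to years BP."""
    return BP_DATUM_CE - year_ce

# The ICS places the start of the Holocene 11,700 years before 2000 CE,
# i.e. 11,650 cal years BP:
print(bp_to_calendar(11_650))   # -> "9700 BCE", matching the Overview
print(calendar_to_bp(800))      # 800 CE -> 1150 years BP, as in the Africa section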
Physical sciences
Geological periods
13475
https://en.wikipedia.org/wiki/Harbor
Harbor
A harbor (American English), or harbour (Australian English, British English, Canadian English, Irish English, New Zealand English; see spelling differences), is a sheltered body of water where ships, boats, and barges can be moored. The term harbor is often used interchangeably with port, which is a man-made facility built for loading and unloading vessels and dropping off and picking up passengers. Harbors usually include one or more ports. Alexandria Port in Egypt, meanwhile, is an example of a port with two harbors. Harbors may be natural or artificial. An artificial harbor can have deliberately constructed breakwaters, sea walls, or jetties or they can be constructed by dredging, which requires maintenance by further periodic dredging. An example of an artificial harbor is Long Beach Harbor, California, United States, which was an array of salt marshes and tidal flats too shallow for modern merchant ships before it was first dredged in the early 20th century. In contrast, a natural harbor is surrounded on several sides by land. Examples of natural harbors include Sydney Harbour, New South Wales, Australia, Halifax Harbour in Halifax, Nova Scotia, Canada and Trincomalee Harbour in Sri Lanka. Artificial harbors Artificial harbors are frequently built for use as ports. The oldest artificial harbor known is the Ancient Egyptian site at Wadi al-Jarf, on the Red Sea coast, which is at least 4500 years old (ca. 2600–2550 BCE, reign of King Khufu). The largest artificially created harbor is Jebel Ali in Dubai. Other large and busy artificial harbors include: Port of Houston, Texas, United States Port of Long Beach, California, United States Port of Los Angeles in San Pedro, California, United States Port of Rotterdam, Netherlands Port of Savannah, Georgia, United States The Ancient Carthaginians constructed fortified, artificial harbors called cothons. Natural harbors A natural harbor is a landform where a section of a body of water is protected and deep enough to allow anchorage. Many such harbors are rias. Natural harbors have long been of great strategic naval and economic importance, and many great cities of the world are located on them. Having a protected harbor reduces or eliminates the need for breakwaters as it will result in calmer waves inside the harbor. 
Some examples are: Bali Strait, Indonesia Berehaven Harbour, Ireland Balikpapan Bay in East Kalimantan, Indonesia Mumbai in Maharashtra, India Boston Harbor in Massachusetts, United States Burrard Inlet in Vancouver, British Columbia, Canada Chittagong in Chittagong Division, Bangladesh Cork Harbour, Ireland Grand Harbour, Malta Guantánamo Bay, Cuba Gulf of Paria, Trinidad and Tobago Haifa Bay, in Haifa, Israel Halifax Harbour in Nova Scotia, Canada Hamilton Harbour in Ontario, Canada Killybegs in County Donegal, Ireland Kingston Harbour, Jamaica Mahón harbour, in Menorca, Spain Marsamxett Harbour, Malta Milford Haven in Wales, United Kingdom New York Harbor in the United States Pago Pago Harbor in American Samoa Pearl Harbor in Hawaii, United States Poole Harbour in England, United Kingdom Port Hercules, Monaco Sydney Harbour in New South Wales, Australia, technically a ria Port Stephens in Australia Tanjung Perak in Surabaya, Indonesia Port of Tobruk in Tobruk, Libya Presque Isle Bay in Pennsylvania, United States Prince William Sound in Alaska, United States Puget Sound in Washington state, United States Rías Altas and Rías Baixas in Galicia, Spain Roadstead of Brest in Brittany, France San Francisco Bay in California, United States Scapa Flow in Scotland, United Kingdom Sept-Îles in Côte-Nord, Quebec, Canada Shelburne in Nova Scotia, Canada Subic Bay in Zambales, Philippines Tallinn Bay in Tallinn, Estonia Tampa Bay in Florida, United States Trincomalee Harbour, Sri Lanka Tuticorin in Tamil Nadu, India Victoria Harbour in Hong Kong Visakhapatnam Harbour, India Vizhinjam in Trivandrum, India Waitematā Harbour in Auckland, New Zealand Manukau Harbour in Auckland, New Zealand Wellington Harbour in Wellington, New Zealand Port Foster in Deception Island, Antarctica Ice-free harbors For harbors near the North and South poles, being ice-free is an important advantage, especially when it is year-round. Examples of these are: Hammerfest, Norway Liinakhamari, Russia Murmansk, Russia Nakhodka in Nakhodka Bay, Russia Pechenga, Russia Prince Rupert, Canada Valdez, United States Vardø, Norway Vostochny Port, Russia The world's southernmost harbor, located at Antarctica's Winter Quarters Bay (77° 50′ South), is sometimes ice-free, depending on the summertime pack ice conditions. Important harbors Although the world's busiest port is a contested title, in 2017 the world's busiest harbor by cargo tonnage was the Port of Ningbo-Zhoushan. The following are large natural harbors:
Technology
Coastal infrastructure
13480
https://en.wikipedia.org/wiki/Horseshoe
Horseshoe
A horseshoe is a product designed to protect a horse hoof from wear. Shoes are attached on the palmar surface (ground side) of the hooves, usually nailed through the insensitive hoof wall that is anatomically akin to the human toenail, although much larger and thicker. However, there are also cases where shoes are glued. Horseshoes are available in a wide variety of materials and styles, developed for different types of horses and for the work they do. The most common materials are steel and aluminium, but specialized shoes may include use of rubber, plastic, magnesium, titanium, or copper. Steel tends to be preferred in sports in which a strong, long-wearing shoe is needed, such as polo, eventing, show jumping, and western riding events. Aluminium shoes are lighter, making them common in horse racing where a lighter shoe is desired, and often facilitate certain types of movement; they are often favored in the discipline of dressage. Some horseshoes have "caulkins", "caulks", or "calks": protrusions at the toe or heels of the shoe, or both, to provide additional traction. The fitting of horseshoes is a professional occupation, conducted by a farrier, who specializes in the preparation of feet, assessing potential lameness issues, and fitting appropriate shoes, including remedial features where required. In some countries, such as the UK, horseshoeing is legally restricted to people with specific qualifications and experience. In others, such as the United States, where professional licensing is not legally required, professional organizations provide certification programs that publicly identify qualified individuals. When kept as a talisman, a horseshoe is said to bring good luck. A stylized variation of the horseshoe is used for a popular throwing game, horseshoes. History Since the early history of domestication of the horse, working animals were found to be exposed to many conditions that created breakage or excessive hoof wear. Ancient people recognized the need for the walls (and sometimes the sole) of domestic horses' hooves to have additional protection over and above any natural hardness. An early form of hoof protection was seen in ancient Asia, where horses' hooves were wrapped in rawhide, leather, or other materials for both therapeutic purposes and protection from wear. From archaeological finds in Great Britain, the Romans appeared to have attempted to protect their horses' feet with a strap-on, solid-bottomed "hipposandal" that has a slight resemblance to the modern hoof boot. Historians differ on the origin of the horseshoe. Because iron was a valuable commodity, and any worn out items were generally reforged and reused, it is difficult to locate clear archaeological evidence. Although some credit the Druids, there is no hard evidence to support this claim. In 1897 four bronze horseshoes with what are apparently nail holes were found in an Etruscan tomb dated around 400 BC. The assertion by some historians that the Romans invented the "mule shoes" sometime after 100 BC is supported by a reference by Catullus who died in 54 BC. However, these references to use of horseshoes and muleshoes in Rome may have been to the "hipposandal"—leather boots, reinforced by an iron plate, rather than to nailed horseshoes. Existing references to the nailed shoe are relatively late, first known to have appeared around AD 900, but there may have been earlier uses given that some have been found in layers of dirt. 
There are no extant references to nailed horseshoes prior to the reign of Byzantine Emperor Leo VI, and by 973 occasional references to them can be found. The earliest clear written record of iron horseshoes is a reference to "crescent figured irons and their nails" in AD 910. There is very little evidence of any sort that suggests the existence of nailed-on shoes prior to AD 500 or 600, though there is a find dated to the fifth century AD of a horseshoe, complete with nails, found in the tomb of the Frankish King Childeric I at Tournai, Belgium. Around 1000 AD, cast bronze horseshoes with nail holes became common in Europe. A design with a scalloped outer rim and six nail holes was common. According to Gordon Ward the scalloped edges were created by double punching the nail holes causing the edges to bulge. The 13th and 14th centuries brought the widespread manufacturing of iron horseshoes. By the time of the Crusades (1096–1270), horseshoes were widespread and frequently mentioned in various written sources. In that period, due to the value of iron, horseshoes were even accepted in lieu of coin to pay taxes. By the 13th century, shoes were forged in large quantities and could be bought ready made. Hot shoeing, the process of shaping a heated horseshoe immediately before placing it on the horse, became common in the 16th century. From the need for horseshoes, the craft of blacksmithing became "one of the great staple crafts of medieval and modern times and contributed to the development of metallurgy." A treatise titled "No Foot, No Horse" was published in England in 1751. In 1835, the first U.S. patent for a horseshoe manufacturing machine capable of making up to 60 horseshoes per hour was issued to Henry Burden. In mid-19th-century Canada, marsh horseshoes kept horses from sinking into the soft intertidal mud during dike-building. In a common design, a metal horseshoe holds a flat wooden shoe in place. China In China, iron horseshoes became common during the Yuan dynasty (1271–1368), prior to which rattan and leather shoes were used to preserve animal hooves. Evidence of the preservation of horse hooves in China dates to the Warring States period (476–221 BC), during which Zhuangzi recommended shaving horse hooves to keep them in good shape. The Discourses on Salt and Iron in 81 BC mentions using leather shoes, but it is not clear if they were used for protecting horse hooves or to aid in mounting the horse. Remnants of iron horseshoes have been found in what is now northeast China, but the tombs date to the Goguryeo period in 414 AD. A mural in the Mogao Caves dated to 584 AD depicts a man caring for a horse's hoof, which some speculate might be depicting horseshoe nailing, but the mural is too eroded to tell clearly. The earliest reference to iron horseshoes in China dates to 938 AD during the Five Dynasties and Ten Kingdoms period. A monk named Gao Juhui sent to the Western Regions writes that the people in Ganzhou (now Zhangye) taught him how to make "horse hoof muse", which had four holes in it that connected to four holes in the horse's hoof, and were thus put together. They also recommended using yak skin shoes for camel hooves. Iron horseshoes however did not become common for another three centuries. Zhao Rukuo writes in Zhu Fan Zhi, finished in 1225, that the horses of the Arabs and Persians used metal for horse shoes, implying that horses in China did not. After the establishment of the Yuan dynasty in 1271 AD, iron horseshoes became more common in northern China. 
When Thomas Blakiston travelled up the Yangtze, he noted that in Sichuan "cattle wore straw shoes to prevent their slipping on the wet ground" while in northern China, "horses and cattle are shod with iron shoes and nails." The majority of Chinese horseshoe discoveries have been in Jilin, Heilongjiang, Liaoning, Sichuan, and Tibet. Reasons for use Environmental changes linked to domestication Many changes brought about by the domestication of the horse, such as keeping horses in wetter climates and exercising them less, have led to horses' hooves hardening less and becoming more vulnerable to injury. In the wild, a horse may travel long distances each day to obtain adequate forage. While horses in the wild cover large areas of terrain, they usually do so at relatively slow speeds, unless being chased by a predator. They also tend to live in arid steppe climates. The consequence of slow but nonstop travel in a dry climate is that horses' feet are naturally worn to a small, smooth, even, and hard state. The continual stimulation of the sole of the foot keeps it thick and hard. However, in domestication, the manner in which horses are used is different. Domesticated horses are brought to colder and wetter areas than their ancestral habitat. These softer and heavier soils soften the hooves and make them prone to splitting, thus making hoof protection necessary. Physical stresses requiring horseshoes Abnormal stress: Horses' hooves can become quite worn out when subjected to the added weight and stress of a rider, pack load, cart, or wagon. Corrective shoeing: The shape, weight, and thickness of a horseshoe can significantly affect the horse's gait. Farriers may forge custom shoes to help horses with bone or muscle problems in their legs, or fit commercially available remedial shoes. Traction: Traction devices such as borium for ice, horseshoe studs for muddy or slick conditions, calks, carbide-tipped road nails and rims are useful for performance horses such as eventers, show jumpers, polo ponies, and other horses that perform at high speeds, over changing terrain, or in less-than-ideal footing. Gait manipulation: Some breeds such as the Saddlebred, Tennessee Walking Horse, and other gaited horses are judged on their high-stepping movement. Special shoeing can help enhance their natural movement. Racing horses with a weakness in their foot or leg require specialized horseshoes. Horseshoeing theories and debates Domestic horses do not always require shoes. When possible, a "barefoot" hoof, at least for part of every year, is a healthy option for most horses. However, horseshoes have their place and can help prevent excess or abnormal hoof wear and injury to the foot. Many horses go without shoes year-round, some using temporary protection such as hoof boots for short-term use. Process of shoeing Shoeing, when performed correctly, causes no pain to the animal. Farriers trim the insensitive part of the hoof, which is the same area into which they drive the nails. This is analogous to a manicure on a human fingernail, only on a much larger scale. Before beginning to shoe, the farrier removes the old shoe using pincers (shoe pullers) and trims the hoof wall to the desired length with nippers, a sharp pliers-like tool, and the sole and frog of the hoof with a hoof knife. Shoes do not allow the hoof to wear down as it naturally would in the wild, and it can then become too long. The coffin bone inside the hoof should line up straight with both bones in the pastern. 
If the excess hoof is not trimmed, the bones will become misaligned, which would place stress on the legs of the animal. Shoes are then measured to the foot and bent to the correct shape using a hammer, anvil, forge, and other modifications, such as taps for shoe studs, are added. Farriers may either cold shoe, in which they bend the metal shoe without heating it, or hot shoe, in which they place the metal in a forge before bending it. Hot shoeing can be more time-consuming, and requires the farrier to have access to a forge; however, it usually provides a better fit, as the mark made on the hoof from the hot shoe can show how even it lies. It also allows the farrier to make more modifications to the shoe, such as drawing toe- and quarter-clips. The farrier must take care not to hold the hot shoe against the hoof too long, as the heat can damage the hoof. Hot shoes are placed in water to cool them. The farrier then nails the shoes on by driving the nails into the hoof wall at the white line of the hoof. The nails are shaped in such a way that they bend outward as they are driven in, avoiding the sensitive inner part of the foot, so they emerge on the sides of the hoof. When the nail has been completely driven, the farrier cuts off the sharp points and uses a clincher (a form of tongs made especially for this purpose) or a clinching block with hammer to bend the rest of the nail so it is almost flush with the hoof wall. This prevents the nail from getting caught on anything, and also helps to hold the nail, and therefore the shoe, in place. The farrier then uses a rasp (large file), to smooth the edge where it meets the shoe and eliminate any sharp edges left from cutting off the nails. In culture Superstition Horseshoes have long been considered lucky. They were originally made of iron, a material that was believed to ward off evil spirits, and traditionally were held in place with seven nails, seven being the luckiest number. The superstition acquired a further Christian twist due to a legend surrounding the tenth-century saint Dunstan, who worked as a blacksmith before becoming Archbishop of Canterbury. The legend recounts that, one day, the Devil walked into Dunstan's shop and asked him to shoe his horse. Dunstan pretended not to recognize him, and agreed to the request; but rather than nailing the shoe to the horse's hoof, he nailed it to the Devil's own foot, causing him great pain. Dunstan eventually agreed to remove the shoe, but only after extracting a promise that the Devil would never enter a household with a horseshoe nailed to the door. In the tale of Saint Dunstan, it appears that hanging a horseshoe with the open end facing downward is the most accurate interpretation. This is suggested by a passage from the story: “He will not through Granāda march, For there he knows the horse-shoe arch At every gate attends him. Nor partridges can he digest, Since the dire horse-shoe on the breast, Most grievously offends him.” The mention of the "horse-shoe arch" likely refers to a horseshoe with its open ends facing downward, consistent with the illustrations found throughout the tale. Blacksmiths and Horseshoes also have a connection. Blacksmiths themselves were historically considered lucky and revered for their craft, as they worked with fire and iron, both seen as powerful and protective elements. Their association with luck extended to the horseshoes they forged, which became symbols of protection and good fortune. 
Blacksmiths often hung horseshoes with the ends pointing down, believing this orientation would allow blessings and luck to pour onto their work. Opinion is divided as to which way up the horseshoe ought to be nailed. Some say the ends should point up, so that the horseshoe catches the luck, and that a horseshoe with ends pointing down allows the good luck to be lost; others say the ends should point down, so that the luck is poured upon those entering the home. Superstitious sailors believe that nailing a horseshoe to the mast will help their vessel avoid storms. Heraldry In heraldry, horseshoes most often occur as canting charges, such as in the arms of families with names like Farrier, Marshall, and Smith. A horseshoe (together with two hammers) also appears in the arms of Hammersmith and Fulham, a borough in London. The flag of Rutland, England's smallest historic county, consists of a golden horseshoe laid over a field scattered with acorns. This refers to an ancient tradition in which every noble visiting Oakham, Rutland's county town, presents a horseshoe to the Lord of the Manor, which is then nailed to the wall of Oakham Castle. Over the centuries, the Castle has amassed a vast collection of horseshoes, the oldest of which date from the 15th century. Monuments and structures A massive golden horseshoe structure is erected over the shopping mall of the Tuuri village in Alavus, a town of Finland. It is one of the most famous monuments in the locality; however, it stands at number three in Reuters' list of world's ugliest buildings and monuments. Sport The sport of horseshoes involves a horseshoe being thrown as close as possible to a rod in order to score points. As far as it is known, the sport is as old as horseshoes themselves. While traditional horseshoes can still be used, most organized versions of the game use specialized sport horseshoes, which do not fit on horses' hooves.
Technology
Animal husbandry
13483
https://en.wikipedia.org/wiki/Hemoglobin
Hemoglobin
Hemoglobin (haemoglobin, Hb or Hgb) is a protein containing iron that facilitates the transportation of oxygen in red blood cells. Almost all vertebrates contain hemoglobin, with the sole exception of the fish family Channichthyidae. Hemoglobin in the blood carries oxygen from the respiratory organs (lungs or gills) to the other tissues of the body, where it releases the oxygen to enable aerobic respiration which powers an animal's metabolism. A healthy human has 12 to 20 grams of hemoglobin in every 100 mL of blood. Hemoglobin is a metalloprotein, a chromoprotein, and a globulin. In mammals, hemoglobin makes up about 96% of a red blood cell's dry weight (excluding water), and around 35% of the total weight (including water). Hemoglobin has an oxygen-binding capacity of 1.34 mL of O2 per gram, which increases the total blood oxygen capacity seventy-fold compared to dissolved oxygen in blood plasma alone. The mammalian hemoglobin molecule can bind and transport up to four oxygen molecules. Hemoglobin also transports other gases. It carries off some of the body's respiratory carbon dioxide (about 20–25% of the total) as carbaminohemoglobin, in which CO2 binds to the globin protein. The molecule also carries the important regulatory molecule nitric oxide bound to a thiol group in the globin protein, releasing it at the same time as oxygen. Hemoglobin is also found in other cells, including in the A9 dopaminergic neurons of the substantia nigra, macrophages, alveolar cells, lungs, retinal pigment epithelium, hepatocytes, mesangial cells of the kidney, endometrial cells, cervical cells, and vaginal epithelial cells. In these tissues, hemoglobin absorbs unneeded oxygen as an antioxidant, and regulates iron metabolism. Excessive glucose in the blood can attach to hemoglobin and raise the level of hemoglobin A1c. Hemoglobin and hemoglobin-like molecules are also found in many invertebrates, fungi, and plants. In these organisms, hemoglobins may carry oxygen, or they may transport and regulate other small molecules and ions such as carbon dioxide, nitric oxide, hydrogen sulfide and sulfide. A variant called leghemoglobin serves to scavenge oxygen away from anaerobic systems such as the nitrogen-fixing nodules of leguminous plants, preventing oxygen poisoning. The medical condition hemoglobinemia, a form of anemia, is caused by intravascular hemolysis, in which hemoglobin leaks from red blood cells into the blood plasma. Research history In 1825, Johann Friedrich Engelhart discovered that the ratio of iron to protein is identical in the hemoglobins of several species. From the known atomic mass of iron, he calculated the molecular mass of hemoglobin to be n × 16000 (n = number of iron atoms per hemoglobin molecule, now known to be 4), the first determination of a protein's molecular mass. This "hasty conclusion" drew ridicule from colleagues who could not believe that any molecule could be so large. However, Gilbert Smithson Adair confirmed Engelhart's results in 1925 by measuring the osmotic pressure of hemoglobin solutions. Although blood had been known to carry oxygen since at least 1794, the oxygen-carrying property of hemoglobin was described by Hünefeld in 1840. In 1851, German physiologist Otto Funke published a series of articles in which he described growing hemoglobin crystals by successively diluting red blood cells with a solvent such as pure water, alcohol or ether, followed by slow evaporation of the solvent from the resulting protein solution. 
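A little arithmetic makes the figures above concrete: the quoted oxygen-binding capacity of 1.34 mL of O2 per gram, combined with a typical hemoglobin concentration, reproduces the roughly seventy-fold advantage over dissolved oxygen, and Engelhart's iron-based reasoning yields the n × 16000 subunit mass. The Python sketch below works through both calculations; the plasma oxygen solubility coefficient, the arterial oxygen tension, and the iron mass fraction of hemoglobin are standard physiology textbook values used here as assumptions, not figures taken from this article.

# Minimal sketch of two calculations referred to above.
# Assumed constants (standard textbook values, not from this article):
#   - dissolved O2 in plasma: ~0.003 mL O2 per dL per mmHg of arterial PO2
#   - iron content of hemoglobin: ~0.34% by mass
#   - atomic mass of iron: 55.85 g/mol

HUFNER_CONSTANT = 1.34        # mL O2 bound per gram of hemoglobin (quoted above)
PLASMA_O2_SOLUBILITY = 0.003  # mL O2 / dL / mmHg (assumed)
IRON_MASS_FRACTION = 0.0034   # g Fe per g hemoglobin (assumed)
FE_ATOMIC_MASS = 55.85        # g/mol

def bound_o2_per_dl(hb_g_per_dl):
    """O2 carried by hemoglobin per dL of blood, assuming full saturation."""
    return HUFNER_CONSTANT * hb_g_per_dl

def dissolved_o2_per_dl(po2_mmhg):
    """O2 physically dissolved in plasma per dL of blood."""
    return PLASMA_O2_SOLUBILITY * po2_mmhg

hb = 15.0     # g/dL, mid-range of the 12 to 20 g per 100 mL quoted above
po2 = 100.0   # mmHg, a typical arterial oxygen tension (assumed)
bound = bound_o2_per_dl(hb)            # about 20 mL O2 per dL
dissolved = dissolved_o2_per_dl(po2)   # about 0.3 mL O2 per dL
print(f"bound {bound:.1f} vs dissolved {dissolved:.2f} mL O2/dL "
      f"(~{bound / dissolved:.0f}-fold)")   # roughly the seventy-fold figure

# Engelhart-style estimate: minimum molecular mass per iron atom
print(f"mass per Fe atom: ~{FE_ATOMIC_MASS / IRON_MASS_FRACTION:.0f} g/mol")
# about 16,000 g/mol per iron, i.e. n x 16000 for n iron atoms (n = 4)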
Hemoglobin's reversible oxygenation was described a few years later by Felix Hoppe-Seyler. With the development of X-ray crystallography, it became possible to sequence protein structures. In 1959, Max Perutz determined the molecular structure of hemoglobin. For this work he shared the 1962 Nobel Prize in Chemistry with John Kendrew, who sequenced the globular protein myoglobin. The role of hemoglobin in the blood was elucidated by French physiologist Claude Bernard. The name hemoglobin (or haemoglobin) is derived from the words heme (or haem) and globin, reflecting the fact that each subunit of hemoglobin is a globular protein with an embedded heme group. Each heme group contains one iron atom, that can bind one oxygen molecule through ion-induced dipole forces. The most common type of hemoglobin in mammals contains four such subunits. Genetics Hemoglobin consists of protein subunits (globin molecules), which are polypeptides, long folded chains of specific amino acids which determine the protein's chemical properties and function. The amino acid sequence of any polypeptide is translated from a segment of DNA, the corresponding gene. There is more than one hemoglobin gene. In humans, hemoglobin A (the main form of hemoglobin in adults) is coded by genes HBA1, HBA2, and HBB. Alpha 1 and alpha 2 subunits are respectively coded by genes HBA1 and HBA2 close together on chromosome 16, while the beta subunit is coded by gene HBB on chromosome 11. The amino acid sequences of the globin subunits usually differ between species, with the difference growing with evolutionary distance. For example, the most common hemoglobin sequences in humans, bonobos and chimpanzees are completely identical, with exactly the same alpha and beta globin protein chains. Human and gorilla hemoglobin differ in one amino acid in both alpha and beta chains, and these differences grow larger between less closely related species. Mutations in the genes for hemoglobin can result in variants of hemoglobin within a single species, although one sequence is usually "most common" in each species. Many of these mutations cause no disease, but some cause a group of hereditary diseases called hemoglobinopathies. The best known hemoglobinopathy is sickle-cell disease, which was the first human disease whose mechanism was understood at the molecular level. A mostly separate set of diseases called thalassemias involves underproduction of normal and sometimes abnormal hemoglobins, through problems and mutations in globin gene regulation. All these diseases produce anemia. Variations in hemoglobin sequences, as with other proteins, may be adaptive. For example, hemoglobin has been found to adapt in different ways to the thin air at high altitudes, where lower partial pressure of oxygen diminishes its binding to hemoglobin compared to the higher pressures at sea level. Recent studies of deer mice found mutations in four genes that can account for differences between high- and low-elevation populations. It was found that the genes of the two breeds are "virtually identical—except for those that govern the oxygen-carrying capacity of their hemoglobin. . . . The genetic difference enables highland mice to make more efficient use of their oxygen." Mammoth hemoglobin featured mutations that allowed for oxygen delivery at lower temperatures, thus enabling mammoths to migrate to higher latitudes during the Pleistocene. This was also found in hummingbirds that inhabit the Andes. 
Hummingbirds already expend a lot of energy and thus have high oxygen demands and yet Andean hummingbirds have been found to thrive in high altitudes. Non-synonymous mutations in the hemoglobin gene of multiple species living at high elevations (Oreotrochilus, A. castelnaudii, C. violifer, P. gigas, and A. viridicuada) have caused the protein to have less of an affinity for inositol hexaphosphate (IHP), a molecule found in birds that has a similar role as 2,3-BPG in humans; this results in the ability to bind oxygen in lower partial pressures. Birds' unique circulatory lungs also promote efficient use of oxygen at low partial pressures of O2. These two adaptations reinforce each other and account for birds' remarkable high-altitude performance. Hemoglobin adaptation extends to humans, as well. There is a higher offspring survival rate among Tibetan women with high oxygen saturation genotypes residing at 4,000 m. Natural selection seems to be the main force working on this gene because the mortality rate of offspring is significantly lower for women with higher hemoglobin-oxygen affinity when compared to the mortality rate of offspring from women with low hemoglobin-oxygen affinity. While the exact genotype and mechanism by which this occurs is not yet clear, selection is acting on these women's ability to bind oxygen in low partial pressures, which overall allows them to better sustain crucial metabolic processes. Synthesis Hemoglobin (Hb) is synthesized in a complex series of steps. The heme part is synthesized in a series of steps in the mitochondria and the cytosol of immature red blood cells, while the globin protein parts are synthesized by ribosomes in the cytosol. Production of Hb continues in the cell throughout its early development from the proerythroblast to the reticulocyte in the bone marrow. At this point, the nucleus is lost in mammalian red blood cells, but not in birds and many other species. Even after the loss of the nucleus in mammals, residual ribosomal RNA allows further synthesis of Hb until the reticulocyte loses its RNA soon after entering the vasculature (this hemoglobin-synthetic RNA in fact gives the reticulocyte its reticulated appearance and name). Structure of heme Hemoglobin has a quaternary structure characteristic of many multi-subunit globular proteins. Most of the amino acids in hemoglobin form alpha helices, and these helices are connected by short non-helical segments. Hydrogen bonds stabilize the helical sections inside this protein, causing attractions within the molecule, which then causes each polypeptide chain to fold into a specific shape. Hemoglobin's quaternary structure comes from its four subunits in roughly a tetrahedral arrangement. In most vertebrates, the hemoglobin molecule is an assembly of four globular protein subunits. Each subunit is composed of a protein chain tightly associated with a non-protein prosthetic heme group. Each protein chain arranges into a set of alpha-helix structural segments connected together in a globin fold arrangement. Such a name is given because this arrangement is the same folding motif used in other heme/globin proteins such as myoglobin. This folding pattern contains a pocket that strongly binds the heme group. A heme group consists of an iron (Fe) ion held in a heterocyclic ring, known as a porphyrin. This porphyrin ring consists of four pyrrole molecules cyclically linked together (by methine bridges) with the iron ion bound in the center. 
The iron ion, which is the site of oxygen binding, coordinates with the four nitrogen atoms in the center of the ring, which all lie in one plane. The heme is bound strongly (covalently) to the globular protein via the N atoms of the imidazole ring of F8 histidine residue (also known as the proximal histidine) below the porphyrin ring. A sixth position can reversibly bind oxygen by a coordinate covalent bond, completing the octahedral group of six ligands. This reversible bonding with oxygen is why hemoglobin is so useful for transporting oxygen around the body. Oxygen binds in an "end-on bent" geometry where one oxygen atom binds to Fe and the other protrudes at an angle. When oxygen is not bound, a very weakly bonded water molecule fills the site, forming a distorted octahedron. Even though carbon dioxide is carried by hemoglobin, it does not compete with oxygen for the iron-binding positions but is bound to the amine groups of the protein chains attached to the heme groups. The iron ion may be either in the ferrous Fe2+ or in the ferric Fe3+ state, but ferrihemoglobin (methemoglobin) (Fe3+) cannot bind oxygen. In binding, oxygen temporarily and reversibly oxidizes (Fe2+) to (Fe3+) while oxygen temporarily turns into the superoxide ion, thus iron must exist in the +2 oxidation state to bind oxygen. If superoxide ion associated to Fe3+ is protonated, the hemoglobin iron will remain oxidized and incapable of binding oxygen. In such cases, the enzyme methemoglobin reductase will be able to eventually reactivate methemoglobin by reducing the iron center. In adult humans, the most common hemoglobin type is a tetramer (which contains four subunit proteins) called hemoglobin A, consisting of two α and two β subunits non-covalently bound, each made of 141 and 146 amino acid residues, respectively. This is denoted as α2β2. The subunits are structurally similar and about the same size. Each subunit has a molecular weight of about 16,000 daltons, for a total molecular weight of the tetramer of about 64,000 daltons (64,458 g/mol). Thus, 1 g/dL=0.1551 mmol/L. Hemoglobin A is the most intensively studied of the hemoglobin molecules. In human infants, the fetal hemoglobin molecule is made up of 2 α chains and 2 γ chains. The γ chains are gradually replaced by β chains as the infant grows. The four polypeptide chains are bound to each other by salt bridges, hydrogen bonds, and the hydrophobic effect. Oxygen saturation In general, hemoglobin can be saturated with oxygen molecules (oxyhemoglobin), or desaturated with oxygen molecules (deoxyhemoglobin). Oxyhemoglobin Oxyhemoglobin is formed during physiological respiration when oxygen binds to the heme component of the protein hemoglobin in red blood cells. This process occurs in the pulmonary capillaries adjacent to the alveoli of the lungs. The oxygen then travels through the blood stream to be dropped off at cells where it is utilized as a terminal electron acceptor in the production of ATP by the process of oxidative phosphorylation. It does not, however, help to counteract a decrease in blood pH. Ventilation, or breathing, may reverse this condition by removal of carbon dioxide, thus causing a shift up in pH. Hemoglobin exists in two forms, a taut (tense) form (T) and a relaxed form (R). Various factors such as low pH, high CO2 and high 2,3 BPG at the level of the tissues favor the taut form, which has low oxygen affinity and releases oxygen in the tissues. 
Conversely, a high pH, low CO2, or low 2,3 BPG favors the relaxed form, which can better bind oxygen. The partial pressure of the system also affects O2 affinity where, at high partial pressures of oxygen (such as those present in the alveoli), the relaxed (high affinity, R) state is favoured. Inversely, at low partial pressures (such as those present in respiring tissues), the (low affinity, T) tense state is favoured. Additionally, the binding of oxygen to the iron(II) heme pulls the iron into the plane of the porphyrin ring, causing a slight conformational shift. The shift encourages oxygen to bind to the three remaining heme units within hemoglobin (thus, oxygen binding is cooperative). Classically, the iron in oxyhemoglobin is seen as existing in the iron(II) oxidation state. However, the complex of oxygen with heme iron is diamagnetic, whereas both oxygen and high-spin iron(II) are paramagnetic. Experimental evidence strongly suggests heme iron is in the iron(III) oxidation state in oxyhemoglobin, with the oxygen existing as superoxide anion (O2•−) or in a covalent charge-transfer complex. Deoxygenated hemoglobin Deoxygenated hemoglobin (deoxyhemoglobin) is the form of hemoglobin without the bound oxygen. The absorption spectra of oxyhemoglobin and deoxyhemoglobin differ. The oxyhemoglobin has significantly lower absorption of the 660 nm wavelength than deoxyhemoglobin, while at 940 nm its absorption is slightly higher. This difference is used for the measurement of the amount of oxygen in a patient's blood by an instrument called a pulse oximeter. This difference also accounts for the presentation of cyanosis, the blue to purplish color that tissues develop during hypoxia. Deoxygenated hemoglobin is paramagnetic; it is weakly attracted to magnetic fields. In contrast, oxygenated hemoglobin exhibits diamagnetism, a weak repulsion from a magnetic field. Evolution of vertebrate hemoglobin Scientists agree that the event that separated myoglobin from hemoglobin occurred after lampreys diverged from jawed vertebrates. This separation of myoglobin and hemoglobin allowed for the different functions of the two molecules to arise and develop: myoglobin has more to do with oxygen storage while hemoglobin is tasked with oxygen transport. The α- and β-like globin genes encode the individual subunits of the protein. The predecessors of these genes arose through another duplication event also after the gnathosome common ancestor derived from jawless fish, approximately 450–500 million years ago. Ancestral reconstruction studies suggest that the preduplication ancestor of the α and β genes was a dimer made up of identical globin subunits, which then evolved to assemble into a tetrameric architecture after the duplication. The development of α and β genes created the potential for hemoglobin to be composed of multiple distinct subunits, a physical composition central to hemoglobin's ability to transport oxygen. Having multiple subunits contributes to hemoglobin's ability to bind oxygen cooperatively as well as be regulated allosterically. Subsequently, the α gene also underwent a duplication event to form the HBA1 and HBA2 genes. These further duplications and divergences have created a diverse range of α- and β-like globin genes that are regulated so that certain forms occur at different stages of development. Most ice fish of the family Channichthyidae have lost their hemoglobin genes as an adaptation to cold water. 
Cooperativity When oxygen binds to the iron complex, it causes the iron atom to move back toward the center of the plane of the porphyrin ring (see moving diagram). At the same time, the imidazole side-chain of the histidine residue interacting at the other pole of the iron is pulled toward the porphyrin ring. This interaction forces the plane of the ring sideways toward the outside of the tetramer, and also induces a strain in the protein helix containing the histidine as it moves nearer to the iron atom. This strain is transmitted to the remaining three monomers in the tetramer, where it induces a similar conformational change in the other heme sites such that binding of oxygen to these sites becomes easier. As oxygen binds to one monomer of hemoglobin, the tetramer's conformation shifts from the T (tense) state to the R (relaxed) state. This shift promotes the binding of oxygen to the remaining three monomers' heme groups, thus saturating the hemoglobin molecule with oxygen. In the tetrameric form of normal adult hemoglobin, the binding of oxygen is, thus, a cooperative process. The binding affinity of hemoglobin for oxygen is increased by the oxygen saturation of the molecule, with the first molecules of oxygen bound influencing the shape of the binding sites for the next ones, in a way favorable for binding. This positive cooperative binding is achieved through steric conformational changes of the hemoglobin protein complex as discussed above; i.e., when one subunit protein in hemoglobin becomes oxygenated, a conformational or structural change in the whole complex is initiated, causing the other subunits to gain an increased affinity for oxygen. As a consequence, the oxygen binding curve of hemoglobin is sigmoidal, or S-shaped, as opposed to the normal hyperbolic curve associated with noncooperative binding. The dynamic mechanism of the cooperativity in hemoglobin and its relation with low-frequency resonance has been discussed. Binding of ligands other than oxygen Besides the oxygen ligand, which binds to hemoglobin in a cooperative manner, hemoglobin ligands also include competitive inhibitors such as carbon monoxide (CO) and allosteric ligands such as carbon dioxide (CO2) and nitric oxide (NO). The carbon dioxide is bound to amino groups of the globin proteins to form carbaminohemoglobin; this mechanism is thought to account for about 10% of carbon dioxide transport in mammals. Nitric oxide can also be transported by hemoglobin; it is bound to specific thiol groups in the globin protein to form an S-nitrosothiol, which dissociates into free nitric oxide and thiol again, as the hemoglobin releases oxygen from its heme site. This nitric oxide transport to peripheral tissues is hypothesized to assist oxygen transport in tissues, by releasing vasodilatory nitric oxide to tissues in which oxygen levels are low. Competitive The binding of oxygen is affected by molecules such as carbon monoxide (for example, from tobacco smoking, exhaust gas, and incomplete combustion in furnaces). CO competes with oxygen at the heme binding site. Hemoglobin's binding affinity for CO is 250 times greater than its affinity for oxygen. Since carbon monoxide is a colorless, odorless and tasteless gas that poses a potentially fatal threat, carbon monoxide detectors have become commercially available to warn of dangerous levels in residences. 
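The practical consequence of this affinity difference can be sketched with the Haldane relation, which states that the ratio of carboxyhemoglobin to oxyhemoglobin is roughly proportional to the ratio of the partial pressures of CO and O2, scaled by the relative affinity. The short Python sketch below is illustrative only: the affinity constant M, the assumed alveolar oxygen pressure, and the assumption that full equilibrium is reached are simplifications, not figures taken from this article.

# Sketch: equilibrium carboxyhemoglobin fraction from the Haldane relation,
# [HbCO]/[HbO2] ~= M * (pCO / pO2). M ~ 250 matches the relative affinity
# quoted above; an alveolar pO2 of ~100 mmHg and full equilibration are
# assumed for illustration, so these are rough estimates, not clinical values.
M = 250.0                # relative affinity of hemoglobin for CO versus O2 (assumed)
P_ATM = 760.0            # atmospheric pressure in mmHg
P_O2_ALVEOLAR = 100.0    # typical alveolar oxygen partial pressure in mmHg (assumed)

def equilibrium_hbco_fraction(co_percent_inspired):
    """Fraction of hemoglobin bound to CO at equilibrium for a given
    inspired CO concentration (volume percent)."""
    p_co = P_ATM * co_percent_inspired / 100.0    # partial pressure of CO in mmHg
    ratio = M * p_co / P_O2_ALVEOLAR              # [HbCO] / [HbO2]
    return ratio / (1.0 + ratio)

for co in (0.02, 0.1):
    print(f"{co}% CO in inspired air -> ~{equilibrium_hbco_fraction(co):.0%} HbCO at equilibrium")

Run as written, the sketch gives roughly 28% and 66% carboxyhemoglobin for 0.02% and 0.1% inspired CO, figures broadly in line with the symptom thresholds described below.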
When hemoglobin combines with CO, it forms a very bright red compound called carboxyhemoglobin, which may cause the skin of CO poisoning victims to appear pink in death, instead of white or blue. When inspired air contains CO levels as low as 0.02%, headache and nausea occur; if the CO concentration is increased to 0.1%, unconsciousness will follow. In heavy smokers, up to 20% of the oxygen-active sites can be blocked by CO. In similar fashion, hemoglobin also has competitive binding affinity for cyanide (CN−), sulfur monoxide (SO), and sulfide (S2−), including hydrogen sulfide (H2S). All of these bind to iron in heme without changing its oxidation state, but they nevertheless inhibit oxygen-binding, causing grave toxicity. The iron atom in the heme group must initially be in the ferrous (Fe2+) oxidation state to support oxygen and other gases' binding and transport (it temporarily switches to ferric during the time oxygen is bound, as explained above). Initial oxidation to the ferric (Fe3+) state without oxygen converts hemoglobin into "hemiglobin" or methemoglobin, which cannot bind oxygen. Hemoglobin in normal red blood cells is protected by a reduction system to keep this from happening. Nitric oxide is capable of converting a small fraction of hemoglobin to methemoglobin in red blood cells. The latter reaction is a remnant activity of the more ancient nitric oxide dioxygenase function of globins. Allosteric Carbon dioxide occupies a different binding site on hemoglobin. At tissues, where carbon dioxide concentration is higher, carbon dioxide binds to an allosteric site on hemoglobin, facilitating unloading of oxygen from hemoglobin and ultimately its removal from the body after the oxygen has been released to tissues undergoing metabolism. This increased affinity for carbon dioxide by the venous blood is known as the Haldane effect. Through the enzyme carbonic anhydrase, carbon dioxide reacts with water to give carbonic acid, which decomposes into bicarbonate and protons: CO2 + H2O → H2CO3 → HCO3− + H+ Hence, blood with high carbon dioxide levels is also lower in pH (more acidic). Hemoglobin can bind protons and carbon dioxide, which causes a conformational change in the protein and facilitates the release of oxygen. Protons bind at various places on the protein, while carbon dioxide binds at the α-amino group. Carbon dioxide binds to hemoglobin and forms carbaminohemoglobin. This decrease in hemoglobin's affinity for oxygen by the binding of carbon dioxide and acid is known as the Bohr effect. The Bohr effect favors the T state over the R state (it shifts the O2-saturation curve to the right). Conversely, when the carbon dioxide levels in the blood decrease (i.e., in the lung capillaries), carbon dioxide and protons are released from hemoglobin, increasing the oxygen affinity of the protein. A reduction in the total binding capacity of hemoglobin to oxygen (i.e. shifting the curve down, not just to the right) due to reduced pH is called the Root effect. This is seen in bony fish. It is necessary for hemoglobin to release the oxygen that it binds; if not, there is no point in binding it. The sigmoidal curve of hemoglobin makes it efficient in binding (taking up O2 in lungs), and efficient in unloading (unloading O2 in tissues). In people acclimated to high altitudes, the concentration of 2,3-bisphosphoglycerate (2,3-BPG) in the blood is increased, which allows these individuals to deliver a larger amount of oxygen to tissues under conditions of lower oxygen tension. 
This phenomenon, where molecule Y affects the binding of molecule X to a transport molecule Z, is called a heterotropic allosteric effect. Hemoglobin in organisms at high altitudes has also adapted such that it has less of an affinity for 2,3-BPG and so the protein will be shifted more towards its R state. In its R state, hemoglobin will bind oxygen more readily, thus allowing organisms to perform the necessary metabolic processes when oxygen is present at low partial pressures. Animals other than humans use different molecules to bind to hemoglobin and change its O2 affinity under unfavorable conditions. Fish use both ATP and GTP. These bind to a phosphate "pocket" on the fish hemoglobin molecule, which stabilizes the tense state and therefore decreases oxygen affinity. GTP reduces hemoglobin oxygen affinity much more than ATP, which is thought to be due to an extra hydrogen bond formed that further stabilizes the tense state. Under hypoxic conditions, the concentration of both ATP and GTP is reduced in fish red blood cells to increase oxygen affinity. A variant hemoglobin, called fetal hemoglobin (HbF, α2γ2), is found in the developing fetus, and binds oxygen with greater affinity than adult hemoglobin. This means that the oxygen binding curve for fetal hemoglobin is left-shifted (i.e., a higher percentage of hemoglobin has oxygen bound to it at lower oxygen tension), in comparison to that of adult hemoglobin. As a result, fetal blood in the placenta is able to take oxygen from maternal blood. Hemoglobin also carries nitric oxide (NO) in the globin part of the molecule. This improves oxygen delivery in the periphery and contributes to the control of respiration. NO binds reversibly to a specific cysteine residue in globin; the binding depends on the state (R or T) of the hemoglobin. The resulting S-nitrosylated hemoglobin influences various NO-related activities such as the control of vascular resistance, blood pressure and respiration. NO is not released in the cytoplasm of red blood cells but transported out of them by an anion exchanger called AE1. Types of hemoglobin in humans Hemoglobin variants are a part of normal embryonic and fetal development. They may also be pathologic mutant forms of hemoglobin in a population, caused by variations in genetics. Some well-known hemoglobin variants, such as sickle-cell anemia, are responsible for diseases and are considered hemoglobinopathies. Other variants cause no detectable pathology, and are thus considered non-pathological variants. In embryos: Gower 1 (ζ2ε2). Gower 2 (α2ε2). Hemoglobin Portland I (ζ2γ2). Hemoglobin Portland II (ζ2β2). In fetuses: Hemoglobin F (α2γ2). In neonates (newborns immediately after birth): Hemoglobin A (adult hemoglobin) (α2β2) – The most common, with a normal amount of over 95%. Hemoglobin A2 (α2δ2) – δ chain synthesis begins late in the third trimester and, in adults, it has a normal range of 1.5–3.5%. Hemoglobin F (fetal hemoglobin) (α2γ2) – In adults Hemoglobin F is restricted to a limited population of red cells called F-cells. However, the level of Hb F can be elevated in persons with sickle-cell disease and beta-thalassemia. Abnormal forms that occur in diseases: Hemoglobin D (α2βD2) – A variant form of hemoglobin. Hemoglobin H (β4) – A variant form of hemoglobin, formed by a tetramer of β chains, which may be present in variants of α thalassemia. Hemoglobin Barts (γ4) – A variant form of hemoglobin, formed by a tetramer of γ chains, which may be present in variants of α thalassemia. 
Hemoglobin S (α2βS2) – A variant form of hemoglobin found in people with sickle cell disease. There is a variation in the β-chain gene, causing a change in the properties of hemoglobin, which results in sickling of red blood cells. Hemoglobin C (α2βC2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia. Hemoglobin E (α2βE2) – Another variant due to a variation in the β-chain gene. This variant causes a mild chronic hemolytic anemia. Hemoglobin AS – A heterozygous form causing sickle cell trait, with one adult gene and one sickle cell disease gene. Hemoglobin SC disease – A compound heterozygous form with one sickle gene and another encoding hemoglobin C. Hemoglobin Hopkins-2 – A variant form of hemoglobin that is sometimes viewed in combination with hemoglobin S to produce sickle cell disease. Degradation in vertebrate animals When red blood cells reach the end of their life due to aging or defects, they are removed from the circulation by the phagocytic activity of macrophages in the spleen or the liver, or hemolyze within the circulation. Free hemoglobin is then cleared from the circulation via the hemoglobin transporter CD163, which is exclusively expressed on monocytes and macrophages. Within these cells the hemoglobin molecule is broken up, and the iron gets recycled. This process also produces one molecule of carbon monoxide for every molecule of heme degraded. Heme degradation is the only natural source of carbon monoxide in the human body, and is responsible for the normal blood levels of carbon monoxide in people breathing normal air. The other major final product of heme degradation is bilirubin. Increased levels of this chemical are detected in the blood if red blood cells are being destroyed more rapidly than usual. Improperly degraded hemoglobin protein or hemoglobin that has been released from the blood cells too rapidly can clog small blood vessels, especially the delicate blood filtering vessels of the kidneys, causing kidney damage. Iron is removed from heme and salvaged for later use; it is stored as hemosiderin or ferritin in tissues and transported in plasma by the beta globulin transferrin. When the porphyrin ring is broken up, the fragments are normally secreted as a yellow pigment called bilirubin, which is secreted into the intestines as bile. The intestines metabolize bilirubin into urobilinogen. Urobilinogen leaves the body in feces, in a pigment called stercobilin. Globin is metabolized into amino acids that are then released into circulation. Diseases related to hemoglobin Hemoglobin deficiency can be caused either by a decreased amount of hemoglobin molecules, as in anemia, or by decreased ability of each molecule to bind oxygen at the same partial pressure of oxygen. Hemoglobinopathies (genetic defects resulting in abnormal structure of the hemoglobin molecule) may cause both. In any case, hemoglobin deficiency decreases blood oxygen-carrying capacity. Hemoglobin deficiency is, in general, strictly distinguished from hypoxemia, defined as decreased partial pressure of oxygen in blood, although both are causes of hypoxia (insufficient oxygen supply to tissues). Other common causes of low hemoglobin include loss of blood, nutritional deficiency, bone marrow problems, chemotherapy, kidney failure, or abnormal hemoglobin (such as that of sickle-cell disease). 
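A hemoglobin deficiency lowers the amount of oxygen the blood can carry; a separate question is how readily the hemoglobin that is present loads and unloads oxygen, which is summarized by the sigmoidal dissociation curve described in the cooperativity section above. A minimal Python sketch of that curve uses the Hill equation; the Hill coefficient (n of about 2.8) and the half-saturation pressure (P50 of about 26 mmHg) used here are typical textbook values for adult hemoglobin, assumed for illustration rather than taken from this article.

# Sketch: fractional oxygen saturation from the Hill equation,
# Y = pO2^n / (P50^n + pO2^n). n and P50 are assumed textbook values;
# setting n = 1 reproduces the hyperbolic, noncooperative curve
# (as for myoglobin) for comparison.
P50 = 26.0      # pO2 at half-saturation, mmHg (assumed)
N_HILL = 2.8    # Hill coefficient reflecting cooperativity (assumed)

def saturation(p_o2, n=N_HILL, p50=P50):
    return p_o2 ** n / (p50 ** n + p_o2 ** n)

for p_o2 in (26, 40, 100):   # half-saturation point, venous-like, and alveolar-like pO2 in mmHg
    print(f"pO2 {p_o2:>3} mmHg: cooperative ~{saturation(p_o2):.0%}, "
          f"noncooperative ~{saturation(p_o2, n=1.0):.0%}")

The cooperative curve stays near full saturation at lung-like oxygen pressures yet gives up much more oxygen at tissue-like pressures than a noncooperative binder would, which is the loading and unloading efficiency described earlier.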
The ability of each hemoglobin molecule to carry oxygen is normally modified by altered blood pH or CO2, causing an altered oxygen–hemoglobin dissociation curve. However, it can also be pathologically altered in, e.g., carbon monoxide poisoning. Decrease of hemoglobin, with or without an absolute decrease of red blood cells, leads to symptoms of anemia. Anemia has many different causes, although iron deficiency and its resultant iron deficiency anemia are the most common causes in the Western world. As a lack of iron decreases heme synthesis, red blood cells in iron deficiency anemia are hypochromic (lacking the red hemoglobin pigment) and microcytic (smaller than normal). Other anemias are rarer. In hemolysis (accelerated breakdown of red blood cells), associated jaundice is caused by the hemoglobin metabolite bilirubin, and the circulating hemoglobin can cause kidney failure. Some mutations in the globin chain are associated with the hemoglobinopathies, such as sickle-cell disease and thalassemia. Other mutations, as discussed at the beginning of the article, are benign and are referred to merely as hemoglobin variants. There is a group of genetic disorders, known as the porphyrias, that are characterized by errors in the metabolic pathways of heme synthesis. King George III of the United Kingdom was probably the most famous porphyria sufferer. To a small extent, hemoglobin A slowly combines with glucose at the terminal valine (an alpha amino acid) of each β chain. The resulting molecule is often referred to as Hb A1c, a glycated hemoglobin. The binding of glucose to amino acids in the hemoglobin takes place spontaneously (without the help of an enzyme) in many proteins, and is not known to serve a useful purpose. However, as the concentration of glucose in the blood increases, the percentage of Hb A that turns into Hb A1c increases. In diabetics whose glucose usually runs high, the percent Hb A1c also runs high. Because of the slow rate of Hb A combination with glucose, the Hb A1c percentage reflects a weighted average of blood glucose levels over the lifetime of red cells, which is approximately 120 days. The levels of glycated hemoglobin are therefore measured in order to monitor the long-term control of the chronic disease of type 2 diabetes mellitus (T2DM). Poor control of T2DM results in high levels of glycated hemoglobin in the red blood cells. The normal reference range is approximately 4.0–5.9%. Though difficult to obtain, values less than 7% are recommended for people with T2DM. Levels greater than 9% are associated with poor long-term control of blood glucose, and levels greater than 12% are associated with very poor control. Diabetics who keep their glycated hemoglobin levels close to 7% have a much better chance of avoiding the complications that may accompany diabetes than those whose levels are 8% or higher. In addition, increased glycation of hemoglobin increases its affinity for oxygen, thereby preventing its release in the tissues and inducing a degree of hypoxia in extreme cases. Elevated levels of hemoglobin are associated with increased numbers or sizes of red blood cells, called polycythemia. This elevation may be caused by congenital heart disease, cor pulmonale, pulmonary fibrosis, too much erythropoietin, or polycythemia vera. High hemoglobin levels may also be caused by exposure to high altitudes, smoking, dehydration (which raises measured hemoglobin artificially by concentrating it), advanced lung disease and certain tumors. 
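The glycated-hemoglobin control bands quoted above can be collected into a small helper for interpreting a reported Hb A1c percentage. The band boundaries follow the figures in the text; the band names and the handling of values exactly on a boundary are illustrative choices rather than a clinical decision rule.

# Sketch: banding a glycated hemoglobin (Hb A1c) result using the cutoffs
# quoted above (normal reference ~4.0-5.9%, <7% recommended in T2DM,
# >9% poor control, >12% very poor control). Band names and boundary
# handling are illustrative, not a clinical algorithm.
def classify_hba1c(percent):
    if percent < 4.0:
        return "below the usual reference range"
    if percent <= 5.9:
        return "within the normal reference range (about 4.0-5.9%)"
    if percent < 7.0:
        return "above normal but within the level recommended for people with T2DM"
    if percent <= 9.0:
        return "elevated (above the roughly 7% target)"
    if percent <= 12.0:
        return "associated with poor long-term glucose control"
    return "associated with very poor long-term glucose control"

for value in (5.2, 6.8, 8.5, 10.4, 13.1):
    print(f"Hb A1c {value}% -> {classify_hba1c(value)}")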
Diagnostic uses Hemoglobin concentration measurement is among the most commonly performed blood tests, usually as part of a complete blood count. For example, it is typically tested before or after blood donation. Results are reported in g/L, g/dL or mol/L. 1 g/dL equals about 0.6206 mmol/L, although the latter units are not used as often due to uncertainty regarding the polymeric state of the molecule. This conversion factor, using the single globin unit molecular weight of 16,000 Da, is more common for hemoglobin concentration in blood. For MCHC (mean corpuscular hemoglobin concentration) the conversion factor 0.155, which uses the tetramer weight of 64,500 Da, is more common. Normal levels are: Men: 13.8 to 18.0 g/dL (138 to 180 g/L, or 8.56 to 11.17 mmol/L) Women: 12.1 to 15.1 g/dL (121 to 151 g/L, or 7.51 to 9.37 mmol/L) Children: 11 to 16 g/dL (110 to 160 g/L, or 6.83 to 9.93 mmol/L) Pregnant women: 11 to 14 g/dL (110 to 140 g/L, or 6.83 to 8.69 mmol/L) (9.5 to 15 usual value during pregnancy) Normal values of hemoglobin in the 1st and 3rd trimesters of pregnant women must be at least 11 g/dL and at least 10.5 g/dL during the 2nd trimester. Dehydration or hyperhydration can greatly influence measured hemoglobin levels. Albumin can indicate hydration status. If the concentration is below normal, this is called anemia. Anemias are classified by the size of red blood cells, the cells that contain hemoglobin in vertebrates. The anemia is called "microcytic" if red cells are small, "macrocytic" if they are large, and "normocytic" otherwise. Hematocrit, the proportion of blood volume occupied by red blood cells, is typically about three times the hemoglobin concentration measured in g/dL. For example, if the hemoglobin is measured at 17 g/dL, that compares with a hematocrit of 51%. Laboratory hemoglobin test methods require a blood sample (arterial, venous, or capillary) and analysis on hematology analyzer and CO-oximeter. Additionally, a new noninvasive hemoglobin (SpHb) test method called Pulse CO-Oximetry is also available with comparable accuracy to invasive methods. Concentrations of oxy- and deoxyhemoglobin can be measured continuously, regionally and noninvasively using NIRS. NIRS can be used both on the head and on muscles. This technique is often used for research in e.g. elite sports training, ergonomics, rehabilitation, patient monitoring, neonatal research, functional brain monitoring, brain–computer interface, urology (bladder contraction), neurology (Neurovascular coupling) and more. Hemoglobin mass can be measured in humans using the non-radioactive, carbon monoxide (CO) rebreathing technique that has been used for more than 100 years. With this technique, a small volume of pure CO gas is inhaled and rebreathed for a few minutes. During rebreathing, CO binds to hemoglobin present in red blood cells. Based on the increase in blood CO after the rebreathing period, the hemoglobin mass can be determined through the dilution principle. Long-term control of blood sugar concentration can be measured by the concentration of Hb A1c. Measuring it directly would require many samples because blood sugar levels vary widely through the day. Hb A1c is the product of the irreversible reaction of hemoglobin A with glucose. A higher glucose concentration results in more Hb A1c. Because the reaction is slow, the Hb A1c proportion represents glucose level in blood averaged over the half-life of red blood cells, is typically ~120 days. 
An Hb A1c proportion of 6.0% or less show good long-term glucose control, while values above 7.0% are elevated. This test is especially useful for diabetics. The functional magnetic resonance imaging (fMRI) machine uses the signal from deoxyhemoglobin, which is sensitive to magnetic fields since it is paramagnetic. Combined measurement with NIRS shows good correlation with both the oxy- and deoxyhemoglobin signal compared to the BOLD signal. Athletic tracking and self-tracking uses Hemoglobin can be tracked noninvasively, to build an individual data set tracking the hemoconcentration and hemodilution effects of daily activities for better understanding of sports performance and training. Athletes are often concerned about endurance and intensity of exercise. The sensor uses light-emitting diodes that emit red and infrared light through the tissue to a light detector, which then sends a signal to a processor to calculate the absorption of light by the hemoglobin protein. This sensor is similar to a pulse oximeter, which consists of a small sensing device that clips to the finger. Analogues in non-vertebrate organisms A variety of oxygen-transport and -binding proteins exist in organisms throughout the animal and plant kingdoms. Organisms including bacteria, protozoans, and fungi all have hemoglobin-like proteins whose known and predicted roles include the reversible binding of gaseous ligands. Since many of these proteins contain globins and the heme moiety (iron in a flat porphyrin support), they are often called hemoglobins, even if their overall tertiary structure is very different from that of vertebrate hemoglobin. In particular, the distinction of "myoglobin" and hemoglobin in lower animals is often impossible, because some of these organisms do not contain muscles. Or, they may have a recognizable separate circulatory system but not one that deals with oxygen transport (for example, many insects and other arthropods). In all these groups, heme/globin-containing molecules (even monomeric globin ones) that deal with gas-binding are referred to as oxyhemoglobins. In addition to dealing with transport and sensing of oxygen, they may also deal with NO, CO2, sulfide compounds, and even O2 scavenging in environments that must be anaerobic. They may even deal with detoxification of chlorinated materials in a way analogous to heme-containing P450 enzymes and peroxidases. The structure of hemoglobins varies across species. Hemoglobin occurs in all kingdoms of organisms, but not in all organisms. Primitive species such as bacteria, protozoa, algae, and plants often have single-globin hemoglobins. Many nematode worms, molluscs, and crustaceans contain very large multisubunit molecules, much larger than those in vertebrates. In particular, chimeric hemoglobins found in fungi and giant annelids may contain both globin and other types of proteins. One of the most striking occurrences and uses of hemoglobin in organisms is in the giant tube worm (Riftia pachyptila, also called Vestimentifera), which can reach 2.4 meters length and populates ocean volcanic vents. Instead of a digestive tract, these worms contain a population of bacteria constituting half the organism's weight. The bacteria oxidize H2S from the vent with O2 from the water to produce energy to make food from H2O and CO2. The worms' upper end is a deep-red fan-like structure ("plume"), which extends into the water and absorbs H2S and O2 for the bacteria, and CO2 for use as synthetic raw material similar to photosynthetic plants. 
The structures are bright red due to their content of several extraordinarily complex hemoglobins that have up to 144 globin chains, each including associated heme structures. These hemoglobins are remarkable for being able to carry oxygen in the presence of sulfide, and even to carry sulfide, without being completely "poisoned" or inhibited by it as hemoglobins in most other species are. Other oxygen-binding proteins Myoglobin Found in the muscle tissue of many vertebrates, including humans, it gives muscle tissue a distinct red or dark gray color. It is very similar to hemoglobin in structure and sequence, but is not a tetramer; instead, it is a monomer that lacks cooperative binding. It is used to store oxygen rather than transport it. Hemocyanin The second most common oxygen-transporting protein found in nature, it is found in the blood of many arthropods and molluscs. Uses copper prosthetic groups instead of iron heme groups and is blue in color when oxygenated. Hemerythrin Some marine invertebrates and a few species of annelid use this iron-containing non-heme protein to carry oxygen in their blood. Appears pink/violet when oxygenated, clear when not. Chlorocruorin Found in many annelids, it is very similar to erythrocruorin, but the heme group is significantly different in structure. Appears green when deoxygenated and red when oxygenated. Vanabins Also known as vanadium chromagens, they are found in the blood of sea squirts. They were once hypothesized to use the metal vanadium as an oxygen binding prosthetic group. However, although they do contain vanadium by preference, they apparently bind little oxygen, and thus have some other function, which has not been elucidated (sea squirts also contain some hemoglobin). They may act as toxins. Erythrocruorin Found in many annelids, including earthworms, it is a giant free-floating blood protein containing many dozens—possibly hundreds—of iron- and heme-bearing protein subunits bound together into a single protein complex with a molecular mass greater than 3.5 million daltons. Leghemoglobin In leguminous plants, such as alfalfa or soybeans, the nitrogen fixing bacteria in the roots are protected from oxygen by this iron heme containing oxygen-binding protein. The specific enzyme protected is nitrogenase, which is unable to reduce nitrogen gas in the presence of free oxygen. Coboglobin A synthetic cobalt-based porphyrin. Coboprotein would appear colorless when oxygenated, but yellow when in veins. Presence in nonerythroid cells Some nonerythroid cells (i.e., cells other than the red blood cell line) contain hemoglobin. In the brain, these include the A9 dopaminergic neurons in the substantia nigra, astrocytes in the cerebral cortex and hippocampus, and in all mature oligodendrocytes. It has been suggested that brain hemoglobin in these cells may enable the "storage of oxygen to provide a homeostatic mechanism in anoxic conditions, which is especially important for A9 DA neurons that have an elevated metabolism with a high requirement for energy production". It has been noted further that "A9 dopaminergic neurons may be at particular risk of anoxic degeneration since in addition to their high mitochondrial activity they are under intense oxidative stress caused by the production of hydrogen peroxide via autoxidation and/or monoamine oxidase (MAO)-mediated deamination of dopamine and the subsequent reaction of accessible ferrous iron to generate highly toxic hydroxyl radicals". 
This may explain the risk of degeneration of these cells in Parkinson's disease. The post-mortem darkness of these cells (the origin of the Latin name substantia nigra) is not caused by hemoglobin-derived iron, but rather by neuromelanin. Outside the brain, hemoglobin has non-oxygen-carrying functions as an antioxidant and a regulator of iron metabolism in macrophages, alveolar cells, and mesangial cells in the kidney. In history, art, and music Historically, the color of blood was linked with rust through the association of the planet Mars with the Roman god of war, since the planet's orange-red color reminded the ancients of blood. Although the color of the planet is due to iron compounds in combination with oxygen in the Martian soil, it is a common misconception that the iron in hemoglobin and its oxides gives blood its red color. The color is actually due to the porphyrin moiety of hemoglobin to which the iron is bound, not the iron itself, although the ligation and redox state of the iron can influence the pi to pi* or n to pi* electronic transitions of the porphyrin and hence its optical characteristics. Artist Julian Voss-Andreae created a sculpture called Heart of Steel (Hemoglobin) in 2005, based on the protein's backbone. The sculpture was made from glass and weathering steel. The intentional rusting of the initially shiny work of art mirrors hemoglobin's fundamental chemical reaction of oxygen binding to iron. Montreal artist Nicolas Baier created Lustre (Hémoglobine), a sculpture in stainless steel that shows the structure of the hemoglobin molecule. It is displayed in the atrium of McGill University Health Centre's research centre in Montreal. The sculpture measures about 10 metres × 10 metres × 10 metres.
Hyperthyroidism
Hyperthyroidism is the condition that occurs due to excessive production of thyroid hormones by the thyroid gland. Thyrotoxicosis is the condition that occurs due to excessive thyroid hormone of any cause and therefore includes hyperthyroidism. Some, however, use the terms interchangeably. Signs and symptoms vary between people and may include irritability, muscle weakness, sleeping problems, a fast heartbeat, heat intolerance, diarrhea, enlargement of the thyroid, hand tremor, and weight loss. Symptoms are typically less severe in the elderly and during pregnancy. An uncommon but life-threatening complication is thyroid storm in which an event such as an infection results in worsening symptoms such as confusion and a high temperature; this often results in death. The opposite is hypothyroidism, when the thyroid gland does not make enough thyroid hormone. Graves' disease is the cause of about 50% to 80% of the cases of hyperthyroidism in the United States. Other causes include multinodular goiter, toxic adenoma, inflammation of the thyroid, eating too much iodine, and too much synthetic thyroid hormone. A less common cause is a pituitary adenoma. The diagnosis may be suspected based on signs and symptoms and then confirmed with blood tests. Typically blood tests show a low thyroid stimulating hormone (TSH) and raised T3 or T4. Radioiodine uptake by the thyroid, thyroid scan, and measurement of antithyroid autoantibodies (thyroidal thyrotropin receptor antibodies are positive in Graves disease) may help determine the cause. Treatment depends partly on the cause and severity of disease. There are three main treatment options: radioiodine therapy, medications, and thyroid surgery. Radioiodine therapy involves taking iodine-131 by mouth which is then concentrated in and destroys the thyroid over weeks to months. The resulting hypothyroidism is treated with synthetic thyroid hormone. Medications such as beta blockers may control the symptoms, and anti-thyroid medications such as methimazole may temporarily help people while other treatments are having an effect. Surgery to remove the thyroid is another option. This may be used in those with very large thyroids or when cancer is a concern. In the United States hyperthyroidism affects about 1.2% of the population. Worldwide, hyperthyroidism affects 2.5% of adults. It occurs between two and ten times more often in women. Onset is commonly between 20 and 50 years of age. Overall the disease is more common in those over the age of 60 years. Signs and symptoms Hyperthyroidism may be asymptomatic or present with significant symptoms. Some of the symptoms of hyperthyroidism include nervousness, irritability, increased perspiration, heart racing, hand tremors, anxiety, trouble sleeping, thinning of the skin, fine brittle hair, and muscular weakness—especially in the upper arms and thighs. More frequent bowel movements may occur, and diarrhea is common. Weight loss, sometimes significant, may occur despite a good appetite (though 10% of people with a hyperactive thyroid experience weight gain), vomiting may occur, and, for women, menstrual flow may lighten and menstrual periods may occur less often, or with longer cycles than usual. Thyroid hormone is critical to normal function of cells. In excess, it both overstimulates metabolism and disrupts the normal functioning of sympathetic nervous system, causing "speeding up" of various body systems and symptoms resembling an overdose of epinephrine (adrenaline). 
These include fast heartbeat and symptoms of palpitations, nervous system tremor (such as of the hands) and anxiety symptoms, digestive system hypermotility, unintended weight loss, and, in lipid panel blood tests, a lower and sometimes unusually low serum cholesterol. Major clinical signs of hyperthyroidism include weight loss (often accompanied by an increased appetite), anxiety, heat intolerance, hair loss (especially of the outer third of the eyebrows), muscle aches, weakness, fatigue, hyperactivity, irritability, high blood sugar, excessive urination, excessive thirst, delirium, tremor, pretibial myxedema (in Graves' disease), emotional lability, and sweating. Panic attacks, inability to concentrate, and memory problems may also occur. Psychosis and paranoia, common during thyroid storm, are rare with milder hyperthyroidism. Many persons will experience complete remission of symptoms 1 to 2 months after a euthyroid state is obtained, with a marked reduction in anxiety, sense of exhaustion, irritability, and depression. Some individuals may have an increased rate of anxiety or persistence of affective and cognitive symptoms for several months to up to 10 years after a euthyroid state is established. In addition, those with hyperthyroidism may present with a variety of physical symptoms such as palpitations and abnormal heart rhythms (the notable ones being atrial fibrillation), shortness of breath (dyspnea), loss of libido, amenorrhea, nausea, vomiting, diarrhea, gynecomastia and feminization. Long-term untreated hyperthyroidism can lead to osteoporosis. These classical symptoms are often not present in the elderly. Bone loss, which is associated with overt but not subclinical hyperthyroidism, may occur in 10 to 20% of patients. This may be due to an increase in bone remodelling and a decrease in bone density, and increases fracture risk. It is more common in postmenopausal women and less so in younger women and men. Bone disease related to hyperthyroidism was first described by Frederick von Recklinghausen in 1891; he described the bones of a woman who died of hyperthyroidism as appearing "worm-eaten". Neurological manifestations can include tremors, chorea, myopathy, and in some susceptible individuals (in particular of Asian descent) periodic paralysis. An association between thyroid disease and myasthenia gravis has been recognized. Thyroid disease, in this condition, is autoimmune in nature and approximately 5% of people with myasthenia gravis also have hyperthyroidism. Myasthenia gravis rarely improves after thyroid treatment, and the relationship between the two entities has become better understood over the past 15 years. In Graves' disease, ophthalmopathy may cause the eyes to look enlarged because the eye muscles swell and push the eye forward. Sometimes, one or both eyes may bulge. Some have swelling of the front of the neck from an enlarged thyroid gland (a goiter). Minor ocular (eye) signs, which may be present in any type of hyperthyroidism, are eyelid retraction ("stare"), extraocular muscle weakness, and lid-lag. In hyperthyroid stare (Dalrymple sign) the eyelids are retracted upward more than normal (the normal position is at the superior corneoscleral limbus, where the "white" of the eye begins at the upper border of the iris). Extraocular muscle weakness may present with double vision. 
In lid-lag (von Graefe's sign), when the person tracks an object downward with their eyes, the eyelid fails to follow the downward moving iris, and the same type of upper globe exposure which is seen with lid retraction occurs, temporarily. These signs disappear with treatment of the hyperthyroidism. Neither of these ocular signs should be confused with exophthalmos (protrusion of the eyeball), which occurs specifically and uniquely in hyperthyroidism caused by Graves' disease (note that not all exophthalmos is caused by Graves' disease, but when present with hyperthyroidism is diagnostic of Graves' disease). This forward protrusion of the eyes is due to immune-mediated inflammation in the retro-orbital (eye socket) fat. Exophthalmos, when present, may exacerbate hyperthyroid lid-lag and stare. Thyroid storm Thyroid storm is a severe form of thyrotoxicosis characterized by rapid and often irregular heart beat, high temperature, vomiting, diarrhea, and mental agitation. Symptoms may not be typical in the young, old, or pregnant. It usually occurs due to untreated hyperthyroidism and can be provoked by infections. It is a medical emergency and requires hospital care to control the symptoms rapidly. The mortality rate in thyroid storm is 3.6-17%, usually due to multi-organ system failure. Hypothyroidism Hyperthyroidism due to certain types of thyroiditis can eventually lead to hypothyroidism (a lack of thyroid hormone), as the thyroid gland is damaged. Also, radioiodine treatment of Graves' disease often eventually leads to hypothyroidism. Such hypothyroidism may be diagnosed with thyroid hormone testing and treated by oral thyroid hormone supplementation. Causes There are several causes of hyperthyroidism. Most often, the entire gland is overproducing thyroid hormone. Less commonly, a single nodule is responsible for the excess hormone secretion, called a "hot" nodule. Thyroiditis (inflammation of the thyroid) can also cause hyperthyroidism. Functional thyroid tissue producing an excess of thyroid hormone occurs in a number of clinical conditions. The major causes in humans are: Graves' disease. An autoimmune disease (usually, the most common cause with 50–80% worldwide, although this varies substantially with location- i.e., 47% in Switzerland (Horst et al., 1987) to 90% in the USA (Hamburger et al. 1981)). Thought to be due to varying levels of iodine in the diet. It is eight times more common in females than males and often occurs in young females, around 20 to 40 years of age. Toxic thyroid adenoma (the most common cause in Switzerland, 53%, thought to be atypical due to a low level of dietary iodine in this country) Toxic multinodular goiter High blood levels of thyroid hormones (most accurately termed hyperthyroxinemia) can occur for a number of other reasons: Inflammation of the thyroid is called thyroiditis. There are several different kinds of thyroiditis including Hashimoto's thyroiditis (Hypothyroidism immune-mediated), and subacute thyroiditis (de Quervain's). These may be initially associated with secretion of excess thyroid hormone but usually progress to gland dysfunction and, thus, to hormone deficiency and hypothyroidism. Oral consumption of excess thyroid hormone tablets is possible (surreptitious use of thyroid hormone), as is the rare event of eating ground beef or pork contaminated with thyroid tissue, and thus thyroid hormones (termed hamburger thyrotoxicosis or alimentary thyrotoxicosis). Pharmacy compounding errors may also be a cause. 
Amiodarone, an antiarrhythmic drug, is structurally similar to thyroxine and may cause either under- or overactivity of the thyroid. Postpartum thyroiditis (PPT) occurs in about 7% of women during the year after they give birth. PPT typically has several phases, the first of which is hyperthyroidism. This form of hyperthyroidism usually corrects itself within weeks or months without the need for treatment. A struma ovarii is a rare form of monodermal teratoma that contains mostly thyroid tissue, which leads to hyperthyroidism. Excess iodine consumption, notably from algae such as kelp, can also cause hyperthyroidism. Thyrotoxicosis can also occur after taking too much thyroid hormone in the form of supplements, such as levothyroxine (a phenomenon known as exogenous thyrotoxicosis, alimentary thyrotoxicosis, or occult factitial thyrotoxicosis). Hypersecretion of thyroid stimulating hormone (TSH), which in turn is almost always caused by a pituitary adenoma, accounts for much less than 1 percent of hyperthyroidism cases. Diagnosis Measuring the level of thyroid-stimulating hormone (TSH), produced by the pituitary gland (which in turn is also regulated by the hypothalamus's TSH-releasing hormone) in the blood is typically the initial test for suspected hyperthyroidism. A low TSH level typically indicates that the pituitary gland is being inhibited or "instructed" by the brain to cut back on stimulating the thyroid gland, having sensed increased levels of T4 and/or T3 in the blood. In rare circumstances, a low TSH indicates primary failure of the pituitary, or temporary inhibition of the pituitary due to another illness (euthyroid sick syndrome), and so checking the T4 and T3 is still clinically useful. Measuring specific antibodies, such as anti-TSH-receptor antibodies in Graves' disease, or anti-thyroid peroxidase in Hashimoto's thyroiditis—a common cause of hypothyroidism—may also contribute to the diagnosis. The diagnosis of hyperthyroidism is confirmed by blood tests that show a decreased thyroid-stimulating hormone (TSH) level and elevated T4 and T3 levels. TSH is a hormone made by the pituitary gland in the brain that tells the thyroid gland how much hormone to make. When there is too much thyroid hormone, the TSH will be low. Together, a radioactive iodine uptake test and a thyroid scan enable radiologists and doctors to determine the cause of hyperthyroidism. The uptake test uses radioactive iodine injected or taken orally on an empty stomach to measure the amount of iodine absorbed by the thyroid gland. Persons with hyperthyroidism absorb much more iodine than healthy persons, including radioactive iodine, which is easy to measure. A thyroid scan producing images is typically conducted in connection with the uptake test to allow visual examination of the over-functioning gland. Thyroid scintigraphy is a useful test for distinguishing between the causes of hyperthyroidism, and for distinguishing hyperthyroidism from thyroiditis. This test procedure typically involves two tests performed in connection with each other: an iodine uptake test and a scan (imaging) with a gamma camera. The uptake test involves administering a dose of radioactive iodine (radioiodine), traditionally iodine-131 (131I), and more recently iodine-123 (123I). Iodine-123 may be the preferred radionuclide in some clinics due to its more favorable radiation dosimetry (i.e. less radiation dose to the person per unit administered radioactivity) and a gamma photon energy more amenable to imaging with the gamma camera. 
For the imaging scan, I-123 is considered an almost ideal isotope of iodine for imaging thyroid tissue and thyroid cancer metastasis. Thyroid scintigraphy should not be performed in those who are pregnant; a thyroid ultrasound with color flow Doppler may be obtained as an alternative in these circumstances. Typical administration involves a pill or liquid containing sodium iodide (NaI) taken orally, which contains a small amount of iodine-131, amounting to perhaps less than a grain of salt. A fast of 2 hours before and 1 hour after ingesting the pill is required. This low dose of radioiodine is typically tolerated by individuals otherwise allergic to iodine (such as those unable to tolerate contrast media containing larger doses of iodine, such as those used in CT scans, intravenous pyelograms (IVP), and similar imaging diagnostic procedures). Excess radioiodine that does not get absorbed into the thyroid gland is eliminated by the body in urine. Some people with hyperthyroidism may experience a slight allergic reaction to the diagnostic radioiodine and may be given an antihistamine. The person returns 24 hours later to have the level of radioiodine "uptake" (absorbed by the thyroid gland) measured by a device with a metal bar placed against the neck, which measures the radioactivity emitted from the thyroid. This test takes about 4 minutes while the uptake percentage is accumulated (calculated) by the machine software. A scan is also performed, wherein images (typically a center, left and right angle) are taken of the contrasted thyroid gland with a gamma camera; a radiologist will read and prepare a report indicating the uptake percentage and comments after examining the images. People with hyperthyroidism will typically "take up" higher than normal levels of radioiodine. Normal ranges for RAI uptake are from 10 to 30%. In addition to testing the TSH levels, many doctors test for T3, free T3, T4, and/or free T4 for more detailed results. Free T4 is T4 that is not bound to any protein in the blood. Adult reference ranges for these hormones are: TSH: 0.45–4.50 uIU/mL; free/direct T4: 0.82–1.77 ng/dL; and T3: 71–180 ng/dL. Persons with hyperthyroidism can easily exhibit levels many times these upper limits for T4 and/or T3. See a complete table of normal range limits for thyroid function at the thyroid gland article. In hyperthyroidism, CK-MB (creatine kinase) is usually elevated. Subclinical In overt primary hyperthyroidism, TSH levels are low and T4 and T3 levels are high. Subclinical hyperthyroidism is a milder form of hyperthyroidism characterized by a low or undetectable serum TSH level, but with a normal serum free thyroxine level. Although the evidence for doing so is not definitive, treatment of elderly persons having subclinical hyperthyroidism could reduce the number of cases of atrial fibrillation. There is also an increased risk of bone fractures (by 42%) in people with subclinical hyperthyroidism; there is insufficient evidence to say whether treatment with antithyroid medications would reduce that risk. A 2022 meta-analysis found subclinical hyperthyroidism to be associated with cardiovascular death. Screening In those without symptoms who are not pregnant there is little evidence for or against screening. Treatment Antithyroid drugs Thyrostatics (antithyroid drugs) are drugs that inhibit the production of thyroid hormones, such as carbimazole (used in the UK) and methimazole (used in the US, Germany and Russia), and propylthiouracil. 
Thyrostatics are believed to work by inhibiting the iodination of thyroglobulin by thyroperoxidase and, thus, the formation of tetraiodothyronine (T4). Propylthiouracil also works outside the thyroid gland, preventing the conversion of (mostly inactive) T4 to the active form T3. Because thyroid tissue usually contains a substantial reserve of thyroid hormone, thyrostatics can take weeks to become effective and the dose often needs to be carefully titrated over a period of months, with regular doctor visits and blood tests to monitor results. Beta-blockers Many of the common symptoms of hyperthyroidism such as palpitations, trembling, and anxiety are mediated by increases in beta-adrenergic receptors on cell surfaces. Beta blockers, typically used to treat high blood pressure, are a class of drugs that offset this effect, reducing rapid pulse associated with the sensation of palpitations, and decreasing tremor and anxiety. Thus, a person with hyperthyroidism can often obtain immediate temporary relief until the hyperthyroidism can be characterized with the radioiodine test noted above and more permanent treatment takes place. Note that these drugs do not treat hyperthyroidism or any of its long-term effects if left untreated, but, rather, they treat or reduce only symptoms of the condition. Some minimal effect on thyroid hormone production however also comes with propranolol—which has two roles in the treatment of hyperthyroidism, determined by the different isomers of propranolol. L-propranolol causes beta-blockade, thus treating the symptoms associated with hyperthyroidism such as tremor, palpitations, anxiety, and heat intolerance. D-propranolol inhibits thyroxine deiodinase, thereby blocking the conversion of T4 to T3, providing some though minimal therapeutic effect. Other beta-blockers are used to treat only the symptoms associated with hyperthyroidism. Propranolol in the UK, and metoprolol in the US, are most frequently used to augment treatment for people with hyperthyroidism. Diet People with autoimmune hyperthyroidism (such as in Graves' disease) should not eat foods high in iodine, such as edible seaweed and seafood. From a public health perspective, the general introduction of iodized salt in the United States in 1924 resulted in fewer goiters and improved the lives of children whose mothers would otherwise not have eaten enough iodine during pregnancy, a deficiency that would have lowered their children's IQs. Surgery Surgery (thyroidectomy to remove the whole thyroid or a part of it) is not extensively used because most common forms of hyperthyroidism are quite effectively treated by the radioactive iodine method, and because there is a risk of also removing the parathyroid glands, and of cutting the recurrent laryngeal nerve, making swallowing difficult, and even simply generalized staphylococcal infection as with any major surgery. Some people with Graves' may opt for surgical intervention. This includes those who cannot tolerate medicines for one reason or another, people who are allergic to iodine, or people who refuse radioiodine. A 2019 systematic review concluded that the available evidence shows no difference between visually identifying the nerve and utilizing intraoperative nerve monitoring during surgery when trying to prevent injury to the recurrent laryngeal nerve during thyroid surgery. If people have toxic nodules, treatments typically include either removal or injection of the nodule with alcohol. 
Radioiodine In iodine-131 (radioiodine) radioisotope therapy, pioneered by Dr. Saul Hertz, radioactive iodine-131 is given orally (either by pill or liquid) on a one-time basis, to severely restrict or altogether destroy the function of a hyperactive thyroid gland. This isotope of radioactive iodine used for ablative treatment is more potent than diagnostic radioiodine (usually iodine-123 or a very low amount of iodine-131), which has a biological half-life of 8–13 hours. Iodine-131, which also emits beta particles that are far more damaging to tissues at short range, has a half-life of approximately 8 days. People not responding sufficiently to the first dose are sometimes given an additional radioiodine treatment, at a larger dose. Iodine-131 in this treatment is picked up by the active cells in the thyroid and destroys them, rendering the thyroid gland mostly or completely inactive. Since iodine is picked up more readily (though not exclusively) by thyroid cells, and (more importantly) is picked up even more readily by over-active thyroid cells, the destruction is local, and there are no widespread side effects with this therapy. Radioiodine ablation has been used for over 50 years, and the only major reasons for not using it are pregnancy and breastfeeding (breast tissue also picks up and concentrates iodine). Once the thyroid function is reduced, replacement hormone therapy (levothyroxine) taken orally each day replaces the thyroid hormone that is normally produced by the body. There is extensive experience, over many years, of the use of radioiodine in the treatment of thyroid overactivity, and this experience does not indicate any increased risk of thyroid cancer following treatment. However, a 2007 study reported an increased number of cancer cases after radioiodine treatment for hyperthyroidism. The principal advantage of radioiodine treatment for hyperthyroidism is that it tends to have a much higher success rate than medications. Depending on the dose of radioiodine chosen, and the disease under treatment (Graves' vs. toxic goiter vs. hot nodule, etc.), the success rate in achieving definitive resolution of the hyperthyroidism may vary from 75 to 100%. A major expected side-effect of radioiodine in people with Graves' disease is the development of lifelong hypothyroidism, requiring daily treatment with thyroid hormone. On occasion, some people may require more than one radioactive treatment, depending on the type of disease present, the size of the thyroid, and the initial dose administered. People with Graves' disease manifesting moderate or severe Graves' ophthalmopathy are cautioned against radioactive iodine-131 treatment, since it has been shown to exacerbate existing thyroid eye disease. People with mild or no ophthalmic symptoms can mitigate their risk with a concurrent six-week course of prednisone. The mechanisms proposed for this side effect involve a TSH receptor common to both thyrocytes and retro-orbital tissue. As radioactive iodine treatment results in the destruction of thyroid tissue, there is often a transient period of several days to weeks when the symptoms of hyperthyroidism may actually worsen following radioactive iodine therapy. In general, this happens as a result of thyroid hormones being released into the blood following the radioactive iodine-mediated destruction of thyroid cells that contain thyroid hormone. In some people, treatment with medications such as beta blockers (propranolol, atenolol, etc.) 
may be useful during this period of time. Most people do not experience any difficulty after the radioactive iodine treatment, usually given as a small pill. On occasion, neck tenderness or a sore throat may become apparent after a few days, if moderate inflammation in the thyroid develops and produces discomfort in the neck or throat area. This is usually transient, and not associated with a fever, etc. It is recommended that breastfeeding be stopped at least six weeks before radioactive iodine treatment and that it not be resumed, although breastfeeding may resume with future pregnancies. Radioiodine treatment should also not be given during pregnancy, and pregnancy should be put off until at least 6–12 months after treatment. A common outcome following radioiodine is a swing from hyperthyroidism to the easily treatable hypothyroidism, which occurs in 78% of those treated for Graves' thyrotoxicosis and in 40% of those with toxic multinodular goiter or solitary toxic adenoma. Use of higher doses of radioiodine reduces the number of cases of treatment failure, at the cost of higher rates of eventual hypothyroidism, which requires lifelong hormone treatment. There is increased sensitivity to radioiodine therapy in thyroids appearing on ultrasound scans as more uniform (hypoechogenic), due to densely packed large cells, with 81% later becoming hypothyroid, compared to just 37% in those with more normal scan appearances (normoechogenic). Thyroid storm Thyroid storm presents with extreme symptoms of hyperthyroidism. It is treated aggressively with resuscitation measures along with a combination of the above modalities including: an intravenous beta blocker such as propranolol, followed by a thioamide such as methimazole, an iodinated radiocontrast agent or an iodine solution if the radiocontrast agent is not available, and an intravenous steroid such as hydrocortisone. Propylthiouracil is the preferred thioamide in thyroid storm as it can prevent the conversion of T4 to the more active T3 in the peripheral tissues in addition to inhibiting thyroid hormone production. Alternative medicine In countries such as China, herbs used alone or with antithyroid medications are used to treat hyperthyroidism. Very low quality evidence suggests that traditional Chinese herbal medications may be beneficial when taken along with routine hyperthyroid medications; however, there is no reliable evidence to determine the effectiveness of Chinese herbal medications for treating hyperthyroidism. Epidemiology In the United States, hyperthyroidism affects about 1.2% of the population. About half of these cases have obvious symptoms while the other half do not. It occurs between two and ten times more often in women. The disease is more common in those over the age of 60 years. Subclinical hyperthyroidism modestly increases the risk of cognitive impairment and dementia. History Caleb Hillier Parry first made the association between goiter and protrusion of the eyes in 1786; however, he did not publish his findings until 1825. In 1835, Irish doctor Robert James Graves discovered a link between the protrusion of the eyes and goiter, giving his name to the autoimmune disease now known as Graves' disease. Pregnancy Recognizing and evaluating hyperthyroidism in pregnancy is a diagnostic challenge. 
Thyroid hormones are commonly elevated during the first trimester of pregnancy as the pregnancy hormone human chorionic gonadotropin (hCG) stimulates thyroid hormone production, in a condition known as gestational transient thyrotoxicosis. Gestational transient thyrotoxicosis generally abates in the second trimester as hCG levels decline and thyroid function normalizes. Hyperthyroidism can increase the risk of complications for mother and child. Such risks include pregnancy-related hypertension, pregnancy loss, low birth weight, pre-eclampsia, preterm delivery, stillbirth and behavioral disorders later in the child's life. Moreover, high maternal FT4 levels during pregnancy have been associated with impaired brain developmental outcomes in the offspring, independent of hCG levels. Propylthiouracil is the preferred antithyroid medication in the first trimester of pregnancy as it is less teratogenic than methimazole. Other animals Cats Hyperthyroidism is one of the most common endocrine conditions affecting older domesticated housecats. In the United States, up to 10% of cats over ten years old have hyperthyroidism. The disease has become significantly more common since the first reports of feline hyperthyroidism in the 1970s. The most common cause of hyperthyroidism in cats is the presence of benign tumors called adenomas. About 98% of cases are caused by the presence of an adenoma, but the reason these cats develop such tumors continues to be studied. The most common presenting symptoms are: rapid weight loss, tachycardia (rapid heart rate), vomiting, diarrhea, increased consumption of fluids (polydipsia), increased appetite (polyphagia), and increased urine production (polyuria). Other symptoms include hyperactivity, possible aggression, an unkempt appearance, and large, thick claws. Heart murmurs and a gallop rhythm can develop due to secondary hypertrophic cardiomyopathy. About 70% of affected cats also have enlarged thyroid glands (goiter). About 10% of cats exhibit "apathetic hyperthyroidism", which is characterized by anorexia and lethargy. The same three treatments used with humans are also options in treating feline hyperthyroidism (surgery, radioiodine treatment, and anti-thyroid drugs). There is also a special low-iodine diet available that will control the symptoms provided no other food is fed; Hill's y/d formula, when given exclusively, decreases T4 production by limiting the amount of iodine needed for thyroid hormone production. It is the only available commercial diet that focuses on managing feline hyperthyroidism. Medical and dietary management using methimazole and Hill's y/d cat food gives hyperthyroid cats an average of 2 years of survival before death from secondary conditions such as heart and kidney failure. Drugs used to help manage the symptoms of hyperthyroidism are methimazole and carbimazole. Drug therapy is the least expensive option, even though the drug must be administered daily for the remainder of the cat's life. Carbimazole is only available as a once-daily tablet. Methimazole is available as an oral solution, a tablet, and compounded as a topical gel that is applied using a finger cot to the hairless skin inside a cat's ear. Many cat owners find this gel a good option for cats that do not like being given pills. 
Radioiodine treatment, however, is not available in all areas, as this treatment requires nuclear radiological expertise and facilities that not only board the cat, but are specially equipped to manage the cat's urine, sweat, saliva, and stool, which are radioactive for several days after the treatment, usually for a total of 3 weeks (the cat spends the first week in total isolation and the next two weeks in close confinement). In the United States, the guidelines for radiation levels vary from state to state; some states such as Massachusetts allow hospitalization for as little as two days before the animal is sent home with care instructions. Dogs Hyperthyroidism is much less common in dogs than in cats. Hyperthyroidism may be caused by a thyroid tumor. This may be a thyroid carcinoma. About 90% of carcinomas are very aggressive; they invade the surrounding tissues and metastasize (spread) to other tissues, particularly the lungs. This has a poor prognosis. Surgery to remove the tumor is often very difficult due to metastasis into arteries, the esophagus, or the windpipe. It may be possible to reduce the size of the tumor, thus relieving symptoms and allowing time for other treatments to work. About 10% of thyroid tumors are benign; these often cause few symptoms. In dogs treated for hypothyroidism (lack of thyroid hormone), iatrogenic hyperthyroidism may occur as a result of an overdose of the thyroid hormone replacement medication, levothyroxine; in this case, treatment involves reducing the dose of levothyroxine. Dogs that display coprophagy (the consumption of feces) and live in a household with a dog receiving levothyroxine treatment may develop hyperthyroidism if they frequently eat the feces of the treated dog. Hyperthyroidism may occur if a dog eats an excessive amount of thyroid gland tissue. This has occurred in dogs fed commercial dog food.
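The roughly 8-day physical half-life of iodine-131 mentioned above, together with biological elimination, is why treated patients and animals remain measurably radioactive for days to weeks after therapy. From physical decay alone, the remaining fraction of activity follows N(t) = N0 · (1/2)^(t/T½). The short Python sketch below illustrates that calculation; it is purely illustrative, and the actual confinement periods described above also depend on biological clearance and local regulations.

```python
# Illustrative sketch only: how the ~8-day physical half-life of iodine-131
# (mentioned above) translates into the fraction of radioactivity remaining
# over time. Physical decay is only one factor; biological elimination and
# regulations also determine real isolation periods.

HALF_LIFE_DAYS = 8.0  # approximate physical half-life of iodine-131

def fraction_remaining(days: float, half_life: float = HALF_LIFE_DAYS) -> float:
    """Fraction of the original iodine-131 activity left after `days` days."""
    return 0.5 ** (days / half_life)

for day in (0, 7, 14, 21):
    print(f"day {day:2d}: {fraction_remaining(day):.1%} of the initial activity remains")
```

Running the sketch shows that after about three weeks (the confinement period mentioned for treated cats), physical decay alone has reduced the activity to roughly one sixth of its initial value.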
Biology and health sciences
Specific diseases
Health
13564
https://en.wikipedia.org/wiki/Homomorphism
Homomorphism
In algebra, a homomorphism is a structure-preserving map between two algebraic structures of the same type (such as two groups, two rings, or two vector spaces). The word homomorphism comes from the Ancient Greek language: () meaning "same" and () meaning "form" or "shape". However, the word was apparently introduced to mathematics due to a (mis)translation of German meaning "similar" to meaning "same". The term "homomorphism" appeared as early as 1892, when it was attributed to the German mathematician Felix Klein (1849–1925). Homomorphisms of vector spaces are also called linear maps, and their study is the subject of linear algebra. The concept of homomorphism has been generalized, under the name of morphism, to many other structures that either do not have an underlying set, or are not algebraic. This generalization is the starting point of category theory. A homomorphism may also be an isomorphism, an endomorphism, an automorphism, etc. (see below). Each of those can be defined in a way that may be generalized to any class of morphisms. Definition A homomorphism is a map between two algebraic structures of the same type (e.g. two groups, two fields, two vector spaces), that preserves the operations of the structures. This means a map between two sets , equipped with the same structure such that, if is an operation of the structure (supposed here, for simplification, to be a binary operation), then for every pair , of elements of . One says often that preserves the operation or is compatible with the operation. Formally, a map preserves an operation of arity , defined on both and if for all elements in . The operations that must be preserved by a homomorphism include 0-ary operations, that is the constants. In particular, when an identity element is required by the type of structure, the identity element of the first structure must be mapped to the corresponding identity element of the second structure. For example: A semigroup homomorphism is a map between semigroups that preserves the semigroup operation. A monoid homomorphism is a map between monoids that preserves the monoid operation and maps the identity element of the first monoid to that of the second monoid (the identity element is a 0-ary operation). A group homomorphism is a map between groups that preserves the group operation. This implies that the group homomorphism maps the identity element of the first group to the identity element of the second group, and maps the inverse of an element of the first group to the inverse of the image of this element. Thus a semigroup homomorphism between groups is necessarily a group homomorphism. A ring homomorphism is a map between rings that preserves the ring addition, the ring multiplication, and the multiplicative identity. Whether the multiplicative identity is to be preserved depends upon the definition of ring in use. If the multiplicative identity is not preserved, one has a rng homomorphism. A linear map is a homomorphism of vector spaces; that is, a group homomorphism between vector spaces that preserves the abelian group structure and scalar multiplication. A module homomorphism, also called a linear map between modules, is defined similarly. An algebra homomorphism is a map that preserves the algebra operations. An algebraic structure may have more than one operation, and a homomorphism is required to preserve each operation. 
Thus a map that preserves only some of the operations is not a homomorphism of the structure, but only a homomorphism of the substructure obtained by considering only the preserved operations. For example, a map between monoids that preserves the monoid operation and not the identity element, is not a monoid homomorphism, but only a semigroup homomorphism. The notation for the operations does not need to be the same in the source and the target of a homomorphism. For example, the real numbers form a group for addition, and the positive real numbers form a group for multiplication. The exponential function satisfies and is thus a homomorphism between these two groups. It is even an isomorphism (see below), as its inverse function, the natural logarithm, satisfies and is also a group homomorphism. Examples The real numbers are a ring, having both addition and multiplication. The set of all 2×2 matrices is also a ring, under matrix addition and matrix multiplication. If we define a function between these rings as follows: where is a real number, then is a homomorphism of rings, since preserves both addition: and multiplication: For another example, the nonzero complex numbers form a group under the operation of multiplication, as do the nonzero real numbers. (Zero must be excluded from both groups since it does not have a multiplicative inverse, which is required for elements of a group.) Define a function from the nonzero complex numbers to the nonzero real numbers by That is, is the absolute value (or modulus) of the complex number . Then is a homomorphism of groups, since it preserves multiplication: Note that cannot be extended to a homomorphism of rings (from the complex numbers to the real numbers), since it does not preserve addition: As another example, the diagram shows a monoid homomorphism from the monoid to the monoid . Due to the different names of corresponding operations, the structure preservation properties satisfied by amount to and . A composition algebra over a field has a quadratic form, called a norm, , which is a group homomorphism from the multiplicative group of to the multiplicative group of . Special homomorphisms Several kinds of homomorphisms have a specific name, which is also defined for general morphisms. Isomorphism An isomorphism between algebraic structures of the same type is commonly defined as a bijective homomorphism. In the more general context of category theory, an isomorphism is defined as a morphism that has an inverse that is also a morphism. In the specific case of algebraic structures, the two definitions are equivalent, although they may differ for non-algebraic structures, which have an underlying set. More precisely, if is a (homo)morphism, it has an inverse if there exists a homomorphism such that If and have underlying sets, and has an inverse , then is bijective. In fact, is injective, as implies , and is surjective, as, for any in , one has , and is the image of an element of . Conversely, if is a bijective homomorphism between algebraic structures, let be the map such that is the unique element of such that . One has and it remains only to show that is a homomorphism. If is a binary operation of the structure, for every pair , of elements of , one has and is thus compatible with As the proof is similar for any arity, this shows that is a homomorphism. This proof does not work for non-algebraic structures. 
For example, for topological spaces, a morphism is a continuous map, and the inverse of a bijective continuous map is not necessarily continuous. An isomorphism of topological spaces, called homeomorphism or bicontinuous map, is thus a bijective continuous map, whose inverse is also continuous. Endomorphism An endomorphism is a homomorphism whose domain equals the codomain, or, more generally, a morphism whose source is equal to its target. The endomorphisms of an algebraic structure, or of an object of a category, form a monoid under composition. The endomorphisms of a vector space or of a module form a ring. In the case of a vector space or a free module of finite dimension, the choice of a basis induces a ring isomorphism between the ring of endomorphisms and the ring of square matrices of the same dimension. Automorphism An automorphism is an endomorphism that is also an isomorphism. The automorphisms of an algebraic structure or of an object of a category form a group under composition, which is called the automorphism group of the structure. Many groups that have received a name are automorphism groups of some algebraic structure. For example, the general linear group is the automorphism group of a vector space of dimension over a field . The automorphism groups of fields were introduced by Évariste Galois for studying the roots of polynomials, and are the basis of Galois theory. Monomorphism For algebraic structures, monomorphisms are commonly defined as injective homomorphisms. In the more general context of category theory, a monomorphism is defined as a morphism that is left cancelable. This means that a (homo)morphism is a monomorphism if, for any pair , of morphisms from any other object to , then implies . These two definitions of monomorphism are equivalent for all common algebraic structures. More precisely, they are equivalent for fields, for which every homomorphism is a monomorphism, and for varieties of universal algebra, that is algebraic structures for which operations and axioms (identities) are defined without any restriction (the fields do not form a variety, as the multiplicative inverse is defined either as a unary operation or as a property of the multiplication, which are, in both cases, defined only for nonzero elements). In particular, the two definitions of a monomorphism are equivalent for sets, magmas, semigroups, monoids, groups, rings, fields, vector spaces and modules. A split monomorphism is a homomorphism that has a left inverse and thus it is itself a right inverse of that other homomorphism. That is, a homomorphism is a split monomorphism if there exists a homomorphism such that A split monomorphism is always a monomorphism, for both meanings of monomorphism. For sets and vector spaces, every monomorphism is a split monomorphism, but this property does not hold for most common algebraic structures. An injective homomorphism is left cancelable: If one has for every in , the common source of and . If is injective, then , and thus . This proof works not only for algebraic structures, but also for any category whose objects are sets and arrows are maps between these sets. For example, an injective continuous map is a monomorphism in the category of topological spaces. For proving that, conversely, a left cancelable homomorphism is injective, it is useful to consider a free object on . 
Given a variety of algebraic structures a free object on is a pair consisting of an algebraic structure of this variety and an element of satisfying the following universal property: for every structure of the variety, and every element of , there is a unique homomorphism such that . For example, for sets, the free object on is simply ; for semigroups, the free object on is which, as, a semigroup, is isomorphic to the additive semigroup of the positive integers; for monoids, the free object on is which, as, a monoid, is isomorphic to the additive monoid of the nonnegative integers; for groups, the free object on is the infinite cyclic group which, as, a group, is isomorphic to the additive group of the integers; for rings, the free object on is the polynomial ring for vector spaces or modules, the free object on is the vector space or free module that has as a basis. If a free object over exists, then every left cancelable homomorphism is injective: let be a left cancelable homomorphism, and and be two elements of such . By definition of the free object , there exist homomorphisms and from to such that and . As , one has by the uniqueness in the definition of a universal property. As is left cancelable, one has , and thus . Therefore, is injective. Existence of a free object on for a variety (see also ): For building a free object over , consider the set of the well-formed formulas built up from and the operations of the structure. Two such formulas are said equivalent if one may pass from one to the other by applying the axioms (identities of the structure). This defines an equivalence relation, if the identities are not subject to conditions, that is if one works with a variety. Then the operations of the variety are well defined on the set of equivalence classes of for this relation. It is straightforward to show that the resulting object is a free object on . Epimorphism In algebra, epimorphisms are often defined as surjective homomorphisms. On the other hand, in category theory, epimorphisms are defined as right cancelable morphisms. This means that a (homo)morphism is an epimorphism if, for any pair , of morphisms from to any other object , the equality implies . A surjective homomorphism is always right cancelable, but the converse is not always true for algebraic structures. However, the two definitions of epimorphism are equivalent for sets, vector spaces, abelian groups, modules (see below for a proof), and groups. The importance of these structures in all mathematics, especially in linear algebra and homological algebra, may explain the coexistence of two non-equivalent definitions. Algebraic structures for which there exist non-surjective epimorphisms include semigroups and rings. The most basic example is the inclusion of integers into rational numbers, which is a homomorphism of rings and of multiplicative semigroups. For both structures it is a monomorphism and a non-surjective epimorphism, but not an isomorphism. A wide generalization of this example is the localization of a ring by a multiplicative set. Every localization is a ring epimorphism, which is not, in general, surjective. As localizations are fundamental in commutative algebra and algebraic geometry, this may explain why in these areas, the definition of epimorphisms as right cancelable homomorphisms is generally preferred. A split epimorphism is a homomorphism that has a right inverse and thus it is itself a left inverse of that other homomorphism. 
That is, a homomorphism is a split epimorphism if there exists a homomorphism such that A split epimorphism is always an epimorphism, for both meanings of epimorphism. For sets and vector spaces, every epimorphism is a split epimorphism, but this property does not hold for most common algebraic structures. In summary, one has the last implication is an equivalence for sets, vector spaces, modules, abelian groups, and groups; the first implication is an equivalence for sets and vector spaces. Let be a homomorphism. We want to prove that if it is not surjective, it is not right cancelable. In the case of sets, let be an element of that not belongs to , and define such that is the identity function, and that for every except that is any other element of . Clearly is not right cancelable, as and In the case of vector spaces, abelian groups and modules, the proof relies on the existence of cokernels and on the fact that the zero maps are homomorphisms: let be the cokernel of , and be the canonical map, such that . Let be the zero map. If is not surjective, , and thus (one is a zero map, while the other is not). Thus is not cancelable, as (both are the zero map from to ). Kernel Any homomorphism defines an equivalence relation on by if and only if . The relation is called the kernel of . It is a congruence relation on . The quotient set can then be given a structure of the same type as , in a natural way, by defining the operations of the quotient set by , for each operation of . In that case the image of in under the homomorphism is necessarily isomorphic to ; this fact is one of the isomorphism theorems. When the algebraic structure is a group for some operation, the equivalence class of the identity element of this operation suffices to characterize the equivalence relation. In this case, the quotient by the equivalence relation is denoted by (usually read as " mod "). Also in this case, it is , rather than , that is called the kernel of . The kernels of homomorphisms of a given type of algebraic structure are naturally equipped with some structure. This structure type of the kernels is the same as the considered structure, in the case of abelian groups, vector spaces and modules, but is different and has received a specific name in other cases, such as normal subgroup for kernels of group homomorphisms and ideals for kernels of ring homomorphisms (in the case of non-commutative rings, the kernels are the two-sided ideals). Relational structures In model theory, the notion of an algebraic structure is generalized to structures involving both operations and relations. Let L be a signature consisting of function and relation symbols, and A, B be two L-structures. Then a homomorphism from A to B is a mapping h from the domain of A to the domain of B such that h(FA(a1,...,an)) = FB(h(a1),...,h(an)) for each n-ary function symbol F in L, RA(a1,...,an) implies RB(h(a1),...,h(an)) for each n-ary relation symbol R in L. In the special case with just one binary relation, we obtain the notion of a graph homomorphism. Formal language theory Homomorphisms are also used in the study of formal languages and are often briefly referred to as morphisms. Given alphabets and , a function such that for all is called a homomorphism on . If is a homomorphism on and denotes the empty string, then is called an -free homomorphism when for all in . A homomorphism on that satisfies for all is called a -uniform homomorphism. If for all (that is, is 1-uniform), then is also called a coding or a projection. 
The set of words formed from the alphabet may be thought of as the free monoid generated by Here the monoid operation is concatenation and the identity element is the empty word. From this perspective, a language homomorphism is precisely a monoid homomorphism.
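As a concrete illustration of the formal-language case above, a homomorphism on strings is completely determined by the images of the individual letters, and the extended map automatically preserves concatenation and the empty word, which is exactly what makes it a monoid homomorphism from the free monoid. A minimal Python sketch follows; the particular alphabet and letter images are arbitrary choices for illustration.

```python
# Minimal sketch of a string homomorphism h: {a, b}* -> {0, 1}*, determined by
# the images of the individual letters (the letter images below are arbitrary).
letter_images = {"a": "01", "b": "1"}

def h(word: str) -> str:
    """Extend the letter map to whole words; h is then a monoid homomorphism."""
    return "".join(letter_images[c] for c in word)

x, y = "ab", "ba"
assert h(x + y) == h(x) + h(y)   # preserves the operation (concatenation)
assert h("") == ""               # preserves the identity element (the empty word)
print(h("abba"))                 # -> "011101"
```

A map assigning a single letter to each letter (a 1-uniform homomorphism in the terminology above) is the special case of a coding or projection.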
Mathematics
Abstract algebra
null
13570
https://en.wikipedia.org/wiki/Histology
Histology
Histology, also known as microscopic anatomy or microanatomy, is the branch of biology that studies the microscopic anatomy of biological tissues. Histology is the microscopic counterpart to gross anatomy, which looks at larger structures visible without a microscope. Although one may divide microscopic anatomy into organology, the study of organs, histology, the study of tissues, and cytology, the study of cells, modern usage places all of these topics under the field of histology. In medicine, histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. In the field of paleontology, the term paleohistology refers to the histology of fossil organisms. Biological tissues Animal tissue classification There are four basic types of animal tissues: muscle tissue, nervous tissue, connective tissue, and epithelial tissue. All animal tissues are considered to be subtypes of these four principal tissue types (for example, blood is classified as connective tissue, since the blood cells are suspended in an extracellular matrix, the plasma). Plant tissue classification For plants, the study of their tissues falls under the field of plant anatomy, with the following four main types: Dermal tissue Vascular tissue Ground tissue Meristematic tissue Medical histology Histopathology is the branch of histology that includes the microscopic identification and study of diseased tissue. It is an important part of anatomical pathology and surgical pathology, as accurate diagnosis of cancer and other diseases often requires histopathological examination of tissue samples. Trained physicians, frequently licensed pathologists, perform histopathological examination and provide diagnostic information based on their observations. Occupations The field of histology that includes the preparation of tissues for microscopic examination is known as histotechnology. Job titles for the trained personnel who prepare histological specimens for examination are numerous and include histotechnicians, histotechnologists, histology technicians and technologists, medical laboratory technicians, and biomedical scientists. Sample preparation Most histological samples need preparation before microscopic observation; these methods depend on the specimen and method of observation. Fixation Chemical fixatives are used to preserve and maintain the structure of tissues and cells; fixation also hardens tissues which aids in cutting the thin sections of tissue needed for observation under the microscope. Fixatives generally preserve tissues (and cells) by irreversibly cross-linking proteins. The most widely used fixative for light microscopy is 10% neutral buffered formalin, or NBF (4% formaldehyde in phosphate buffered saline). For electron microscopy, the most commonly used fixative is glutaraldehyde, usually as a 2.5% solution in phosphate buffered saline. Other fixatives used for electron microscopy are osmium tetroxide or uranyl acetate. The main action of these aldehyde fixatives is to cross-link amino groups in proteins through the formation of methylene bridges (-CH2-), in the case of formaldehyde, or by C5H10 cross-links in the case of glutaraldehyde. This process, while preserving the structural integrity of the cells and tissue can damage the biological functionality of proteins, particularly enzymes. Formalin fixation leads to degradation of mRNA, miRNA, and DNA as well as denaturation and modification of proteins in tissues. 
However, extraction and analysis of nucleic acids and proteins from formalin-fixed, paraffin-embedded tissues is possible using appropriate protocols. Selection and trimming Selection is the choice of relevant tissue in cases where it is not necessary to put the entire original tissue mass through further processing. The remainder may remain fixed in case it needs to be examined at a later time. Trimming is the cutting of tissue samples in order to expose the relevant surfaces for later sectioning. It also creates tissue samples of appropriate size to fit into cassettes. Embedding Tissues are embedded in a harder medium both as a support and to allow the cutting of thin tissue slices. In general, water must first be removed from tissues (dehydration) and replaced with a medium that either solidifies directly, or with an intermediary fluid (clearing) that is miscible with the embedding media. Paraffin wax For light microscopy, paraffin wax is the most frequently used embedding material. Paraffin is immiscible with water, the main constituent of biological tissue, so it must first be removed in a series of dehydration steps. Samples are transferred through a series of progressively more concentrated ethanol baths, up to 100% ethanol, to remove remaining traces of water. Dehydration is followed by a clearing agent (typically xylene, although other environmentally safe substitutes are in use), which removes the alcohol and is miscible with the wax; finally, melted paraffin wax is added to replace the xylene and infiltrate the tissue. In most histology or histopathology laboratories, the dehydration, clearing, and wax infiltration are carried out in tissue processors which automate this process. Once infiltrated in paraffin, tissues are oriented in molds which are filled with wax; once positioned, the wax is cooled, solidifying the block and tissue. Other materials Paraffin wax does not always provide a sufficiently hard matrix for cutting very thin sections (which are especially important for electron microscopy). Paraffin wax may also be too soft in relation to the tissue, the heat of the melted wax may alter the tissue in undesirable ways, or the dehydrating or clearing chemicals may harm the tissue. Alternatives to paraffin wax include epoxy, acrylic, agar, gelatin, celloidin, and other types of waxes. In electron microscopy, epoxy resins are the most commonly employed embedding media, but acrylic resins are also used, particularly where immunohistochemistry is required. For tissues to be cut in a frozen state, tissues are placed in a water-based embedding medium. Pre-frozen tissues are placed into molds with the liquid embedding material, usually a water-based glycol, OCT, TBS, Cryogen, or resin, which is then frozen to form hardened blocks. Sectioning For light microscopy, a knife mounted in a microtome is used to cut tissue sections (typically 5–15 micrometers thick) which are mounted on a glass microscope slide. For transmission electron microscopy (TEM), a diamond or glass knife mounted in an ultramicrotome is used to cut tissue sections between 50 and 150 nanometers thick. A limited number of manufacturers produce microtomes, including vibrating microtomes (commonly referred to as vibratomes) used primarily in research and clinical studies; Leica Biosystems is a well-known producer of light microscopy products for research and clinical use. 
Staining Biological tissue has little inherent contrast in either the light or electron microscope. Staining is employed both to give contrast to the tissue and to highlight particular features of interest. When the stain is used to target a specific chemical component of the tissue (and not the general structure), the term histochemistry is used. Light microscopy Hematoxylin and eosin (H&E stain) is one of the most commonly used stains in histology to show the general structure of the tissue. Hematoxylin stains cell nuclei blue; eosin, an acidic dye, stains the cytoplasm and other tissues in different shades of pink. In contrast to H&E, which is used as a general stain, there are many techniques that more selectively stain cells, cellular components, and specific substances. A commonly performed histochemical technique that targets a specific chemical is the Perls' Prussian blue reaction, used to demonstrate iron deposits in diseases like hemochromatosis. The Nissl method for Nissl substance and Golgi's method (and related silver stains), which are useful in identifying neurons, are other examples of more specific stains. Historadiography In historadiography, a slide (sometimes stained histochemically) is X-rayed. More commonly, autoradiography is used in visualizing the locations to which a radioactive substance has been transported within the body, such as cells in S phase (undergoing DNA replication) which incorporate tritiated thymidine, or sites to which radiolabeled nucleic acid probes bind in in situ hybridization. For autoradiography on a microscopic level, the slide is typically dipped into liquid nuclear tract emulsion, which dries to form the exposure film. Individual silver grains in the film are visualized with dark field microscopy. Immunohistochemistry Recently, antibodies have been used to specifically visualize proteins, carbohydrates, and lipids. This process is called immunohistochemistry, or when the stain is a fluorescent molecule, immunofluorescence. This technique has greatly increased the ability to identify categories of cells under a microscope. Other advanced techniques, such as nonradioactive in situ hybridization, can be combined with immunochemistry to identify specific DNA or RNA molecules with fluorescent probes or tags that can be used for immunofluorescence and enzyme-linked fluorescence amplification (especially alkaline phosphatase and tyramide signal amplification). Fluorescence microscopy and confocal microscopy are used to detect fluorescent signals with good intracellular detail. Electron microscopy For electron microscopy, heavy metals are typically used to stain tissue sections. Uranyl acetate and lead citrate are commonly used to impart contrast to tissue in the electron microscope. Specialized techniques Cryosectioning Similar to the frozen section procedure employed in medicine, cryosectioning is a method to rapidly freeze, cut, and mount sections of tissue for histology. The tissue is usually sectioned on a cryostat or freezing microtome. The frozen sections are mounted on a glass slide and may be stained to enhance the contrast between different tissues. Unfixed frozen sections can be used for studies requiring enzyme localization in tissues and cells. Tissue fixation is required for certain procedures such as antibody-linked immunofluorescence staining. 
Frozen sections are often prepared during surgical removal of tumors to allow rapid identification of tumor margins, as in Mohs surgery, or determination of tumor malignancy, when a tumor is discovered incidentally during surgery. Ultramicrotomy Ultramicrotomy is a method of preparing extremely thin sections for transmission electron microscope (TEM) analysis. Tissues are commonly embedded in epoxy or other plastic resin. Very thin sections (less than 0.1 micrometer in thickness) are cut using diamond or glass knives on an ultramicrotome. Artifacts Artifacts are structures or features in tissue that interfere with normal histological examination. Artifacts interfere with histology by changing the tissue's appearance and hiding structures. Tissue processing artifacts can include pigments formed by fixatives, shrinkage, washing out of cellular components, color changes in different tissue types, and alterations of the structures in the tissue. An example is mercury pigment left behind after using Zenker's fixative to fix a section. Formalin fixation can also leave a brown to black pigment under acidic conditions. History In the 17th century the Italian Marcello Malpighi used microscopes to study tiny biological entities; some regard him as the founder of the fields of histology and microscopic pathology. Malpighi analyzed several parts of the organs of bats, frogs and other animals under the microscope. While studying the structure of the lung, Malpighi noticed its membranous alveoli and the hair-like connections between veins and arteries, which he named capillaries. His discovery established how the oxygen breathed in enters the blood stream and serves the body. In the 19th century histology was an academic discipline in its own right. The French anatomist Xavier Bichat introduced the concept of tissue in anatomy in 1801, and the term "histology", coined to denote the "study of tissues", first appeared in a book by Karl Meyer in 1819. Bichat described twenty-one human tissues, which can be subsumed under the four categories currently accepted by histologists. The usage of illustrations in histology, deemed useless by Bichat, was promoted by Jean Cruveilhier. In the early 1830s Purkynĕ invented a microtome with high precision. During the 19th century many fixation techniques were developed by Adolph Hannover (solutions of chromates and chromic acid), Franz Schulze and Max Schultze (osmic acid), Alexander Butlerov (formaldehyde) and Benedikt Stilling (freezing). Mounting techniques were developed by Rudolf Heidenhain (1824–1898), who introduced gum Arabic; Salomon Stricker (1834–1898), who advocated a mixture of wax and oil; and Andrew Pritchard (1804–1884) who, in 1832, used a gum/isinglass mixture. In the same year, Canada balsam appeared on the scene, and in 1869 Edwin Klebs (1834–1913) reported that he had for some years embedded his specimens in paraffin. The 1906 Nobel Prize in Physiology or Medicine was awarded to histologists Camillo Golgi and Santiago Ramón y Cajal. They had conflicting interpretations of the neural structure of the brain based on differing interpretations of the same images. Ramón y Cajal won the prize for his correct theory, and Golgi for the silver-staining technique that he invented to make it possible. 
Future directions In vivo histology There is interest in developing techniques for in vivo histology (predominantly using MRI), which would enable doctors to non-invasively gather information about healthy and diseased tissues in living patients, rather than from fixed tissue samples.
Biology and health sciences
Basic anatomy
Biology
13586
https://en.wikipedia.org/wiki/HTTPS
HTTPS
Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol (HTTP). It uses encryption for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS) or, formerly, Secure Sockets Layer (SSL). The protocol is therefore also referred to as HTTP over TLS, or HTTP over SSL. The principal motivations for HTTPS are authentication of the accessed website and protection of the privacy and integrity of the exchanged data while it is in transit. It protects against man-in-the-middle attacks, and the bidirectional block cipher encryption of communications between a client and server protects the communications against eavesdropping and tampering. The authentication aspect of HTTPS requires a trusted third party to sign server-side digital certificates. This was historically an expensive operation, which meant fully authenticated HTTPS connections were usually found only on secured payment transaction services and other secured corporate information systems on the World Wide Web. In 2016, a campaign by the Electronic Frontier Foundation with the support of web browser developers led to the protocol becoming more prevalent. HTTPS is now used more often by web users than the original, non-secure HTTP, primarily to protect page authenticity on all types of websites, secure accounts, and keep user communications, identity, and web browsing private. Overview The Uniform Resource Identifier (URI) scheme HTTPS has identical usage syntax to the HTTP scheme. However, HTTPS signals the browser to use an added encryption layer of SSL/TLS to protect the traffic. SSL/TLS is especially suited for HTTP, since it can provide some protection even if only one side of the communication is authenticated. This is the case with HTTP transactions over the Internet, where typically only the server is authenticated (by the client examining the server's certificate). HTTPS creates a secure channel over an insecure network. This ensures reasonable protection from eavesdroppers and man-in-the-middle attacks, provided that adequate cipher suites are used and that the server certificate is verified and trusted. Because HTTPS piggybacks HTTP entirely on top of TLS, the entirety of the underlying HTTP protocol can be encrypted. This includes the request's URL, query parameters, headers, and cookies (which often contain identifying information about the user). However, because website addresses and port numbers are necessarily part of the underlying TCP/IP protocols, HTTPS cannot protect their disclosure. In practice this means that even on a correctly configured web server, eavesdroppers can infer the IP address and port number of the web server, and sometimes even the domain name (e.g. www.example.org, but not the rest of the URL) that a user is communicating with, along with the amount of data transferred and the duration of the communication, though not the content of the communication. Web browsers know how to trust HTTPS websites based on certificate authorities that come pre-installed in their software. Certificate authorities are in this way being trusted by web browser creators to provide valid certificates. Therefore, a user should trust an HTTPS connection to a website if and only if all of the following are true: The user trusts that their device, hosting the browser and the method to get the browser itself, is not compromised (i.e. there is no supply chain attack). 
The user trusts that the browser software correctly implements HTTPS with correctly pre-installed certificate authorities. The user trusts the certificate authority to vouch only for legitimate websites (i.e. the certificate authority is not compromised and there is no mis-issuance of certificates). The website provides a valid certificate, which means it was signed by a trusted authority. The certificate correctly identifies the website (e.g., when the browser visits "https://example.com", the received certificate is properly for "example.com" and not some other entity). The user trusts that the protocol's encryption layer (SSL/TLS) is sufficiently secure against eavesdroppers. HTTPS is especially important over insecure networks and networks that may be subject to tampering. Insecure networks, such as public Wi-Fi access points, allow anyone on the same local network to packet-sniff and discover sensitive information not protected by HTTPS. Additionally, some free-to-use and paid WLAN networks have been observed tampering with webpages by engaging in packet injection in order to serve their own ads on other websites. This practice can be exploited maliciously in many ways, such as by injecting malware onto webpages and stealing users' private information. HTTPS is also important for connections over the Tor network, as malicious Tor nodes could otherwise damage or alter the contents passing through them in an insecure fashion and inject malware into the connection. This is one reason why the Electronic Frontier Foundation and the Tor Project started the development of HTTPS Everywhere, which is included in Tor Browser. As more information is revealed about global mass surveillance and criminals stealing personal information, the use of HTTPS security on all websites is becoming increasingly important regardless of the type of Internet connection being used. Even though metadata about individual pages that a user visits might not be considered sensitive, when aggregated it can reveal a lot about the user and compromise the user's privacy. Deploying HTTPS also allows the use of HTTP/2 and HTTP/3 (and their predecessors SPDY and QUIC), which are new HTTP versions designed to reduce page load times, size, and latency. It is recommended to use HTTP Strict Transport Security (HSTS) with HTTPS to protect users from man-in-the-middle attacks, especially SSL stripping. HTTPS should not be confused with the seldom-used Secure HTTP (S-HTTP) specified in RFC 2660. Usage in websites 33.2% of the Alexa top 1,000,000 websites use HTTPS as the default and 70% of page loads (measured by Firefox Telemetry) use HTTPS. 58.4% of the Internet's 135,422 most popular websites have a secure implementation of HTTPS. However, despite TLS 1.3's release in 2018, adoption has been slow, with many sites still remaining on the older TLS 1.2 protocol. Browser integration Most browsers display a warning if they receive an invalid certificate. Older browsers, when connecting to a site with an invalid certificate, would present the user with a dialog box asking whether they wanted to continue. Newer browsers display a warning across the entire window. Newer browsers also prominently display the site's security information in the address bar. Extended validation certificates show the legal entity on the certificate information. Most browsers also display a warning to the user when visiting a site that contains a mixture of encrypted and unencrypted content. 
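The certificate checks that browsers perform before showing security information or a warning can be reproduced with a standard TLS library. The following minimal Python sketch (the hostname example.org is only a placeholder) connects the way a browser would, validating the server certificate against the system's trusted certificate authorities and checking the hostname, then prints a few certificate fields; an invalid certificate causes the handshake to fail with an error instead.

```python
import socket
import ssl

# Minimal sketch: connect to an HTTPS site as a browser would, validating the
# server certificate against the system's trusted certificate authorities and
# checking that it matches the hostname. "example.org" is only an example host.
hostname = "example.org"
context = ssl.create_default_context()  # loads trusted CAs, enables hostname checking

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("TLS version: ", tls.version())
        print("Issued to:   ", dict(x[0] for x in cert["subject"]))
        print("Issued by:   ", dict(x[0] for x in cert["issuer"]))
        print("Valid until: ", cert["notAfter"])
```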
Additionally, many web filters return a security warning when visiting prohibited websites. The Electronic Frontier Foundation, opining that "In an ideal world, every web request could be defaulted to HTTPS", has provided an add-on called HTTPS Everywhere for Mozilla Firefox, Google Chrome, Chromium, and Android, which enables HTTPS by default for hundreds of frequently used websites. Forcing a web browser to load only HTTPS content has been supported in Firefox starting in version 83. Starting in version 94, Google Chrome is able to "always use secure connections" if toggled in the browser's settings. Security The security of HTTPS is that of the underlying TLS, which typically uses long-term public and private keys to generate a short-term session key, which is then used to encrypt the data flow between the client and the server. X.509 certificates are used to authenticate the server (and sometimes the client as well). As a consequence, certificate authorities and public key certificates are necessary to verify the relation between the certificate and its owner, as well as to generate, sign, and administer the validity of certificates. While this can be more beneficial than verifying the identities via a web of trust, the 2013 mass surveillance disclosures drew attention to certificate authorities as a potential weak point allowing man-in-the-middle attacks. An important property in this context is forward secrecy, which ensures that encrypted communications recorded in the past cannot be retrieved and decrypted should long-term secret keys or passwords be compromised in the future. Not all web servers provide forward secrecy. For HTTPS to be effective, a site must be completely hosted over HTTPS. If some of the site's contents are loaded over HTTP (scripts or images, for example), or if only a certain page that contains sensitive information, such as a log-in page, is loaded over HTTPS while the rest of the site is loaded over plain HTTP, the user will be vulnerable to attacks and surveillance. Additionally, cookies on a site served through HTTPS must have the secure attribute enabled. On a site that has sensitive information on it, the user and the session will get exposed every time that site is accessed with HTTP instead of HTTPS. Technical Difference from HTTP HTTPS URLs begin with "https://" and use port 443 by default, whereas, HTTP URLs begin with "http://" and use port 80 by default. HTTP is not encrypted and thus is vulnerable to man-in-the-middle and eavesdropping attacks, which can let attackers gain access to website accounts and sensitive information, and modify webpages to inject malware or advertisements. HTTPS is designed to withstand such attacks and is considered secure against them (with the exception of HTTPS implementations that use deprecated versions of SSL). Network layers HTTP operates at the highest layer of the TCP/IP model—the application layer; as does the TLS security protocol (operating as a lower sublayer of the same layer), which encrypts an HTTP message prior to transmission and decrypts a message upon arrival. Strictly speaking, HTTPS is not a separate protocol, but refers to the use of ordinary HTTP over an encrypted SSL/TLS connection. HTTPS encrypts all message contents, including the HTTP headers and the request/response data. 
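The layering described above can be made explicit in code: HTTPS is plain HTTP written over a TLS-wrapped TCP connection to port 443, rather than a separate protocol. The following minimal Python sketch (example.org is a placeholder host) wraps a TCP socket in TLS and then sends an ordinary HTTP/1.1 request over it; everything at the HTTP layer, headers included, is encrypted by the TLS layer beneath it.

```python
import socket
import ssl

# Minimal sketch of the layering described above: HTTPS is plain HTTP spoken over
# a TLS-encrypted TCP connection to port 443 (instead of port 80 for plain HTTP).
host = "example.org"  # placeholder host
context = ssl.create_default_context()

with socket.create_connection((host, 443)) as tcp:                # TCP layer
    with context.wrap_socket(tcp, server_hostname=host) as tls:   # TLS layer
        request = (
            f"GET / HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"Connection: close\r\n"
            f"\r\n"
        )
        tls.sendall(request.encode("ascii"))   # HTTP layer: encrypted by TLS in transit
        response = b""
        while chunk := tls.recv(4096):
            response += chunk

print(response.split(b"\r\n", 1)[0].decode())  # e.g. "HTTP/1.1 200 OK"
```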
With the exception of the possible CCA cryptographic attack described in the limitations section below, an attacker should at most be able to discover that a connection is taking place between two parties, along with their domain names and IP addresses. Server setup To prepare a web server to accept HTTPS connections, the administrator must create a public key certificate for the web server. This certificate must be signed by a trusted certificate authority for the web browser to accept it without warning. The authority certifies that the certificate holder is the operator of the web server that presents it. Web browsers are generally distributed with a list of signing certificates of major certificate authorities so that they can verify certificates signed by them. Acquiring certificates A number of commercial certificate authorities exist, offering paid-for SSL/TLS certificates of a number of types, including Extended Validation Certificates. Let's Encrypt, launched in April 2016, provides a free and automated service that delivers basic SSL/TLS certificates to websites. According to the Electronic Frontier Foundation, Let's Encrypt will make switching from HTTP to HTTPS "as easy as issuing one command, or clicking one button." The majority of web hosts and cloud providers now leverage Let's Encrypt, providing free certificates to their customers. Use as access control The system can also be used for client authentication in order to limit access to a web server to authorized users. To do this, the site administrator typically creates a certificate for each user, which the user loads into their browser. Normally, the certificate contains the name and e-mail address of the authorized user and is automatically checked by the server on each connection to verify the user's identity, potentially without even requiring a password. In case of compromised secret (private) key An important property in this context is perfect forward secrecy (PFS). Possessing one of the long-term asymmetric secret keys used to establish an HTTPS session should not make it easier to derive the short-term session key to then decrypt the conversation, even at a later time. Diffie–Hellman key exchange (DHE) and Elliptic-curve Diffie–Hellman key exchange (ECDHE) were, as of 2013, the only schemes known to have that property. In 2013, only 30% of Firefox, Opera, and Chromium Browser sessions used it, and nearly 0% of Apple's Safari and Microsoft Internet Explorer sessions. TLS 1.3, published in August 2018, dropped support for ciphers without forward secrecy. In more recent surveys, 96.6% of web servers surveyed support some form of forward secrecy, and 52.1% will use forward secrecy with most browsers; in the latest such survey, 99.6% of web servers surveyed support some form of forward secrecy, and 75.2% will use forward secrecy with most browsers. Certificate revocation A certificate may be revoked before it expires, for example because the secrecy of the private key has been compromised. Newer versions of popular browsers such as Firefox, Opera, and Internet Explorer on Windows Vista implement the Online Certificate Status Protocol (OCSP) to verify that this is not the case. The browser sends the certificate's serial number to the certificate authority or its delegate via OCSP and the authority responds, telling the browser whether the certificate is still valid or not. The CA may also issue a CRL to tell people that these certificates are revoked. 
CRLs are no longer required by the CA/Browser Forum; nevertheless, they are still widely used by CAs. Most revocation statuses on the Internet disappear soon after the expiration of the certificates. Limitations SSL (Secure Sockets Layer) and TLS (Transport Layer Security) encryption can be configured in two modes: simple and mutual. In simple mode, authentication is only performed by the server. The mutual version requires the user to install a personal client certificate in the web browser for user authentication. In either case, the level of protection depends on the correctness of the implementation of the software and the cryptographic algorithms in use. SSL/TLS does not prevent the indexing of the site by a web crawler, and in some cases the URI of the encrypted resource can be inferred by knowing only the intercepted request/response size. This allows an attacker to have access to the plaintext (the publicly available static content), and the encrypted text (the encrypted version of the static content), permitting a cryptographic attack. Because TLS operates at a protocol level below that of HTTP and has no knowledge of the higher-level protocols, TLS servers can only strictly present one certificate for a particular address and port combination. In the past, this meant that it was not feasible to use name-based virtual hosting with HTTPS. A solution called Server Name Indication (SNI) exists, which sends the hostname to the server before encrypting the connection, although older browsers do not support this extension. Support for SNI is available since Firefox 2, Opera 8, Apple Safari 2.1, Google Chrome 6, and Internet Explorer 7 on Windows Vista. A sophisticated type of man-in-the-middle attack called SSL stripping was presented at the 2009 Black Hat Conference. This type of attack defeats the security provided by HTTPS by changing the HTTPS link into an HTTP link, taking advantage of the fact that few Internet users actually type "https" into their browser interface: they get to a secure site by clicking on a link, and thus are fooled into thinking that they are using HTTPS when in fact they are using HTTP. The attacker then communicates in the clear with the client. This prompted the development of a countermeasure in HTTP called HTTP Strict Transport Security (illustrated in the sketch at the end of this section). HTTPS has been shown to be vulnerable to a range of traffic analysis attacks. Traffic analysis attacks are a type of side-channel attack that relies on variations in the timing and size of traffic in order to infer properties about the encrypted traffic itself. Traffic analysis is possible because SSL/TLS encryption changes the contents of traffic, but has minimal impact on the size and timing of traffic. In May 2010, researchers from Microsoft Research and Indiana University found that detailed sensitive user data can be inferred from side channels such as packet sizes. They showed that, despite HTTPS protection in several high-profile web applications in healthcare, taxation, investment, and web search, an eavesdropper could infer the illnesses/medications/surgeries of the user, the user's family income, and investment secrets. The fact that most modern websites, including Google, Yahoo!, and Amazon, use HTTPS causes problems for many users trying to access public Wi-Fi hot spots, because a captive portal Wi-Fi hot spot login page fails to load if the user tries to open an HTTPS resource. Several websites, such as NeverSSL, guarantee that they will always remain accessible by HTTP.
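The HTTP Strict Transport Security countermeasure mentioned above works by having the server send a Strict-Transport-Security response header over HTTPS; a compliant browser then refuses to contact the site over plain HTTP for the stated period, which blunts the SSL-stripping downgrade. The toy handler below only illustrates the header (the max-age value is just an example); to be meaningful it would still have to be served over TLS, for instance as in the earlier server sketch.

```python
# Toy sketch of the HSTS countermeasure: after one successful HTTPS visit,
# a compliant browser remembers this header and will not fetch the site
# over plain HTTP for max-age seconds (one year here).
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello over HTTPS\n"
        self.send_response(200)
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # For the header to be honoured it must be delivered over TLS,
    # e.g. by wrapping the server socket as in the earlier server sketch.
    HTTPServer(("127.0.0.1", 8443), HSTSHandler).serve_forever()
```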
History Netscape Communications created HTTPS in 1994 for its Netscape Navigator web browser. Originally, HTTPS was used with the SSL protocol. As SSL evolved into Transport Layer Security (TLS), HTTPS was formally specified by RFC 2818 in May 2000. Google announced in February 2018 that its Chrome browser would mark HTTP sites as "Not Secure" after July 2018. This move was to encourage website owners to implement HTTPS, as an effort to make the World Wide Web more secure.
Technology
Internet
null
13595
https://en.wikipedia.org/wiki/Heathrow%20Airport
Heathrow Airport
London Heathrow Airport colloquially known as Heathrow () and named London Airport until 1966, is the primary and largest international airport serving London, the capital and most populous city of England and the United Kingdom. It is the largest of the six international airports in the London airport system (the others being Gatwick, Stansted, Luton, City and Southend). The airport is owned and operated by Heathrow Airport Holdings. In 2023, Heathrow was the busiest airport in Europe, the fourth-busiest airport in the world by passenger traffic and the second-busiest airport in the world by international passenger traffic. As of 2023, Heathrow is the airport with the most international connections in the world. Heathrow was founded as a small airfield in 1930 but was developed into a much larger airport after World War II. It lies west of Central London on a site that covers . It was gradually expanded over 75 years and now has two parallel east–west runways, four operational passengers terminals and one cargo terminal. The airport is the primary hub for British Airways and Virgin Atlantic. Location Heathrow is west of Central London. It is located west of Hounslow, south of Hayes, and north-east of Staines-upon-Thames. Heathrow falls entirely within the boundaries of the London Borough of Hillingdon, and under the Twickenham postcode area, with the postcode TW6. It is surrounded by the villages of Sipson, Harlington, Harmondsworth, and Longford to the north and the neighbourhoods of Cranford and Hatton to the east. To the south lie Feltham, Bedfont, and Stanwell while to the west Heathrow is separated from Slough, Horton and Windsor in Berkshire by the M25 motorway. The airport is located within the Hayes and Harlington parliamentary constituency. As the airport is located west of London and as its runways run east–west, an aircraft's landing approach is usually directly over the Greater London Urban Area when the wind has a westerly component—as it often has. The airport forms part of a travel to work area consisting of (most of) Greater London, and neighbouring parts of the surrounding Home Counties. History Heathrow Airport began in 1929 as a small airfield (Great West Aerodrome) on land southeast of the hamlet of Heathrow from which the airport takes its name. At that time the land consisted of farms, market gardens and orchards; there was a "Heathrow Farm" approximately where the modern Terminal 2 is situated, a "Heathrow Hall" and a "Heathrow House." This hamlet was largely along a country lane (Heathrow Road), which ran roughly along the east and south edges of the present central terminals area. Development of the whole Heathrow area as a much larger airport began in 1944 during World War II. It was intended for long-distance military aircraft bound for the Far East. By the time some of the airfield's runways were usable, World War II had ended, and the UK Government continued to develop the site as a civil airport. The airport was opened on 25 March 1946 as London Airport. The airport was renamed Heathrow Airport in the last week of September 1966, to avoid confusion with the other two airports which serve London, Gatwick and Stansted. The design for the airport was by Sir Frederick Gibberd. He set out the original terminals and central-area buildings, including the original control tower and the multi-faith Chapel of St George's. Operations Facilities Heathrow Airport is used by over 89 airlines flying to 214 destinations in 84 countries. 
The airport is the primary hub of British Airways and is a base for Virgin Atlantic. It has four passenger terminals (numbered 2 to 5) and a cargo terminal. In 2021 Heathrow served 19.4 million passengers, of which 17 million were international and 2.4 million domestic. The busiest year ever recorded was 2019 when 80.9 million passengers travelled through the airport. Heathrow is the UK's largest port by value with a network of over 218 destinations worldwide. The busiest single destination in passenger numbers is New York, with over threemillion passengers flying between Heathrow and JFK Airport in 2021. In the 1950s, Heathrow had six runways, arranged in three pairs at different angles in the shape of a hexagram with the permanent passenger terminal in the middle and the older terminal along the north edge of the field; two of its runways would always be within 30° of the wind direction. As the required length for runways has grown, Heathrow now has only two parallel runways running east–west. These are extended versions of the two east–west runways from the original hexagram. From the air, almost all of the original runways can still be seen, incorporated into the present system of taxiways. North of the northern runway and the former taxiway and aprons, now the site of extensive car parks, is the entrance to the access tunnel and the site of Heathrow's unofficial "gate guardian". For many years the home of a 40% scale model of a British Airways Concorde, G-CONC; the site has been occupied by a model of an Emirates Airbus A380 since 2008. Heathrow Airport has Anglican, Catholic, Free Church, Hindu, Jewish, Muslim and Sikh chaplains. There is a multi-faith prayer room and counselling room in each terminal, in addition to St. George's Interdenominational Chapel in an underground vault adjacent to the old control tower, where Christian services take place. The chaplains organise and lead prayers at certain times in the prayer room. The airport has its resident press corps, consisting of six photographers and one TV crew, serving all the major newspapers and television stations around the world. Most of Heathrow's internal roads’ names are coded by their first letter: N in the north (e.g. Newall Road), E in the east (e.g. Elmdon Road), S in the south (e.g. Stratford Road), W in the west (e.g. Walrus Road), C in the centre (e.g. Camborne Road). Cargo The top cargo export destinations include the United States, China and the United Arab Emirates handling 1.4 million tonnes of cargo in 2022. The top products exported were books, salmon and medicine. Flight movements Aircraft destined for Heathrow are usually routed to one of four holding points. Air traffic controllers at Heathrow Approach Control (based in Swanwick, Hampshire) then guide the aircraft to their final approach, merging aircraft from the four holds into a single stream of traffic, sometimes as close as apart. Considerable use is made of continuous descent approach techniques to minimise the environmental effects of incoming aircraft, particularly at night. Once an aircraft is established on its final approach, control is handed over to Heathrow Tower. When runway alternation was introduced, aircraft generated significantly more noise on departure than when landing, so a preference for westerly operations during daylight was introduced, which continues to this day. In this mode, aircraft take off towards the west and land from the east over London, thereby minimising the impact of noise on the most densely populated areas. 
Heathrow's two runways generally operate in segregated mode, whereby landings are allocated to one runway and takeoffs to the other. To further reduce noise nuisance, the use of runways 27R and 27L is swapped at 15:00 each day if the wind is from the west. When landings are easterly there is no alternation; 09L remains the landing runway and 09R the takeoff runway due to the legacy of the now rescinded Cranford Agreement, pending taxiway works to allow the roles to be reversed. Occasionally, landings are allowed on the nominated departure runway, to help reduce airborne delays and to position landing aircraft closer to their terminal, reducing taxi times. Night-time flights at Heathrow are subject to restrictions. Between 23:00 and 04:00, the noisiest aircraft (rated QC/8 and QC/16) cannot be scheduled for operation. Also, during the night quota period (23:30–06:00) there are four limits: A limit on the number of flights allowed. A Quota Count system which limits the total amount of noise permitted, but allows operators to choose to operate fewer noisy aircraft or a greater number of quieter planes. QC/4 aircraft cannot be scheduled for operation. A voluntary agreement with the airlines that no early-morning arrivals will be scheduled to land before 04:30. A trial of "noise-relief zones" ran from December 2012 to March 2013, which concentrated approach flight paths into defined areas compared with the existing paths which were spread out. The zones used alternated weekly, meaning residents in the "no-fly" areas received respite from aircraft noise for set periods. However, it was concluded that some residents in other areas experienced more noise as a consequence of the trial and that it should therefore not be taken forward in its current form. Heathrow received more than 25,000 noise complaints in just three months over the summer of 2016, but around half were made by the same ten people. In 2017, Heathrow introduced "Fly Quiet & Green", a quarterly published league table (suspended in 2020 due to the Covid pandemic) that awards points to the 50 busiest airlines at the airport, ostensibly based on their performance relative to each other across a range of seven environmental benchmarks, such as emissions. Heathrow has acknowledged, but not attempted to refute, criticism over discrepancies and a lack of transparency over the way in which the figures are calculated. The airport has always refused to publish a breakdown showing how many "Fly Quiet points" each performance benchmark has contributed towards the total score it awards to an airline, thereby putting obstacles in the way of any independent auditing of the published results. Among other criticisms of the league table are the unexplained omission of some of the poorer performers among the 50 busiest airlines and the emphasis on relative rather than absolute performance, so an airline could well improve its "Fly Quiet" score quarter-on-quarter even if its environmental performance had in fact worsened over the period. In October 2024, Heathrow finally reinstated the programme, rebadged as “Fly Quieter & Greener”. Two more environmental benchmarks were added to the previous seven, but in all other respects the aforementioned deficiencies of the original scheme remain. Due to the COVID-19 pandemic Heathrow has seen a large increase in cargo-only flights, not only by already established carriers at the airport operating cargo-only flights using passenger aircraft but also by several cargo-only airlines. 
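The night Quota Count system described above is essentially a noise budget: each night movement consumes points according to the aircraft's noise rating, so an operator can trade a few noisy movements against many quieter ones. The toy sketch below illustrates only the arithmetic of such a budget; the QC ratings, movement counts and quota figure are made up for the example and are not Heathrow's actual values, which are set by the UK Department for Transport.

```python
# Hypothetical illustration of a night-quota "Quota Count" budget.
# All figures below are invented purely for the example.
night_quota_points = 100.0  # hypothetical seasonal budget

planned_movements = [
    # (description, hypothetical QC rating per movement, number of movements)
    ("quieter narrow-body", 0.5, 120),
    ("older wide-body", 2.0, 15),
]

used = sum(qc * n for _, qc, n in planned_movements)
print(f"QC points used: {used:.1f} of {night_quota_points:.1f}")
print("within quota" if used <= night_quota_points else "over quota")
```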
Arrival stacks Inbound aircraft to London Heathrow Airport typically follow one of several Standard Arrival Routes (STARs). The STARs each terminate at one of four different VOR installations, and these also define four "stacks" where aircraft can be held if necessary until they are cleared to begin their approach to land. Stacks are sections of airspace where inbound aircraft will normally use the pattern closest to their arrival route. They can be visualised as a helix in the sky. Each stack descends in intervals from down to . Aircraft hold between at 1,000-foot intervals. If these holds become full, aircraft are held at more distant points before being cleared onward to one of the four main holds. The following four stacks are currently in place: The Bovingdon stack is for arrivals from the northwest. It extends above the village of Bovingdon and the town of Chesham, and uses the VOR BNN ("Bovingdon"), which is situated on the former RAF Bovingdon airfield. The Biggin Hill stack on the southeast edge of Greater London is for arrivals from the southeast. It uses the VOR BIG ("Biggin"), which is situated on London Biggin Hill Airport. The Lambourne stack in Essex is for arrivals from the northeast. It uses the VOR LAM ("Lambourne"), which is situated adjacent to Stapleford Aerodrome. The Ockham stack in Surrey is for arrivals from the southwest. It uses the VOR OCK ("Ockham"), which is situated on the former Wisley Airfield. In high-traffic situations, air traffic controllers can opt to use a number of RNAV STARs either to send traffic to a non-standard stack or to move traffic from one stack to another. These are not allowed to be used for flight planning and will be assigned by ATC tactically. Third runway In September 2012, the British government established the Airports Commission, an independent commission chaired by Sir Howard Davies to examine various options for increasing capacity at UK airports. In July 2015, the commission backed a third runway at Heathrow, which the government approved in October 2016. However, the Court of Appeal rejected this plan, on the basis that the government failed to consider climate change and the environmental impact of aviation. On 16 December 2020, the UK Supreme Court lifted the ban on the third runway expansion, allowing the construction plan to go ahead. Regulation Until it was required to sell Gatwick and Stansted Airports, Heathrow Airport Holdings, owned mostly by FGP and Qatar Investment Authority and CDPQ held a dominant position in the London aviation market and has been heavily regulated by the Civil Aviation Authority (CAA) as to how much it can charge airlines to land. The annual increase in landing charge per passenger was capped at inflation minus 3% until 1 April 2003. From 2003 to 2007 charges increased by inflation plus 6.5% per year, taking the fee to £9.28 per passenger in 2007. In March 2008, the CAA announced that the charge would be allowed to increase by 23.5% to £12.80 from 1 April 2008 and by inflation plus 7.5% for each of the following four years. In April 2013, the CAA announced a proposal for Heathrow to charge fees calculated by inflation minus 1.3%, continuing until 2019. Whilst the charges for landing at Heathrow are determined by the CAA and Heathrow Airport Holdings, the allocation of landing slots to airlines is carried out by Airport Co-ordination Limited (ACL). Until 2008, air traffic between Heathrow and the United States was strictly governed by the countries' bilateral Bermuda II treaty. 
The treaty originally allowed only British Airways, Pan Am and TWA to fly from Heathrow to designated gateways in the US. In 1991, Pan Am and TWA sold their rights to United Airlines and American Airlines respectively, while Virgin Atlantic was added to the list of airlines allowed to operate on these routes. The Bermuda II Air Service Agreement was superseded by a new "open skies" agreement that was signed by the United States and the European Union on 30 April 2007 and came into effect on 30 March 2008. Shortly afterwards, additional US airlines, including Northwest Airlines, Continental Airlines, US Airways and Delta Air Lines started services to Heathrow after previously having to use Gatwick Airport. Following Brexit, the US and UK signed a new US-UK Air Transport Agreement in November 2020 incorporating the essential elements of Open Skies, which came into effect in March 2021. The airport was criticised in 2007 for overcrowding and delays; according to Heathrow Airport Holdings, Heathrow's facilities were originally designed to accommodate 55million passengers annually. The number of passengers using the airport reached a record 70million in 2012. In 2007 the airport was voted the world's least favourite, alongside Chicago O'Hare, in a TripAdvisor survey. However, the opening of Terminal 5 in 2008 has relieved some pressure on terminal facilities, increasing the airport's terminal capacity to 90million passengers per year. A tie-up is also in place with McLaren Applied Technologies to optimise the general procedure, reducing delays and pollution. With only two runways operating at over 98% of their capacity, Heathrow has little room for more flights, although the use of larger aircraft such as the Airbus A380 has allowed some increase in passenger numbers. It is difficult for existing airlines to obtain landing slots to enable them to increase their services from the airport, or for new airlines to start operations. To increase the number of flights, Heathrow Airport Holdings has proposed using the existing two runways in 'mixed mode' whereby aircraft would be allowed to take off and land on the same runway. This would increase the airport's capacity from its current 480,000 movements per year to as many as 550,000 according to former British Airways CEO Willie Walsh. Heathrow Airport Holdings has also proposed building a third runway to the north of the airport, which would significantly increase traffic capacity. Security Policing of the airport is the responsibility of the aviation security, a unit of the Metropolitan Police, although the British Army, including armoured vehicles of the Household Cavalry, has occasionally been deployed at the airport during periods of heightened security. Full body scanners are now used at the airport, and passengers who refuse to use them are required to submit to a hand search in a private room. The scanners display passengers' bodies as cartoon figures, with indicators showing where concealed items may be. For many decades Heathrow had a reputation for theft from baggage by baggage handlers. This led to the airport being nicknamed "Thiefrow", with periodic arrests of baggage handlers. Following the widespread disruption caused by reports of drone sightings at Gatwick Airport, and a subsequent incident at Heathrow, a drone-detection system was installed airport-wide to attempt to combat disruption caused by the illegal use of drones. 
Terminal 2 The airport's newest terminal, officially known as the Queen's Terminal, was opened on 4 June 2014 and has 24 gates. Designed by Spanish architect Luis Vidal, it was built on the site that had been occupied by the original Terminal 2 and the Queens Building. The main complex was completed in November 2013 and underwent six months of testing before opening to passengers. It includes a satellite pier (T2B), a 1,340-space car park, and a cooling station to generate chilled water. There are 52 shops and 17 bars and restaurants. The airlines moved from their original locations over six months, with only 10% of flights operating from there in the first six weeks (United Airlines' transatlantic flights) to avoid the opening problems seen at Terminal 5. On 4 June 2014, United became the first airline to move into Terminal 2 from Terminals 1 and 4 followed by All Nippon Airways, Air Canada and Air China from Terminal 3. Air New Zealand, Asiana Airlines, Croatia Airlines, LOT Polish Airlines, South African Airways, and TAP Air Portugal moved in on 22 October 2014. Flights using Terminal 2 primarily originate from northern Europe or western Europe. It is primarily used by Star Alliance airlines (consolidating the airlines under Star Alliance's co-location policy "Move Under One Roof"). The terminal is also used by a few non-aligned airlines. Terminal 2 is one of the two terminals that operate UK and Irish domestic flights. Although Scandinavian Airlines is now part of the SkyTeam alliance as of 1 September, 2024, it still uses Terminal 2. The original Terminal 2 opened as the Europa Building in 1955 and was the airport's oldest terminal. It had an area of and was designed to handle around 1.2million passengers annually. In its final years, it accommodated up to 8million. A total of 316million passengers passed through the terminal in its lifetime. The building was demolished in 2010, along with the Queens Building which had housed airline company offices. Terminal 3 Terminal 3 opened as the Oceanic Terminal on 13 November 1961 to handle flight departures for long-haul routes for foreign carriers to the United States and Asia. At this time the airport had a direct helicopter service to central London from the gardens on the roof of the terminal building. Renamed Terminal 3 in 1968, it was expanded in 1970 with the addition of an arrivals building. Other facilities added included the UK's first moving walkways. In 2006, the new £105million Pier 6 was completed to accommodate the Airbus A380 superjumbo; Emirates and Qantas operate regular flights from Terminal 3 using the Airbus A380. Redevelopment of Terminal 3's forecourt by the addition of a new four-lane drop-off area and a large pedestrianised plaza, complete with a canopy to the front of the terminal building, was completed in 2007. These improvements were intended to improve passengers' experience, reduce traffic congestion and improve security. As part of this project, Virgin Atlantic was assigned its dedicated check-in area, known as 'Zone A', which features a large sculpture and atrium. , Terminal 3 has an area of with 28 gates, and in 2011 it handled 19.8million passengers on 104,100flights. Most flights from Terminal 3 are long-haul flights from North America, Asia and other foreign countries other than Europe. 
Terminal 3 is home to Oneworld members (with the exception of Malaysia Airlines, Qatar Airways and Royal Air Maroc, all of which use Terminal 4), SkyTeam members Aeroméxico, China Airlines, Delta Air Lines, Middle East Airlines, Virgin Atlantic, and several long haul unaffiliated carriers. British Airways also operates several flights from this terminal, as do Iberia and Vueling. Terminal 4 Opened in 1986, Terminal 4 has 22 gates. It is situated to the south of the southern runway next to the cargo terminal and is connected to Terminals 2 and 3 by the Heathrow Cargo Tunnel. The terminal has an area of and is now home to the SkyTeam alliance; except Scandinavian Airlines which uses Terminal 2, and China Airlines, Aeroméxico, Delta Air Lines, Middle East Airlines, and Virgin Atlantic which use Terminal 3 - Oneworld carriers Malaysia Airlines, Qatar Airways, Royal Air Maroc, and Gulf Air and to most unaffiliated carriers. It has undergone a £200million upgrade to enable it to accommodate 45airlines with an upgraded forecourt to reduce traffic congestion and improve security. Most flights using Terminal 4 are those from/to East Europe, Central Asia, North Africa and the Middle East as well as a few flights from/to Europe. An extended check-in area with renovated piers and departure lounges and a new baggage system were installed, and four new stands were built to accommodate the Airbus A380; Qatar Airways operates regular A380 flights. Terminal 5 Terminal 5 lies between the northern and southern runways at the western end of the Heathrow site and was opened by Queen Elizabeth II on 14 March 2008, 19 years after its inception. It opened to the public on 27 March 2008, and British Airways and its partner company Iberia have exclusive use of this terminal, which has 50gates, including three hardstands. The first passenger to enter Terminal 5 was a UK ex-pat from Kenya who passed through security at 04:30 on the day. He was presented with a boarding pass by British Airways CEO Willie Walsh for the first departing flight, BA302 to Paris. During the two weeks after its opening, operations were disrupted by problems with the terminal's IT systems, coupled with insufficient testing and staff training, which caused over 500 flights to be cancelled. Terminal 5 is exclusively used by British Airways as its global hub. However, because of the merger, between 25 March 2012 and 12 July 2022, Iberia's operations at Heathrow were moved to the terminal, making it the home of International Airlines Group. On 12 July 2022, Iberia's flight operations were moved back to Terminal 3. On 7 July 2020, American moved to Terminal 5, to allow for easier connections from American's transatlantic flights to British Airways flights during the pandemic. However, all American flights, except JFK, have returned to Terminal 3. China Southern Airlines used Terminal 5 due to the pandemic until it was relocated to Terminal 4 in November 2022. Built for £4.3billion, the terminal consists of a four-story main terminal building (Concourse A) and two satellite buildings linked to the main terminal by an underground people mover transit system. Concourse A is dedicated to British Airways's narrowbody fleet for flights around the UK and the rest of Europe, the first satellite (Concourse B) includes dedicated stands for BA and Iberia's widebody fleet except for the Airbus A380, and the second satellite (Concourse C), includes 7 dedicated aircraft stands for the A380. It became fully operational on 1 June 2011. 
Terminal 5 was voted Skytrax World's Best Airport Terminal 2014 in the Annual World Airport Awards. The main terminal building (Concourse A) has an area of while Concourse B covers . It has 60 aircraft stands and capacity for 30million passengers annually as well as more than 100shops and restaurants. It is also home to British Airways' Flagship lounge, the Concorde Room, alongside four further British Airways branded lounges. One of those lounges is the British Airways Arrivals Lounge which is located land-side. A further building, designated Concourse D and of similar size to Concourse C, may yet be built to the east of the existing site, providing up to another 16stands. Following British Airways' merger with Iberia, this may become a priority since the combined business will require accommodation at Heathrow under one roof to maximise the cost savings envisaged under the deal. A proposal for Concourse D was featured in Heathrow's Capital Investment Plan 2009. The transport network around the airport has been extended to cope with the increase in passenger numbers. New branches of both the Heathrow Express and the Underground's Piccadilly line serve a new shared Heathrow Terminal 5 station. A dedicated motorway spur links the terminal to the M25 (between junctions 14 and 15). The terminal has 3,800spaces multi-storey car park. A more distant long-stay car park for business passengers is connected to the terminal by a personal rapid transit system, the Heathrow Pod, which became operational in the spring of 2011. An automated people mover (APM) system, known as the Transit, transports airside passengers between the main terminal building and the satellite concourses. Terminal assignments As of 2025, Heathrow's four passenger terminals are assigned as follows: Terminal 2 (Queen’s terminal) Star Alliance SkyTeam (Scandinavian Airlines) Several non-aligned airlines Terminal 3 (Oceanic terminal) Oneworld (except Iberia, Malaysia Airlines, Royal Air Maroc and Qatar Airways) SkyTeam (Aeroméxico, China Airlines, Delta Air Lines, Middle East Airlines, Virgin Atlantic) British Airways (some destinations) HNA Aviation Group airlines (Hainan Airlines, Tianjin Airlines, Capital Airlines) Several non-aligned airlines Terminal 4 SkyTeam (except Aeroméxico, China Airlines, Delta Air Lines, Middle East Airlines, Virgin Atlantic and Scandinavian Airlines) Most non-aligned airlines Terminal 5 British Airways (most destinations) Iberia Following the opening of Terminal 5 in March 2008, a complex programme of terminal moves was implemented. This saw many airlines move to be grouped in terminals by airline alliance as far as possible. Following the opening of Phase 1 of the new Terminal 2 in June 2014, all Star Alliance member airlines (with the exception of new member Air India which moved in early 2017) along with Aer Lingus and Germanwings relocated to Terminal 2 in a phased process completed on 22 October 2014. Additionally, by 30 June 2015 all airlines left Terminal 1 in preparation for its demolition to make room for the construction of Phase 2 of Terminal 2. Some other airlines made further minor moves at a later point, e.g. Delta Air Lines merging all departures in Terminal 3 instead of a split between Terminals 3 and 4. Iberia moved to Terminal 5 on 1 June 2023. Terminal usage during the COVID-19 pandemic Heathrow Airport has four terminals with a total of 115 gates, 66 of which can support wide-body aircraft and 24 gates that can support an Airbus A380. 
Due to the COVID-19 pandemic, Heathrow's services were sharply reduced. It announced that as of 6 April 2020, the airport would be transitioning to single-runway operations and that it would be temporarily closing Terminals 3 and 4, moving all remaining flights into Terminals 2 or 5. Dual runway operations were restored in August 2020. Heathrow returned to single-runway operations on 9 November 2020. On 11 December 2020, Heathrow announced Terminal 4 would be shut until the end of 2021. Terminal 4 was used sporadically during 2021 for red list passengers who would be subject to mandatory hotel quarantine. Terminal 3 was reopened for use by Virgin Atlantic and Delta on 15 July 2021, and Terminal 4 was reopened to normal operations on 14 June 2022. Former Terminal 1 Terminal 1 opened in 1968 and was inaugurated by Queen Elizabeth II in April 1969. Terminal 1 was the Heathrow base for British Airways' domestic and European network and a few of its long haul routes before Terminal 5 opened. The acquisition of British Midland International (BMI) in 2012 by British Airways' owner International Airlines Group meant British Airways took over BMI's short-haul and medium-haul destinations from the terminal. Terminal 1 was also the main base for most Star Alliance members though some were also based at Terminal 3. Prior to the opening of Terminal 5, all domestic and Common Travel Area departures and arrivals needed to use Terminal 1, which had separate departure piers for these flights. Terminal 1 closed at the end of June 2015, and the site is now being used to extend Terminal 2 which opened in June 2014. A number of the newer gates used by Terminal 1 were built as part of the Terminal 2 development and are being retained. The last tenants along with British Airways were El Al, Icelandair (moved to Terminal 2 on 25 March 2015) and LATAM Brasil (the third to move in to Terminal 3 on 27 May 2015). British Airways was the last operator in Terminal 1. Two flights of this carrier, one departing to Hanover and one arriving from Baku, marked the terminal closure on 29 June 2015. British Airways operations have been relocated to Terminals 3 and 5. Airlines and destinations Passenger The following airlines operate regularly scheduled passenger flights at London Heathrow Airport: Cargo Air traffic and statistics Overview When ranked by passenger traffic, Heathrow is the eighth busiest airport internationally, behind Hartsfield–Jackson Atlanta International Airport, Dallas/Fort Worth International Airport, Denver International Airport, Chicago O'Hare International Airport, Dubai International Airport, Los Angeles International Airport, and Istanbul Airport, for the 12 months ending December 2022. London Heathrow Airport was noted as the best-connected airport globally in 2019 according to the OAG's Megahubs Index with a connectivity score of 317. Dominant carrier British Airways was recorded as holding a 51% share of flights at the hub. In 2015, Heathrow was the busiest airport in Europe in total passenger traffic, with 14% more passengers than Paris–Charles de Gaulle Airport and 22% more than Istanbul Atatürk Airport. Heathrow was the fourth busiest European airport by cargo traffic in 2013, after Frankfurt Airport, Paris–Charles de Gaulle and Amsterdam Airport Schiphol. In 2020, Heathrow's passenger numbers dropped sharply by over 72%, (a decrease of 58million travellers compared to 2019), due to the impact caused by restrictions and/or bans on travel caused by the global COVID-19 pandemic. 
More than four million passengers travelled on domestic and international flights in and out of Heathrow in March 2023, meaning it was once again the busiest airport in Europe after falling to the second spot in November 2022. On 29 November 2024, it was reported that Heathrow Airport is testing the usage of Artificial Intelligence, a system known as Amy, to assist air controllers in managing one of the world's busiest airports. The system, which relies heavily on the efficiency of coordination, is capable of tracking aircraft across a wide airspace with the combination of radar and video data collected from the ground. Annual traffic statistics Busiest routes Other facilities The head office of Heathrow Airport Holdings (formerly BAA Limited) is located in the Compass Centre by Heathrow's northern runway, a building that previously served as a British Airways flight crew centre. The World Business Centre Heathrow consists of three buildings. 1 World Business Centre houses offices of Heathrow Airport Holdings, Heathrow Airport itself, and Scandinavian Airlines. Previously International Airlines Group had its head office in 2 World Business Centre. At one time the British Airways head office was located within Heathrow Airport at Speedbird House before the completion of Waterside, the current BA head office in Harmondsworth, in June 1998. To the north of the airfield lies the Northern Perimeter Road, along which most of Heathrow's car rental agencies are based, and Bath Road, which runs parallel to it, but outside the airport campus. Ground transport Public transport Train There are three train services to Central London: Heathrow Express: a non-stop service direct to London Paddington; trains leave every 15 minutes for the 15-minute journey (21 minutes to and from Terminal 5). Trains depart from Heathrow Terminal 5 station or Heathrow Central station (Terminals 2 & 3). There is a free transfer service between Terminal 4 and Heathrow Central to connect with services from London and Terminal 5. Elizabeth line: a stopping service to Abbey Wood and Shenfield via Paddington and central London – 6 trains per hour, two originating from Terminal 5 and four originating from Terminal 4. Calls at Hayes & Harlington for connecting trains to Reading. Scheduled journey time into Central London is around 35 minutes. London Underground (Piccadilly line): four stations serve the airport: Terminal 2 & 3, Terminal 4 and Terminal 5 serve the passenger terminals; Hatton Cross serves the maintenance areas. The usual journey time from Heathrow Central to Central London is around 40–50 minutes. Bus and coach Many bus and coach services operate from Heathrow Central bus station, which serves Terminal 2 and Terminal 3. Services also operate from the bus stations located at Terminal 4 and Terminal 5. Inter-terminal transport Terminals 2 and 3 are within walking distance of each other. Transfers from Terminals 2 and 3 to Terminals 4 and 5 are provided by Elizabeth line and Heathrow Express trains and the London Underground Piccadilly line. Direct transfer between Terminals 4 and 5 is provided for free by route H30, introduced by Diamond Buses on 1 December 2022. Transit passengers remaining airside are provided with free dedicated transfer buses between terminals. These use dedicated airside tunnels (Heathrow Cargo Tunnel between Terminals 2/3 and 4, Heathrow Airside Road Tunnel between Terminals 2/3 and 5) to minimise disruption to aircraft operations. 
The Heathrow Pod personal rapid transit system shuttles passengers between Terminal 5 and the business car park using 21 small, driverless transportation pods. The pods are battery-powered and run on-demand on a four-kilometre track, each able to carry up to four adults, two children, and their luggage. Plans exist to extend the Pod system to connect Terminals 2 and 3 to remote car parks. An underground automated people mover system known as the Transit operates within Terminal 5, linking the main terminal with the satellite Terminals 5B and 5C. The Transit operates entirely airside using Bombardier Innovia APM 200 people mover vehicles. Hotel access Some hotels are directly connected to the terminals, and therefore are walkable without any transfers. Many more hotels are easily accessible using the local buses which depart from all terminals. The Hotel Hoppa bus network also connects all terminals to major hotels in the area. Taxi Taxis are available at all terminals. Car Heathrow is accessible via the nearby M4 motorway or A4 road (Terminals 2–3), the M25 motorway (Terminals 4 and 5) and the A30 road (Terminal 4). There are drop-off and pick-up areas at all terminals and short- and long-stay multi-storey car parks. All the Heathrow forecourts are drop-off only. There are further car parks, not run by Heathrow Airport Holdings, just outside the airport: the most recognisable is the National Car Parks facility, although there are many other options; these car parks are connected to the terminals by shuttle buses. Four parallel tunnels under the northern runway connect the M4 Heathrow spur and the A4 road to Terminals 2–3. The two larger tunnels are each two lanes wide and are used for motorised traffic. The two smaller tunnels were originally reserved for pedestrians and bicycles; to increase traffic capacity the cycle lanes have been modified to each take a single lane of cars, although bicycles still have priority over cars. Pedestrian access to the smaller tunnels has been discontinued, with the free bus services being used instead. Bicycle There are (mainly off-road) bicycle routes to some of the terminals. Free bicycle parking places are available in car parks 1 and 1A, at Terminal 4, and to the North and South of Terminal 5's Interchange Plaza. Cycling is not currently allowed through the main tunnel to access the central area and Terminals 2 and 3. Incidents and accidents On 3 March 1948, a Sabena Douglas DC-3 (registration: OO-AWH) crashed in fog. Three crew and 19 of the 22 passengers on board died. On 31 October 1950, a BEA Vickers Viking (registration: G-AHPN) crashed at Heathrow after hitting the runway during a go-around. Three crew and 25 passengers died. On 16 January 1955, a BEA Vickers Viscount (registered as G-AMOK) crashed into barriers whilst taking off in the fog from a disused runway strip parallel to the desired runway. There were two injuries. On 22 June 1955, a BOAC de Havilland Dove (registration: G-ALTM) crashed just short of the runway during a filming flight when the pilot shut down the incorrect engine. There were no casualties. On 1 October 1956, XA897, an Avro Vulcan strategic bomber of the Royal Air Force, crashed at Heathrow after an approach in bad weather. The Vulcan was the first to be delivered to the RAF and was returning from a demonstration flight to Australia and New Zealand. The pilot and co-pilot ejected and survived, but the four other occupants were killed. 
On 7 January 1960, a Vickers Viscount (registration: G-AOHU) of BEA was damaged beyond economic repair when the nose wheel collapsed on landing. A fire then developed and burnt out the fuselage. There were no casualties among the 59 people on board. On 27 October 1965, a BEA Vickers Vanguard (registration: G-APEE), flying from Edinburgh, crashed on Runway 28R while attempting to land in poor visibility. All 30 passengers and six crew on board died. On 8 April 1968, BOAC Flight 712 Boeing 707 (registration: G-ARWE), departing for Australia via Singapore, suffered an engine fire just after take-off. The engine fell from the wing into a nearby gravel pit in Staines, before the plane managed to perform an emergency landing with the wing on fire. However, the plane was consumed by fire once on the ground. Five people – four passengers and a flight attendant – died, while 122 survived. A flight attendant, Barbara Harrison, who helped with the evacuation, was posthumously awarded the George Cross. On 3 July 1968, the port flap operating rod of G-AMAD, an Airspeed Ambassador operated by BKS Air Transport failed due to fatigue, thereby allowing the port flaps to retract. This resulted in a rolling movement to the port which could not be controlled during the approach, causing the aircraft to contact the grass and swerve towards the terminal building. It hit two parked British European Airways Hawker Siddeley Trident aircraft, burst into flames and came to rest against the ground floor of the terminal building. Six of the eight crew died, as did eight horses on board. Trident G-ARPT was written off, and Trident G-ARPI was badly damaged, but subsequently repaired, only to be lost in the Staines crash in 1972. On 18 June 1972, Trident G-ARPI, operating as BEA548, crashed in a field close to the Crooked Billet Public House, Staines, two minutes after taking off. All 118 passengers and crew on board died. On 5 November 1997, the pilots of Virgin Atlantic Flight 024, Airbus A340-311 (registration: G-VSKY), performed an intentional belly landing on runway 27L after the left main landing gear jammed in a partially lowered position. Two crew and five passengers suffered minor injuries in the emergency evacuation. Investigators found that a brake torque pin had fallen out of the landing gear on takeoff from Los Angeles International Airport (LAX) because the pin and its retaining assembly were subject to higher than predicted loads while in service; the precise mode of failure could not be verified because only the pin, and not its retaining hardware, was found at LAX. The aircraft sustained substantial damage but was repaired and placed back in service. On 17 January 2008, a British Airways Boeing 777-236ER, (registration: G-YMMM), operating flight BA038 from Beijing, crash-landed short of runway 27L and stopped on the threshold, leading to 18 minor injuries. The impact tore off the right landing gear and pushed the left landing gear through the wing root; the aircraft was subsequently written off. The accident was attributed to a loss of thrust caused by fuel icing. On 28 September 2022, a Korean Air Boeing 777 preparing to take off collided with an Icelandair Boeing 757 which had just landed. The 777 crew aborted the takeoff; no injuries were reported, but the aircraft suffered minor damage. On 6 April 2024, the wing of an empty Virgin Atlantic Boeing 787 under tow at Terminal 3 clipped a parked British Airways plane preparing to depart from an adjacent gate with 121 passengers on board. 
The passengers transferred to a different British Airways aircraft and departed several hours later. Heathrow said there were no injuries, but both aircraft sustained damage. Terrorism and security incidents On 8 June 1968, James Earl Ray, the suspect in the 4 April 1968 assassination of Martin Luther King Jr., was captured, arrested, and extradited back to the United States at Heathrow Airport while attempting to leave the United Kingdom for Rhodesia (now Zimbabwe) on a false Canadian passport. On 6 September 1970, El Al Flight 219 experienced an attempted hijack by two PFLP members. One hijacker was killed and the other was subdued as the plane made an emergency landing at Heathrow Airport. On 19 May 1974, the IRA planted a series of bombs in the Terminal 1 car park. Two people were injured by the explosions. On 26 November 1983, the Brink's-Mat robbery occurred, in which 6,800 gold bars worth nearly £26million were taken from a vault near Heathrow. Only a small amount of the gold was recovered and only two men were convicted of the crime. On 17 April 1986, semtex explosives were found in the bag of a pregnant Irish woman attempting to board an El Al flight. The explosives had been given to her by her Jordanian boyfriend and the father of her unborn child Nizar Hindawi. The incident became known as the Hindawi Affair. On 21 December 1988, Pan Am Flight 103 exploded mid-air over the town of Lockerbie, killing all 259 onboard and eleven people on the ground. The flight originated from Frankfurt as a feeder flight with a change of aircraft at Heathrow and was on its transatlantic leg to New York's JFK airport at the time of the incident. An unaccompanied suitcase containing a boombox radio/cassette player which housed the explosive was checked in at Malta and forwarded as interline baggage for this flight at Frankfurt, wherein it made its way to the transatlantic leg. In 1994, over six days, Heathrow was targeted three times (8, 10, and 13 March) by the IRA, which fired 12 mortars. Heathrow was a symbolic target due to its importance to the UK economy, and much disruption was caused when areas of the airport were closed over the period. The gravity of the incident was heightened because the Queen was being flown back to Heathrow by the RAF on 10 March. In March 2002, thieves stole US$3million that had arrived on a South African Airways flight. Just a few weeks earlier, a similar amount of money was stolen from a British Airways flight that arrived from Bahrain. In February 2003, the British Army was deployed to Heathrow along with 1,000 police officers in response to intelligence reports suggesting that al-Qaeda terrorists might launch surface-to-air missile attacks at British or American airliners. On 17 May 2004, Scotland Yard's Flying Squad foiled an attempt by seven men to steal £40million in gold bullion and a similar quantity of cash from the Swissport warehouse at Heathrow. On 25 February 2008, Greenpeace activists protesting against the planned construction of a third runway managed to cross the ramp and climb atop a British Airways Airbus A320, which had just arrived from Manchester Airport. At about 09:45 GMT the protesters unveiled a "Climate Emergency – No Third Runway" banner over the aircraft's tailfin. By 11:00 GMT four arrests had been made. In October 2010, an Angolan national was being deported on a British Airways plane. Security guards were heavy-handed with him and they put him in a dangerous position, leading to asphyxia. He did not survive. 
On 13 July 2015, thirteen activists belonging to the climate change protest group Plane Stupid managed to break through the perimeter fence and get onto the northern runway. They chained themselves together in protest, disrupting hundreds of flights. All were eventually arrested. In June 2022, many protesters gathered at Heathrow and Gatwick airports to protest the UK-Rwanda deal. A flight which was supposed to carry asylum seekers from the UK to Rwanda was cancelled. In December 2022, a piece of uranium metal discovered in the airport triggered a counter-terrorism investigation. It was found in the scrap metal package originated from Pakistan via a passenger flight from Oman on 29 December. It was bound for an Iranian business with premises in the UK. Other incidents On 18 December 2010, snowfall (9 cm, according to the Heathrow Winter Resilience Enquiry) caused the closure of the entire airport, causing one of the largest incidents at Heathrow of all time. Some 4,000 flights were cancelled over five days and 9,500 passengers spent the night at Heathrow on 18 December following the initial snowfall. The problems were caused not only by snow on the runways but also by snow and ice on the 198 parking stands which were all occupied by aircraft. On 12 July 2013, the ELT on an Ethiopian Airlines Boeing 787 Dreamliner parked at Heathrow airport caught fire due to a short circuit. There were no passengers aboard and no injuries. From 12 September 2019, the climate change campaign group, Heathrow Pause attempted to disrupt flights into and out of Heathrow Airport in London by flying drones in the airport's exclusion zone. The action was unsuccessful in disrupting flights and nineteen people were arrested. Future expansion and plans Runway and terminal expansion There is a long history of expansion proposals for Heathrow since it was first designated as a civil airport. Following the cancellation of the Maplin project in 1974, a fourth terminal was proposed but expansion beyond this was ruled out. However, the Airports Inquiries of 1981–83 and the 1985 Airports Policy White Paper considered further expansion and, following a four-year-long public inquiry in 1995–99, Terminal 5 was approved. In 2003, after many studies and consultations, the Future of Air Transport White Paper was published which proposed a third runway at Heathrow, as well as a second runway at Stansted Airport. In January 2009, the Transport Secretary at the time, Geoff Hoon announced that the British government supported the expansion of Heathrow by building a third runway and a sixth terminal building. This decision followed the 2003 white paper on the future of air transport in the UK, and a public consultation in November 2007. This was a controversial decision which met with widespread opposition because of the expected greenhouse gas emissions, impact on local communities, as well as noise and air pollution concerns. Before the 2010 general election, the Conservative and Liberal Democrat parties announced that they would prevent the construction of any third runway or further material expansion of the airport's operating capacity. The Mayor of London, then Boris Johnson, took the position that London needs more airport capacity, favouring the construction of an entirely new airport in the Thames Estuary rather than expanding Heathrow. After the Conservative-Liberal Democrat coalition took power, it was announced that the third runway expansion was cancelled. 
Two years later, leading Conservatives were reported to have changed their minds on the subject. Another proposal for expanding Heathrow's capacity was the Heathrow Hub, which aims to extend both runways to a total length of about 7,000 metres and divide them into four so that they each provide two, full-length runways, allowing simultaneous take-offs and landings while decreasing noise levels. In July 2013, the airport submitted three new proposals for expansion to the Airports Commission, which was established to review airport capacity in the southeast of England. The Airports Commission was chaired by Sir Howard Davies. He, at the time of his appointment, was in the employ of GIC Private Limited (formerly known as Government Investment Corporation of Singapore) and a member of its International Advisory Board. GIC Private Limited was then (2012), as it remains today, one of Heathrow's principal owners. Sir Howard Davies resigned from these positions upon confirmation of his appointment to lead the Airports Commission, although it has been observed that he failed to identify these interests when invited to complete the Airports Commission's register of interests. Each of the three proposals that were to be considered by Sir Howard Davies's commission involved the construction of a third runway, either to the north, northwest or southwest of the airport. The commission released its interim report in December 2013, shortlisting three options: the north-west third runway option at Heathrow, extending an existing runway at Heathrow, and a second runway at Gatwick Airport. After this report was published, the government confirmed that no options had been ruled out for airport expansion in the South-east and that a new runway would not be built at Heathrow before 2015. The full report was published on 1 July 2015, and backed a third, north-west, runway at Heathrow. Reaction to the report was generally adverse, particularly from London Mayor Boris Johnson. One senior Conservative told Channel 4: "Howard Davies has dumped an utter steaming pile of poo on the Prime Minister's desk." On 25 October 2016, the government confirmed that Heathrow would be allowed to build a third runway; however, a final decision would not be taken until winter of 2017/18, after consultations and government votes. The earliest opening year would be 2025. On 5 June 2018, the UK Cabinet approved the third runway, with a full vote planned for Parliament. On 25 June 2018, the House of Commons voted, 415–119, in favour of the third runway. The bill received support from most MPs in the Conservative and Labour parties. A judicial review against the decision was launched by four London local authorities affected by the expansion—Wandsworth, Richmond, Hillingdon and Hammersmith and Fulham—in partnership with Greenpeace and London mayor Sadiq Khan. Khan previously stated he would take legal action if it were passed by Parliament. In February 2020, the Court of Appeal ruled that the plans for a third runway were illegal since they did not adequately take into account the government's commitments to the Paris climate agreement. However, this ruling was later overturned by the Supreme Court in December 2020. The plan stalled after a fall in passenger numbers during the COVID pandemic and concerns about investment costs, but came back into the spotlight after the Labour Party won the 2024 UK general election. The airport's CEO indicated in November 2024 that he would seek a "final" decision from the government by the end of 2025. 
New transport proposals Currently, all rail connections with Heathrow Airport run along an east–west alignment to and from central London, and a number of schemes have been proposed over the years to develop new rail transport links with other parts of London and with stations outside the city. This mainline rail service has been extended with the opening of the Elizabeth Line. A 2009 proposal to create a southern link with via the Waterloo–Reading line was abandoned in 2011 due to lack of funding and difficulties with a high number of level crossings on the route into London, and a plan to link Heathrow to the planned High Speed 2 (HS2) railway line (with a new station, ) was also dropped from the HS2 plans in March 2015. Among other schemes that have been considered is a rapid transport link between Heathrow and Gatwick Airports, known as Heathwick, which would allow the airports to operate jointly as an airline hub; In 2018, the Department for Transport began to invite proposals for privately funded rail links to Heathrow Airport. Projects being considered under this initiative include: the Western Rail Approach to Heathrow, a proposal for a spur from the Great Western Main Line to link Heathrow to , , the South West, South Wales and the West Midlands; Heathrow Southern Railway, a similar scheme to the abandoned Airtrack proposal, which would connect Terminal 5 station with or , , , Guildford and ;
Technology
Europe
null
13606
https://en.wikipedia.org/wiki/Half-life
Half-life
Half-life (symbol t½) is the time required for a quantity (of substance) to reduce to half of its initial value. The term is commonly used in nuclear physics to describe how quickly unstable atoms undergo radioactive decay or how long stable atoms survive. The term is also used more generally to characterize any type of exponential (or, rarely, non-exponential) decay. For example, the medical sciences refer to the biological half-life of drugs and other chemicals in the human body. The converse of half-life (in exponential growth) is doubling time. The original term, half-life period, dating to Ernest Rutherford's discovery of the principle in 1907, was shortened to half-life in the early 1950s. Rutherford applied the principle of a radioactive element's half-life in studies of age determination of rocks by measuring the decay period of radium to lead-206. Half-life is constant over the lifetime of an exponentially decaying quantity, and it is a characteristic unit for the exponential decay equation. The accompanying table shows the reduction of a quantity as a function of the number of half-lives elapsed. Probabilistic nature A half-life often describes the decay of discrete entities, such as radioactive atoms. In that case, it does not work to use the definition that states "half-life is the time required for exactly half of the entities to decay". For example, if there is just one radioactive atom, and its half-life is one second, there will not be "half of an atom" left after one second. Instead, the half-life is defined in terms of probability: "Half-life is the time required for exactly half of the entities to decay on average". In other words, the probability of a radioactive atom decaying within its half-life is 50%. For example, the accompanying image is a simulation of many identical atoms undergoing radioactive decay. Note that after one half-life there are not exactly one-half of the atoms remaining, only approximately, because of the random variation in the process. Nevertheless, when there are many identical atoms decaying (right boxes), the law of large numbers suggests that it is a very good approximation to say that half of the atoms remain after one half-life. Various simple exercises can demonstrate probabilistic decay, for example involving flipping coins or running a statistical computer program. Formulas for half-life in exponential decay An exponential decay can be described by any of the following four equivalent formulas: N(t) = N₀(1/2)^(t/t½) = N₀·2^(−t/t½) = N₀·e^(−t/τ) = N₀·e^(−λt), where N₀ is the initial quantity of the substance that will decay (this quantity may be measured in grams, moles, number of atoms, etc.), N(t) is the quantity that still remains and has not yet decayed after a time t, t½ is the half-life of the decaying quantity, τ is a positive number called the mean lifetime of the decaying quantity, and λ is a positive number called the decay constant of the decaying quantity. The three parameters t½, τ, and λ are directly related in the following way: t½ = τ·ln(2) = ln(2)/λ, where ln(2) is the natural logarithm of 2 (approximately 0.693). Half-life and reaction orders In chemical kinetics, the value of the half-life depends on the reaction order: Zero order kinetics The rate of this kind of reaction does not depend on the substrate concentration, [A]. Thus the concentration decreases linearly. 
The integrated rate law of zero order kinetics is: [A] = [A]₀ − kt. In order to find the half-life, we have to replace the concentration value for the initial concentration divided by 2: [A]₀/2 = [A]₀ − kt½, and isolate the time: t½ = [A]₀/(2k). This formula indicates that the half-life for a zero order reaction depends on the initial concentration and the rate constant. First order kinetics In first order reactions, the rate of reaction will be proportional to the concentration of the reactant. Thus the concentration will decrease exponentially as time progresses until it reaches zero, and the half-life will be constant, independent of concentration. The time for [A] to decrease from [A]₀ to [A]₀/2 in a first-order reaction is given by the following equation: kt½ = −ln(([A]₀/2)/[A]₀) = ln 2. It can be solved for t½ = ln(2)/k. For a first-order reaction, the half-life of a reactant is independent of its initial concentration. Therefore, if the concentration of A at some arbitrary stage of the reaction is [A], then it will have fallen to [A]/2 after a further interval of ln(2)/k. Hence, the half-life of a first order reaction is given as the following: t½ = ln(2)/k. The half-life of a first order reaction is independent of its initial concentration and depends solely on the reaction rate constant, k. Second order kinetics In second order reactions, the rate of reaction is proportional to the square of the concentration. By integrating this rate, it can be shown that the concentration of the reactant decreases following this formula: 1/[A] = kt + 1/[A]₀. We replace [A]₀/2 for [A] in order to calculate the half-life of the reactant and isolate the time of the half-life (t½): t½ = 1/(k[A]₀). This shows that the half-life of second order reactions depends on the initial concentration and rate constant. Decay by two or more processes Some quantities decay by two exponential-decay processes simultaneously. In this case, the actual half-life T½ can be related to the half-lives t₁ and t₂ that the quantity would have if each of the decay processes acted in isolation: 1/T½ = 1/t₁ + 1/t₂. For three or more processes, the analogous formula is: 1/T½ = 1/t₁ + 1/t₂ + 1/t₃ + ⋯. For a proof of these formulas, see Exponential decay § Decay by two or more processes. Examples There is a half-life describing any exponential-decay process. For example: As noted above, in radioactive decay the half-life is the length of time after which there is a 50% chance that an atom will have undergone nuclear decay. It varies depending on the atom type and isotope, and is usually determined experimentally. See List of nuclides. The current flowing through an RC circuit or RL circuit decays with a half-life of RC·ln(2) or (L/R)·ln(2), respectively. For this example the term half time tends to be used rather than "half-life", but they mean the same thing. In a chemical reaction, the half-life of a species is the time it takes for the concentration of that substance to fall to half of its initial value. In a first-order reaction the half-life of the reactant is ln(2)/λ, where λ (also denoted as k) is the reaction rate constant. In non-exponential decay The term "half-life" is almost exclusively used for decay processes that are exponential (such as radioactive decay or the other examples above), or approximately exponential (such as biological half-life discussed below). In a decay process that is not even close to exponential, the half-life will change dramatically while the decay is happening. 
In this situation it is generally uncommon to talk about half-life in the first place, but sometimes people will describe the decay in terms of its "first half-life", "second half-life", etc., where the first half-life is defined as the time required for decay from the initial value to 50%, the second half-life is from 50% to 25%, and so on. In biology and pharmacology A biological half-life or elimination half-life is the time it takes for a substance (drug, radioactive nuclide, or other) to lose one-half of its pharmacologic, physiologic, or radiological activity. In a medical context, the half-life may also describe the time that it takes for the concentration of a substance in blood plasma to reach one-half of its steady-state value (the "plasma half-life"). The relationship between the biological and plasma half-lives of a substance can be complex, due to factors including accumulation in tissues, active metabolites, and receptor interactions. While a radioactive isotope decays almost perfectly according to first order kinetics, where the rate constant is a fixed number, the elimination of a substance from a living organism usually follows more complex chemical kinetics. For example, the biological half-life of water in a human being is about 9 to 10 days, though this can be altered by behavior and other conditions. The biological half-life of caesium in human beings is between one and four months. The concept of a half-life has also been utilized for pesticides in plants, and certain authors maintain that pesticide risk and impact assessment models rely on and are sensitive to information describing dissipation from plants. In epidemiology, the concept of half-life can refer to the length of time for the number of incident cases in a disease outbreak to drop by half, particularly if the dynamics of the outbreak can be modeled exponentially.
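The relationships above (t½ = τ·ln 2 = ln 2/λ, N(t) = N₀(1/2)^(t/t½), and 1/T½ = 1/t₁ + 1/t₂ for simultaneous processes), as well as the probabilistic reading of the half-life, are easy to check numerically. The following Python sketch is illustrative only: the function names are our own, and the Monte Carlo check simply treats each atom's survival over a time t as an independent event with probability (1/2)^(t/t½).

```python
import math
import random

# Conversions implied by t_half = ln(2) * tau = ln(2) / lambda.
LN2 = math.log(2)

def half_life_from_decay_constant(decay_constant):
    """Half-life t_half from the decay constant lambda."""
    return LN2 / decay_constant

def half_life_from_mean_lifetime(mean_lifetime):
    """Half-life t_half from the mean lifetime tau."""
    return LN2 * mean_lifetime

def remaining_quantity(n0, t, t_half):
    """N(t) = N0 * (1/2)**(t / t_half) for an exponential decay."""
    return n0 * 0.5 ** (t / t_half)

def combined_half_life(*half_lives):
    """Effective half-life when several exponential decay processes act at once:
    1/T = 1/t1 + 1/t2 + ..."""
    return 1.0 / sum(1.0 / t for t in half_lives)

def simulate_decay(n_atoms, t_half, t, seed=0):
    """Monte Carlo check of the probabilistic definition: each atom survives a
    time t with probability (1/2)**(t / t_half)."""
    rng = random.Random(seed)
    p_survive = 0.5 ** (t / t_half)
    return sum(rng.random() < p_survive for _ in range(n_atoms))

if __name__ == "__main__":
    print(remaining_quantity(1000, t=10, t_half=5))      # 250.0 after two half-lives
    print(combined_half_life(6.0, 3.0))                  # 2.0
    print(simulate_decay(100_000, t_half=1.0, t=1.0))    # roughly 50,000, not exactly half
```

The last line reproduces the law-of-large-numbers point made earlier: with many atoms the surviving fraction after one half-life is very close to, but not exactly, one half.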
Physical sciences
Nuclear physics
Physics
13607
https://en.wikipedia.org/wiki/Humus
Humus
In classical soil science, humus is the dark organic matter in soil that is formed by the decomposition of plant and animal matter. It is a kind of soil organic matter. It is rich in nutrients and retains moisture in the soil. Humus is the Latin word for "earth" or "ground". In agriculture, "humus" sometimes also is used to describe mature or natural compost extracted from a woodland or other spontaneous source for use as a soil conditioner. It is also used to describe a topsoil horizon that contains organic matter (humus type, humus form, or humus profile). Humus has many nutrients that improve the health of soil, nitrogen being the most important. The ratio of carbon to nitrogen (C:N) of humus commonly ranges between 8:1 and 15:1 with the median being about 12:1. It also significantly improves (decreases) the bulk density of soil. Humus is amorphous and lacks the cellular structure characteristic of organisms. The solid residue of sewage sludge treatment, which is a secondary phase in the wastewater treatment process, is also called humus. When not judged contaminated by pathogens, toxic heavy metals, or persistent organic pollutants according to standard tolerance levels, it is sometimes composted and used as a soil amendment. Description The primary materials needed for the process of humification are plant detritus and dead animals and microbes, excreta of all soil-dwelling organisms, and also black carbon resulting from past fires. The composition of humus varies with that of primary (plant) materials and secondary microbial and animal products. The decomposition rate of the different compounds will affect the composition of the humus. It is difficult to define humus precisely because it is a very complex substance which is still not fully understood. Humus is different from decomposing soil organic matter. The latter looks rough and has visible remains of the original plant or animal matter. Fully humified humus, on the contrary, has a uniformly dark, spongy, and jelly-like appearance, and is amorphous; it may gradually decay over several years or persist for millennia. It has no determinate shape, structure, or quality. However, when examined under a microscope, humus may reveal tiny plant, animal, or microbial remains that have been mechanically, but not chemically, degraded. This suggests an ambiguous boundary between humus and soil organic matter, leading some authors to contest the use of the term humus and derived terms such as humic substances or humification, proposing the Soil Continuum Model (SCM). However, humus can be considered as having distinct properties, mostly linked to its richness in functional groups, justifying its maintenance as a specific term. Fully formed humus is essentially a collection of very large and complex molecules formed in part from lignin and other polyphenolic molecules of the original plant material (foliage, wood, bark), in part from similar molecules that have been produced by microbes. During decomposition processes these polyphenols are modified chemically so that they are able to join up with one another to form very large molecules. Some parts of these molecules are modified in such a way that protein molecules, amino acids, and amino sugars are able to attach themselves to the polyphenol "base" molecule. As protein contains both nitrogen and sulfur, this attachment gives humus a moderate content of these two important plant nutrients. 
Radiocarbon and other dating techniques have shown that the polyphenolic base of humus (mostly lignin and black carbon) can be very old, but the protein and carbohydrate attachments are much younger, while in the light of modern concepts and methods the situation appears much more complex and unpredictable than previously thought. It seems that microbes are able to pull protein off humus molecules rather more readily than they are able to break the polyphenolic base molecule itself. As protein is removed, its place may be taken by younger protein, or this younger protein may attach itself to another part of the humus molecule. The most useful functions of humus are in improving soil structure, all the more when associated with cations (e.g. calcium), and in providing a very large surface area that can hold nutrient elements until required by plants, an ion exchange function comparable to that of clay particles. Soil carbon sequestration is a major property of the soil, also considered as an ecosystem service. Only when it becomes stable and acquires multi-century permanence, mostly via multiple interactions with the soil matrix, should molecular soil humus be considered significant in removing the atmosphere's current carbon dioxide overload. There is little data available on the composition of humus because it is a complex mixture that is challenging for researchers to analyze. Researchers in the 1940s and 1960s tried using chemical separation to analyze plant and humic compounds in forest and agricultural soils, but this proved impossible because extractants interacted with the analysed organic matter and created many artefacts. Further research has been done in more recent years, though it remains an active field of study. 
The humus produced by humification is thus a mixture of compounds and complex biological chemicals of plant, animal, and microbial origin that has many functions and benefits in soil. Some judge earthworm humus (vermicompost) to be the optimal organic manure. Stability Much of the humus in most soils has persisted for more than 100 years, rather than having been decomposed into CO2, and can be regarded as stable; this organic matter has been protected from decomposition by microbial or enzyme action because it is hidden (occluded) inside small aggregates of soil particles, or tightly sorbed or complexed to clays. Most humus that is not protected in this way is decomposed within 10 years and can be regarded as less stable or more labile. The mixing activity of soil-consuming invertebrates (e.g. earthworms, termites, some millipedes) contribute to the stability of humus by favouring the formation of organo-mineral complexes with clay at the inside of their guts, hence more carbon sequestration in humus forms such as mull and amphi, with well-developed mineral-organic horizons, when compared with moder where most organic matter accumulates at the soil surface. Stable humus contributes few plant-available nutrients in soil, but it helps maintain its physical structure. A very stable form of humus is formed from the slow oxidation (redox) of soil carbon after the incorporation of finely powdered charcoal into the topsoil, suggested to result from the grinding and mixing activity of a tropical earthworm. This process is speculated to have been important in the formation of the unusually fertile Amazonian . However, some authors suggest that complex soil organic molecules may be much less stable than previously thought: "the available evidence does not support the formation of large-molecular-size and persistent 'humic substances' in soils. Instead, soil organic matter is a continuum of progressively decomposing organic compounds.″ Horizons Humus has a characteristic black or dark brown color and is organic due to an accumulation of organic carbon. Soil scientists use the capital letters O, A, B, C, and E to identify the master soil horizons, and lowercase letters for distinctions of these horizons. Most soils have three major horizons: the surface horizon (A), the subsoil (B), and the substratum (C). Some soils have an organic horizon (O) on the surface, but this horizon can also be buried. The master horizon (E) is used for subsurface horizons that have significantly lost minerals (eluviation). Bedrock, which is not soil, uses the letter R. The richness of soil horizons in humus determines their more or less dark color, generally decreasing from O to E, to the exception of deep horizons of podzolic soils enriched with colloidal humic substances which have been leached down the soil profile. Benefits of soil organic matter and humus The importance of chemically stable humus is thought by some to be the fertility it provides to soils in both a physical and chemical sense, though some agricultural experts put a greater focus on other features of it, such as its ability to suppress disease. It helps the soil retain moisture by increasing microporosity and encourages the formation of good soil structure. The incorporation of oxygen into large organic molecular assemblages generates many active, negatively charged sites that bind to positively charged ions (cations) of plant nutrients, making them more available to the plant by way of ion exchange. 
Humus allows soil organisms to feed and reproduce and is often described as the "life-force" of the soil. The process that converts soil organic matter into humus feeds the population of microorganisms and other creatures in the soil, and thus maintains high and healthy levels of soil life. The rate at which soil organic matter is converted into humus promotes (when fast, e.g. mull) or limits (when slow, e.g. mor) the coexistence of plants, animals, and microorganisms in the soil. "Effective humus" and "stable humus" are additional sources of nutrients for microbes: the former provides a readily available supply, and the latter acts as a long-term storage reservoir. Decomposition of dead plant material causes complex organic compounds to be slowly oxidized (lignin-like humus) or to decompose into simpler forms (sugars and amino sugars, and aliphatic and phenolic organic acids), which are further transformed into microbial biomass (microbial humus) or reorganized and further oxidized into humic assemblages (fulvic acids and humic acids), which bind to clay minerals and metal hydroxides. The ability of plants to absorb humic substances with their roots and metabolize them has long been debated. There is now a consensus that humus functions hormonally rather than simply nutritionally in plant physiology, and that organic substances exuded by roots and transformed into humus by soil organisms are an evolved strategy by which plants "talk" to the soil. Humus is a negatively charged colloidal substance which increases the cation-exchange capacity of soil, hence its ability to store nutrients by chelation. While these nutrient cations are available to plants, they are held in the soil and prevented from being leached by rain or irrigation. Humus can hold the equivalent of 80–90% of its weight in moisture and therefore increases the soil's capacity to withstand drought. The biochemical structure of humus enables it to moderate, i.e. buffer, excessive acidic or alkaline soil conditions. During humification, microbes secrete sticky, gum-like mucilages; these contribute to the crumby structure (tilth) of the soil by adhering particles together and allowing greater aeration of the soil. Toxic substances such as heavy metals and excess nutrients can be chelated, i.e., bound to the organic molecules of humus, and so prevented from leaching away. The dark, usually brown or black, color of humus helps to warm cold soils in spring. Humus can contribute to climate change mitigation through its carbon sequestration potential. Artificial humic acid and artificial fulvic acid synthesized from agricultural litter can increase the content of dissolved organic matter and total organic carbon in soil.
Physical sciences
Soil science
Earth science
13609
https://en.wikipedia.org/wiki/Hydrogen%20bond
Hydrogen bond
In chemistry, a hydrogen bond (or H-bond) is primarily an electrostatic force of attraction between a hydrogen (H) atom which is covalently bonded to a more electronegative "donor" atom or group (Dn), and another electronegative atom bearing a lone pair of electrons—the hydrogen bond acceptor (Ac). Such an interacting system is generally denoted , where the solid line denotes a polar covalent bond, and the dotted or dashed line indicates the hydrogen bond. The most frequent donor and acceptor atoms are the period 2 elements nitrogen (N), oxygen (O), and fluorine (F). Hydrogen bonds can be intermolecular (occurring between separate molecules) or intramolecular (occurring among parts of the same molecule). The energy of a hydrogen bond depends on the geometry, the environment, and the nature of the specific donor and acceptor atoms and can vary between 1 and 40 kcal/mol. This makes them somewhat stronger than a van der Waals interaction, and weaker than fully covalent or ionic bonds. This type of bond can occur in inorganic molecules such as water and in organic molecules like DNA and proteins. Hydrogen bonds are responsible for holding materials such as paper and felted wool together, and for causing separate sheets of paper to stick together after becoming wet and subsequently drying. The hydrogen bond is also responsible for many of the physical and chemical properties of compounds of N, O, and F that seem unusual compared with other similar structures. In particular, intermolecular hydrogen bonding is responsible for the high boiling point of water (100 °C) compared to the other group-16 hydrides that have much weaker hydrogen bonds. Intramolecular hydrogen bonding is partly responsible for the secondary and tertiary structures of proteins and nucleic acids. Bonding Definitions and general characteristics In a hydrogen bond, the electronegative atom not covalently attached to the hydrogen is named the proton acceptor, whereas the one covalently bound to the hydrogen is named the proton donor. This nomenclature is recommended by the IUPAC. The hydrogen of the donor is protic and therefore can act as a Lewis acid and the acceptor is the Lewis base. Hydrogen bonds are represented as system, where the dots represent the hydrogen bond. Liquids that display hydrogen bonding (such as water) are called associated liquids. Hydrogen bonds arise from a combination of electrostatics (multipole-multipole and multipole-induced multipole interactions), covalency (charge transfer by orbital overlap), and dispersion (London forces). In weaker hydrogen bonds, hydrogen atoms tend to bond to elements such as sulfur (S) or chlorine (Cl); even carbon (C) can serve as a donor, particularly when the carbon or one of its neighbors is electronegative (e.g., in chloroform, aldehydes and terminal acetylenes). Gradually, it was recognized that there are many examples of weaker hydrogen bonding involving donor other than N, O, or F and/or acceptor Ac with electronegativity approaching that of hydrogen (rather than being much more electronegative). Although weak (≈1 kcal/mol), "non-traditional" hydrogen bonding interactions are ubiquitous and influence structures of many kinds of materials. The definition of hydrogen bonding has gradually broadened over time to include these weaker attractive interactions. In 2011, an IUPAC Task Group recommended a modern evidence-based definition of hydrogen bonding, which was published in the IUPAC journal Pure and Applied Chemistry. 
This definition specifies: Bond strength Hydrogen bonds can vary in strength from weak (1–2 kJ/mol) to strong (161.5 kJ/mol in the bifluoride ion, ). Typical enthalpies in vapor include: (161.5 kJ/mol or 38.6 kcal/mol), illustrated uniquely by (29 kJ/mol or 6.9 kcal/mol), illustrated water-ammonia (21 kJ/mol or 5.0 kcal/mol), illustrated water-water, alcohol-alcohol (13 kJ/mol or 3.1 kcal/mol), illustrated by ammonia-ammonia (8 kJ/mol or 1.9 kcal/mol), illustrated water-amide (18 kJ/mol or 4.3 kcal/mol) The strength of intermolecular hydrogen bonds is most often evaluated by measurements of equilibria between molecules containing donor and/or acceptor units, most often in solution. The strength of intramolecular hydrogen bonds can be studied with equilibria between conformers with and without hydrogen bonds. The most important method for the identification of hydrogen bonds also in complicated molecules is crystallography, sometimes also NMR-spectroscopy. Structural details, in particular distances between donor and acceptor which are smaller than the sum of the van der Waals radii can be taken as indication of the hydrogen bond strength. One scheme gives the following somewhat arbitrary classification: those that are 15 to 40 kcal/mol, 5 to 15 kcal/mol, and >0 to 5 kcal/mol are considered strong, moderate, and weak, respectively. Hydrogen bonds involving C-H bonds are both very rare and weak. Resonance assisted hydrogen bond The resonance assisted hydrogen bond (commonly abbreviated as RAHB) is a strong type of hydrogen bond. It is characterized by the π-delocalization that involves the hydrogen and cannot be properly described by the electrostatic model alone. This description of the hydrogen bond has been proposed to describe unusually short distances generally observed between or . Structural details The distance is typically ≈110 pm, whereas the distance is ≈160 to 200 pm. The typical length of a hydrogen bond in water is 197 pm. The ideal bond angle depends on the nature of the hydrogen bond donor. The following hydrogen bond angles between a hydrofluoric acid donor and various acceptors have been determined experimentally: Spectroscopy Strong hydrogen bonds are revealed by downfield shifts in the 1H NMR spectrum. For example, the acidic proton in the enol tautomer of acetylacetone appears at  15.5, which is about 10 ppm downfield of a conventional alcohol. In the IR spectrum, hydrogen bonding shifts the stretching frequency to lower energy (i.e. the vibration frequency decreases). This shift reflects a weakening of the bond. Certain hydrogen bonds - improper hydrogen bonds - show a blue shift of the stretching frequency and a decrease in the bond length. H-bonds can also be measured by IR vibrational mode shifts of the acceptor. The amide I mode of backbone carbonyls in α-helices shifts to lower frequencies when they form H-bonds with side-chain hydroxyl groups. The dynamics of hydrogen bond structures in water can be probed by this OH stretching vibration. In the hydrogen bonding network in protic organic ionic plastic crystals (POIPCs), which are a type of phase change material exhibiting solid-solid phase transitions prior to melting, variable-temperature infrared spectroscopy can reveal the temperature dependence of hydrogen bonds and the dynamics of both the anions and the cations. The sudden weakening of hydrogen bonds during the solid-solid phase transition seems to be coupled with the onset of orientational or rotational disorder of the ions. 
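As a rough illustration of the classification quoted above (strong: 15 to 40 kcal/mol, moderate: 5 to 15 kcal/mol, weak: >0 to 5 kcal/mol), the sketch below bins a bond energy into those bands. The function name and the kJ-to-kcal conversion helper are our own additions for illustration, not part of any standard chemistry library, and the scheme itself is, as the text notes, somewhat arbitrary.

```python
KCAL_PER_KJ = 1 / 4.184  # unit conversion: 1 kcal = 4.184 kJ

def classify_hydrogen_bond(energy_kcal_per_mol):
    """Rough strength class following the scheme quoted above
    (strong: 15-40, moderate: 5-15, weak: >0-5 kcal/mol)."""
    e = energy_kcal_per_mol
    if 15 <= e <= 40:
        return "strong"
    if 5 <= e < 15:
        return "moderate"
    if 0 < e < 5:
        return "weak"
    return "outside the scheme"

# Example: the bifluoride ion's bond, 161.5 kJ/mol, is about 38.6 kcal/mol
print(classify_hydrogen_bond(161.5 * KCAL_PER_KJ))  # strong
print(classify_hydrogen_bond(1.5))                  # weak
```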
Theoretical considerations Hydrogen bonding is of persistent theoretical interest. According to a modern description, the hydrogen bond integrates both the intermolecular O:H lone pair (":") nonbond and the intramolecular polar-covalent bond associated with repulsive coupling. Quantum chemical calculations of the relevant interresidue potential constants (compliance constants) revealed large differences between individual H bonds of the same type. For example, the central interresidue hydrogen bond between guanine and cytosine is much stronger in comparison to the bond between the adenine-thymine pair. Theoretically, the bond strength of the hydrogen bonds can be assessed using the NCI (non-covalent interactions) index, which allows a visualization of these non-covalent interactions, as its name indicates, using the electron density of the system. Interpretations of the anisotropies in the Compton profile of ordinary ice claim that the hydrogen bond is partly covalent. However, this interpretation was challenged and subsequently clarified. Most generally, the hydrogen bond can be viewed as a metric-dependent electrostatic scalar field between two or more intermolecular bonds. This is slightly different from the intramolecular bound states of, for example, covalent or ionic bonds. However, hydrogen bonding is generally still a bound state phenomenon, since the interaction energy has a net negative sum. The initial theory of hydrogen bonding proposed by Linus Pauling suggested that the hydrogen bonds had a partial covalent nature. This interpretation remained controversial until NMR techniques demonstrated information transfer between hydrogen-bonded nuclei, a feat that would only be possible if the hydrogen bond contained some covalent character. History The concept of hydrogen bonding once was challenging. Linus Pauling credits T. S. Moore and T. F. Winmill with the first mention of the hydrogen bond, in 1912. Moore and Winmill used the hydrogen bond to account for the fact that trimethylammonium hydroxide is a weaker base than tetramethylammonium hydroxide. The description of hydrogen bonding in its better-known setting, water, came some years later, in 1920, from Latimer and Rodebush. In that paper, Latimer and Rodebush cited the work of a fellow scientist at their laboratory, Maurice Loyal Huggins, saying, "Mr. Huggins of this laboratory in some work as yet unpublished, has used the idea of a hydrogen kernel held between two atoms as a theory in regard to certain organic compounds." Hydrogen bonds in small molecules Water A ubiquitous example of a hydrogen bond is found between water molecules. In a discrete water molecule, there are two hydrogen atoms and one oxygen atom. The simplest case is a pair of water molecules with one hydrogen bond between them, which is called the water dimer and is often used as a model system. When more molecules are present, as is the case with liquid water, more bonds are possible because the oxygen of one water molecule has two lone pairs of electrons, each of which can form a hydrogen bond with a hydrogen on another water molecule. This can repeat such that every water molecule is H-bonded with up to four other molecules, as shown in the figure (two through its two lone pairs, and two through its two hydrogen atoms). Hydrogen bonding strongly affects the crystal structure of ice, helping to create an open hexagonal lattice. 
The density of ice is less than the density of water at the same temperature; thus, the solid phase of water floats on the liquid, unlike most other substances. Liquid water's high boiling point is due to the high number of hydrogen bonds each molecule can form, relative to its low molecular mass. Owing to the difficulty of breaking these bonds, water has a very high boiling point, melting point, and viscosity compared to otherwise similar liquids not conjoined by hydrogen bonds. Water is unique because its oxygen atom has two lone pairs and two hydrogen atoms, meaning that the total number of bonds of a water molecule is up to four. The number of hydrogen bonds formed by a molecule of liquid water fluctuates with time and temperature. From TIP4P liquid water simulations at 25 °C, it was estimated that each water molecule participates in an average of 3.59 hydrogen bonds. At 100 °C, this number decreases to 3.24 due to the increased molecular motion and decreased density, while at 0 °C, the average number of hydrogen bonds increases to 3.69. Another study found a much smaller number of hydrogen bonds: 2.357 at 25 °C. Defining and counting the hydrogen bonds is not straightforward however. Because water may form hydrogen bonds with solute proton donors and acceptors, it may competitively inhibit the formation of solute intermolecular or intramolecular hydrogen bonds. Consequently, hydrogen bonds between or within solute molecules dissolved in water are almost always unfavorable relative to hydrogen bonds between water and the donors and acceptors for hydrogen bonds on those solutes. Hydrogen bonds between water molecules have an average lifetime of 10−11 seconds, or 10 picoseconds. Bifurcated and over-coordinated hydrogen bonds in water A single hydrogen atom can participate in two hydrogen bonds. This type of bonding is called "bifurcated" (split in two or "two-forked"). It can exist, for instance, in complex organic molecules. It has been suggested that a bifurcated hydrogen atom is an essential step in water reorientation. Acceptor-type hydrogen bonds (terminating on an oxygen's lone pairs) are more likely to form bifurcation (it is called overcoordinated oxygen, OCO) than are donor-type hydrogen bonds, beginning on the same oxygen's hydrogens. Other liquids For example, hydrogen fluoride—which has three lone pairs on the F atom but only one H atom—can form only two bonds; (ammonia has the opposite problem: three hydrogen atoms but only one lone pair). H-F***H-F***H-F Further manifestations of solvent hydrogen bonding Increase in the melting point, boiling point, solubility, and viscosity of many compounds can be explained by the concept of hydrogen bonding. Negative azeotropy of mixtures of HF and water. The fact that ice is less dense than liquid water is due to a crystal structure stabilized by hydrogen bonds. Dramatically higher boiling points of , , and HF compared to the heavier analogues , , and HCl, where hydrogen-bonding is absent. Viscosity of anhydrous phosphoric acid and of glycerol. Dimer formation in carboxylic acids and hexamer formation in hydrogen fluoride, which occur even in the gas phase, resulting in gross deviations from the ideal gas law. Pentamer formation of water and alcohols in apolar solvents. Hydrogen bonds in polymers Hydrogen bonding plays an important role in determining the three-dimensional structures and the properties adopted by many proteins. Compared to the , , and bonds that comprise most polymers, hydrogen bonds are far weaker, perhaps 5%. 
Thus, hydrogen bonds can be broken by chemical or mechanical means while retaining the basic structure of the polymer backbone. This hierarchy of bond strengths (covalent bonds being stronger than hydrogen-bonds being stronger than van der Waals forces) is relevant in the properties of many materials. DNA In these macromolecules, bonding between parts of the same macromolecule causes it to fold into a specific shape, which helps determine the molecule's physiological or biochemical role. For example, the double helical structure of DNA is due largely to hydrogen bonding between its base pairs (as well as pi stacking interactions), which link one complementary strand to the other and enable replication. Proteins In the secondary structure of proteins, hydrogen bonds form between the backbone oxygens and amide hydrogens. When the spacing of the amino acid residues participating in a hydrogen bond occurs regularly between positions i and i + 4, an alpha helix is formed. When the spacing is less, between positions i and i + 3, then a 3₁₀ helix is formed. When two strands are joined by hydrogen bonds involving alternating residues on each participating strand, a beta sheet is formed. Hydrogen bonds also play a part in forming the tertiary structure of protein through interaction of R-groups.
Physical sciences
Chemical bonds
null
13645
https://en.wikipedia.org/wiki/Horse
Horse
The horse (Equus ferus caballus) is a domesticated, one-toed, hoofed mammal. It belongs to the taxonomic family Equidae and is one of two extant subspecies of Equus ferus. The horse has evolved over the past 45 to 55 million years from a small multi-toed creature, Eohippus, into the large, single-toed animal of today. Humans began domesticating horses around 4000 BCE, and their domestication is believed to have been widespread by 3000 BCE. Horses in the subspecies caballus are domesticated, although some domesticated populations live in the wild as feral horses. These feral populations are not true wild horses, which are horses that never have been domesticated. There is an extensive, specialized vocabulary used to describe equine-related concepts, covering everything from anatomy to life stages, size, colors, markings, breeds, locomotion, and behavior. Horses are adapted to run, allowing them to quickly escape predators, and possess a good sense of balance and a strong fight-or-flight response. Related to this need to flee from predators in the wild is an unusual trait: horses are able to sleep both standing up and lying down, with younger horses tending to sleep significantly more than adults. Female horses, called mares, carry their young for approximately 11 months and a young horse, called a foal, can stand and run shortly following birth. Most domesticated horses begin training under a saddle or in a harness between the ages of two and four. They reach full adult development by age five, and have an average lifespan of between 25 and 30 years. Horse breeds are loosely divided into three categories based on general temperament: spirited "hot bloods" with speed and endurance; "cold bloods", such as draft horses and some ponies, suitable for slow, heavy work; and "warmbloods", developed from crosses between hot bloods and cold bloods, often focusing on creating breeds for specific riding purposes, particularly in Europe. There are more than 300 breeds of horse in the world today, developed for many different uses. Horses and humans interact in a wide variety of sport competitions and non-competitive recreational pursuits as well as in working activities such as police work, agriculture, entertainment, and therapy. Horses were historically used in warfare, from which a wide variety of riding and driving techniques developed, using many different styles of equipment and methods of control. Many products are derived from horses, including meat, milk, hide, hair, bone, and pharmaceuticals extracted from the urine of pregnant mares. Humans provide domesticated horses with food, water, and shelter, as well as attention from specialists such as veterinarians and farriers. Biology Lifespan and life stages Depending on breed, management and environment, the modern domestic horse has a life expectancy of 25 to 30 years. Uncommonly, a few animals live into their 40s and, occasionally, beyond. The oldest verifiable record was "Old Billy", a 19th-century horse that lived to the age of 62. In modern times, Sugar Puff, who had been listed in Guinness World Records as the world's oldest living pony, died in 2007 at age 56. Regardless of a horse or pony's actual birth date, for most competition purposes a year is added to its age each January 1 of each year in the Northern Hemisphere and each August 1 in the Southern Hemisphere. The exception is in endurance riding, where the minimum age to compete is based on the animal's actual calendar age. 
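The January 1 / August 1 aging convention described above can be expressed as a small calculation. The sketch below is a minimal, hypothetical implementation of that convention only; real breed registries and racing authorities apply their own detailed rules, and the function name is our own.

```python
from datetime import date

def official_age(birth_date, on_date, hemisphere="north"):
    """Competition ('official') age under the convention described above:
    every horse becomes a year older on January 1 (Northern Hemisphere)
    or August 1 (Southern Hemisphere), regardless of its actual birthday.
    Illustrative sketch only."""
    rollover_month = 1 if hemisphere == "north" else 8

    def season_year(d):
        # Year of the most recent rollover date on or before d.
        return d.year if (d.month, d.day) >= (rollover_month, 1) else d.year - 1

    return max(0, season_year(on_date) - season_year(birth_date))

# A foal born in March 2020, checked on 31 December 2020:
print(official_age(date(2020, 3, 15), date(2020, 12, 31)))           # 0 in the Northern Hemisphere
print(official_age(date(2020, 3, 15), date(2020, 12, 31), "south"))  # 1, having aged up on 1 August
```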
The following terminology is used to describe horses of various ages: Foal A horse of either sex less than one year old. A nursing foal is sometimes called a suckling, and a foal that has been weaned is called a weanling. Most domesticated foals are weaned at five to seven months of age, although foals can be weaned at four months with no adverse physical effects. Yearling A horse of either sex that is between one and two years old. Colt A male horse under the age of four. A common terminology error is to call any young horse a "colt", when the term actually only refers to young male horses. Filly A female horse under the age of four. Mare A female horse four years old and older. Stallion A non-castrated male horse four years old and older. The term "horse" is sometimes used colloquially to refer specifically to a stallion. Gelding A castrated male horse of any age. In horse racing, these definitions may differ: For example, in the British Isles, Thoroughbred horse racing defines colts and fillies as less than five years old. However, Australian Thoroughbred racing defines colts and fillies as less than four years old. Size and measurement The height of horses is measured at the highest point of the withers, where the neck meets the back. This point is used because it is a stable point of the anatomy, unlike the head or neck, which move up and down in relation to the body of the horse. In English-speaking countries, the height of horses is often stated in units of hands and inches: one hand is equal to . The height is expressed as the number of full hands, followed by a point, then the number of additional inches, and ending with the abbreviation "h" or "hh" (for "hands high"). Thus, a horse described as "15.2 h" is 15 hands plus 2 inches, for a total of in height. The size of horses varies by breed, but also is influenced by nutrition. Light-riding horses usually range in height from and can weigh from . Larger-riding horses usually start at about and often are as tall as , weighing from . Heavy or draft horses are usually at least high and can be as tall as high. They can weigh from about . The largest horse in recorded history was probably a Shire horse named Mammoth, who was born in 1848. He stood high and his peak weight was estimated at . The record holder for the smallest horse ever is Thumbelina, a fully mature miniature horse affected by dwarfism. She was tall and weighed . Ponies Ponies are taxonomically the same animals as horses. The distinction between a horse and pony is commonly drawn on the basis of height, especially for competition purposes. However, height alone is not dispositive; the difference between horses and ponies may also include aspects of phenotype, including conformation and temperament. The traditional standard for height of a horse or a pony at maturity is . An animal or over is usually considered to be a horse and one less than a pony, but there are many exceptions to the traditional standard. In Australia, ponies are considered to be those under . For competition in the Western division of the United States Equestrian Federation, the cutoff is . The International Federation for Equestrian Sports, the world governing body for horse sport, uses metric measurements and defines a pony as being any horse measuring less than at the withers without shoes, which is just over , and , with shoes. Height is not the sole criterion for distinguishing horses from ponies. 
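The hands notation described above is straightforward to convert programmatically, using the standard definition of one hand as 4 inches (10.16 cm). The parser below is an illustrative sketch with our own function names, covering only the common "hands.inches" form such as "15.2 hh".

```python
def hands_to_inches(height):
    """Convert a height written in hands, e.g. '15.2' or '15.2 hh', to inches.
    One hand is 4 inches; the digit after the point counts extra inches (0-3)."""
    value = height.lower().replace("hh", "").replace("h", "").strip()
    if "." in value:
        whole, extra = value.split(".")
        extra_inches = int(extra)
        if not 0 <= extra_inches <= 3:
            raise ValueError("inches part of a hands measurement must be 0-3")
    else:
        whole, extra_inches = value, 0
    return int(whole) * 4 + extra_inches

def inches_to_cm(inches):
    return inches * 2.54

h = hands_to_inches("15.2 hh")
print(h, round(inches_to_cm(h), 1))  # 62 inches, about 157.5 cm
```

Note that the digit after the point is a count of inches rather than a decimal fraction, which is why "15.2" means 15 hands plus 2 inches, as stated above.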
Breed registries for horses that typically produce individuals both under and over consider all animals of that breed to be horses regardless of their height. Conversely, some pony breeds may have features in common with horses, and individual animals may occasionally mature at over , but are still considered to be ponies. Ponies often exhibit thicker manes, tails, and overall coat. They also have proportionally shorter legs, wider barrels, heavier bone, shorter and thicker necks, and short heads with broad foreheads. They may have calmer temperaments than horses and also a high level of intelligence that may or may not be used to cooperate with human handlers. Small size, by itself, is not an exclusive determinant. For example, the Shetland pony which averages , is considered a pony.Conversely, breeds such as the Falabella and other miniature horses, which can be no taller than , are classified by their registries as very small horses, not ponies. Genetics Horses have 64 chromosomes. The horse genome was sequenced in 2007. It contains 2.7 billion DNA base pairs, which is larger than the dog genome, but smaller than the human genome or the bovine genome. The map is available to researchers. Colors and markings Horses exhibit a diverse array of coat colors and distinctive markings, described by a specialized vocabulary. Often, a horse is classified first by its coat color, before breed or sex. Horses of the same color may be distinguished from one another by white markings, which, along with various spotting patterns, are inherited separately from coat color. Many genes that create horse coat colors and patterns have been identified. Current genetic tests can identify at least 13 different alleles influencing coat color, and research continues to discover new genes linked to specific traits. The basic coat colors of chestnut and black are determined by the gene controlled by the Melanocortin 1 receptor, also known as the "extension gene" or "red factor". Its recessive form is "red" (chestnut) and its dominant form is black. Additional genes control suppression of black color to point coloration that results in a bay, spotting patterns such as pinto or leopard, dilution genes such as palomino or dun, as well as greying, and all the other factors that create the many possible coat colors found in horses. Horses that have a white coat color are often mislabeled; a horse that looks "white" is usually a middle-aged or older gray. Grays are born a darker shade, get lighter as they age, but usually keep black skin underneath their white hair coat (with the exception of pink skin under white markings). The only horses properly called white are born with a predominantly white hair coat and pink skin, a fairly rare occurrence. Different and unrelated genetic factors can produce white coat colors in horses, including several different alleles of dominant white and the sabino-1 gene. However, there are no "albino" horses, defined as having both pink skin and red eyes. Reproduction and development Gestation lasts approximately 340 days, with an average range 320–370 days, and usually results in one foal; twins are rare. Horses are a precocial species, and foals are capable of standing and running within a short time following birth. Foals are usually born in the spring. The estrous cycle of a mare occurs roughly every 19–22 days and occurs from early spring into autumn. Most mares enter an anestrus period during the winter and thus do not cycle in this period. 
Foals are generally weaned from their mothers between four and six months of age. Horses, particularly colts, are sometimes physically capable of reproduction at about 18 months, but domesticated horses are rarely allowed to breed before the age of three, especially females. Horses four years old are considered mature, although the skeleton normally continues to develop until the age of six; maturation also depends on the horse's size, breed, sex, and quality of care. Larger horses have larger bones; therefore, not only do the bones take longer to form bone tissue, but the epiphyseal plates are larger and take longer to convert from cartilage to bone. These plates convert after the other parts of the bones, and are crucial to development. Depending on maturity, breed, and work expected, horses are usually put under saddle and trained to be ridden between the ages of two and four. Although Thoroughbred race horses are put on the track as young as the age of two in some countries, horses specifically bred for sports such as dressage are generally not put under saddle until they are three or four years old, because their bones and muscles are not solidly developed. For endurance riding competition, horses are not deemed mature enough to compete until they are a full 60 calendar months (five years) old. Anatomy Skeletal system The horse skeleton averages 205 bones. A significant difference between the horse skeleton and that of a human is the lack of a collarbone—the horse's forelimbs are attached to the spinal column by a powerful set of muscles, tendons, and ligaments that attach the shoulder blade to the torso. The horse's four legs and hooves are also unique structures. Their leg bones are proportioned differently from those of a human. For example, the body part that is called a horse's "knee" is actually made up of the carpal bones that correspond to the human wrist. Similarly, the hock contains bones equivalent to those in the human ankle and heel. The lower leg bones of a horse correspond to the bones of the human hand or foot, and the fetlock (incorrectly called the "ankle") is actually the proximal sesamoid bones between the cannon bones (a single equivalent to the human metacarpal or metatarsal bones) and the proximal phalanges, located where one finds the "knuckles" of a human. A horse also has no muscles in its legs below the knees and hocks, only skin, hair, bone, tendons, ligaments, cartilage, and the assorted specialized tissues that make up the hoof. Hooves The critical importance of the feet and legs is summed up by the traditional adage, "no foot, no horse". The horse hoof begins with the distal phalanges, the equivalent of the human fingertip or tip of the toe, surrounded by cartilage and other specialized, blood-rich soft tissues such as the laminae. The exterior hoof wall and horn of the sole is made of keratin, the same material as a human fingernail. The result is that a horse, weighing on average , travels on the same bones as would a human on tiptoe. For the protection of the hoof under certain conditions, some horses have horseshoes placed on their feet by a professional farrier. The hoof continually grows, and in most domesticated horses needs to be trimmed (and horseshoes reset, if used) every five to eight weeks, though the hooves of horses in the wild wear down and regrow at a rate suitable for their terrain. Teeth Horses are adapted to grazing. 
In an adult horse, there are 12 incisors at the front of the mouth, adapted to biting off the grass or other vegetation. There are 24 teeth adapted for chewing, the premolars and molars, at the back of the mouth. Stallions and geldings have four additional teeth just behind the incisors, a type of canine teeth called "tushes". Some horses, both male and female, will also develop one to four very small vestigial teeth in front of the molars, known as "wolf" teeth, which are generally removed because they can interfere with the bit. There is an empty interdental space between the incisors and the molars where the bit rests directly on the gums, or "bars" of the horse's mouth when the horse is bridled. An estimate of a horse's age can be made from looking at its teeth. The teeth continue to erupt throughout life and are worn down by grazing. Therefore, the incisors show changes as the horse ages; they develop a distinct wear pattern, changes in tooth shape, and changes in the angle at which the chewing surfaces meet. This allows a very rough estimate of a horse's age, although diet and veterinary care can also affect the rate of tooth wear. Digestion Horses are herbivores with a digestive system adapted to a forage diet of grasses and other plant material, consumed steadily throughout the day. Therefore, compared to humans, they have a relatively small stomach but very long intestines to facilitate a steady flow of nutrients. A horse will eat of food per day and, under normal use, drink of water. Horses are not ruminants, having only one stomach, like humans. But unlike humans, they can digest cellulose, a major component of grass, through the process of hindgut fermentation. Cellulose fermentation by symbiotic bacteria and other microbes occurs in the cecum and the large intestine. Horses cannot vomit, so digestion problems can quickly cause colic, a leading cause of death. Although horses do not have a gallbladder, they tolerate high amounts of fat in their diet. Senses The horses' senses are based on their status as prey animals, where they must be aware of their surroundings at all times. The equine eye is one of the largest of any land mammal. Horses are lateral-eyed, meaning that their eyes are positioned on the sides of their heads. This means that horses have a range of vision of more than 350°, with approximately 65° of this being binocular vision and the remaining 285° monocular vision. Horses have excellent day and night vision, but they have two-color, or dichromatic vision; their color vision is somewhat like red-green color blindness in humans, where certain colors, especially red and related colors, appear as a shade of green. Their sense of smell, while much better than that of humans, is not quite as good as that of a dog. It is believed to play a key role in the social interactions of horses as well as detecting other key scents in the environment. Horses have two olfactory centers. The first system is in the nostrils and nasal cavity, which analyze a wide range of odors. The second, located under the nasal cavity, are the vomeronasal organs, also called Jacobson's organs. These have a separate nerve pathway to the brain and appear to primarily analyze pheromones. A horse's hearing is good, and the pinna of each ear can rotate up to 180°, giving the potential for 360° hearing without having to move the head. 
Noise affects the behavior of horses and certain kinds of noise may contribute to stress—a 2013 study in the UK indicated that stabled horses were calmest in a quiet setting, or if listening to country or classical music, but displayed signs of nervousness when listening to jazz or rock music. This study also recommended keeping music under a volume of 21 decibels. An Australian study found that stabled racehorses listening to talk radio had a higher rate of gastric ulcers than horses listening to music, and racehorses stabled where a radio was played had a higher overall rate of ulceration than horses stabled where there was no radio playing. Horses have a great sense of balance, due partly to their ability to feel their footing and partly to highly developed proprioception—the unconscious sense of where the body and limbs are at all times. A horse's sense of touch is well-developed. The most sensitive areas are around the eyes, ears, and nose. Horses are able to sense contact as subtle as an insect landing anywhere on the body. Horses have an advanced sense of taste, which allows them to sort through fodder and choose what they would most like to eat, and their prehensile lips can easily sort even small grains. Horses generally will not eat poisonous plants, however, there are exceptions; horses will occasionally eat toxic amounts of poisonous plants even when there is adequate healthy food. Movement All horses move naturally with four basic gaits: the four-beat walk, which averages ; the two-beat trot or jog at (faster for harness racing horses); the canter or lope, a three-beat gait that is ; the gallop, which averages , but the world record for a horse galloping over a short, sprint distance is . Besides these basic gaits, some horses perform a two-beat pace, instead of the trot. There also are several four-beat 'ambling' gaits that are approximately the speed of a trot or pace, though smoother to ride. These include the lateral rack, running walk, and tölt as well as the diagonal fox trot. Ambling gaits are often genetic in some breeds, known collectively as gaited horses. These horses replace the trot with one of the ambling gaits. Behavior Horses are prey animals with a strong fight-or-flight response. Their first reaction to a threat is to startle and usually flee, although they will stand their ground and defend themselves when flight is impossible or if their young are threatened. They also tend to be curious; when startled, they will often hesitate an instant to ascertain the cause of their fright, and may not always flee from something that they perceive as non-threatening. Most light horse riding breeds were developed for speed, agility, alertness and endurance; natural qualities that extend from their wild ancestors. However, through selective breeding, some breeds of horses are quite docile, particularly certain draft horses. Horses are herd animals, with a clear hierarchy of rank, led by a dominant individual, usually a mare. They are also social creatures that are able to form companionship attachments to their own species and to other animals, including humans. They communicate in various ways, including vocalizations such as nickering or whinnying, mutual grooming, and body language. Many horses will become difficult to manage if they are isolated, but with training, horses can learn to accept a human as a companion, and thus be comfortable away from other horses. 
However, when confined with insufficient companionship, exercise, or stimulation, individuals may develop stable vices, an assortment of bad habits, mostly stereotypies of psychological origin, that include wood chewing, wall kicking, "weaving" (rocking back and forth), and other problems. Intelligence and learning Studies have indicated that horses perform a number of cognitive tasks on a daily basis, meeting mental challenges that include food procurement and identification of individuals within a social system. They also have good spatial discrimination abilities. They are naturally curious and apt to investigate things they have not seen before. Studies have assessed equine intelligence in areas such as problem solving, speed of learning, and memory. Horses excel at simple learning, but also are able to use more advanced cognitive abilities that involve categorization and concept learning. They can learn using habituation, desensitization, classical conditioning, and operant conditioning, and positive and negative reinforcement. One study has indicated that horses can differentiate between "more or less" if the quantity involved is less than four. Domesticated horses may face greater mental challenges than wild horses, because they live in artificial environments that prevent instinctive behavior whilst also learning tasks that are not natural. Horses are animals of habit that respond well to regimentation, and respond best when the same routines and techniques are used consistently. One trainer believes that "intelligent" horses are reflections of intelligent trainers who effectively use response conditioning techniques and positive reinforcement to train in the style that best fits with an individual animal's natural inclinations. Temperament Horses are mammals. As such, they are warm-blooded, or endothermic creatures, as opposed to cold-blooded, or poikilothermic animals. However, these words have developed a separate meaning in the context of equine terminology, used to describe temperament, not body temperature. For example, the "hot-bloods", such as many race horses, exhibit more sensitivity and energy, while the "cold-bloods", such as most draft breeds, are quieter and calmer. Sometimes "hot-bloods" are classified as "light horses" or "riding horses", with the "cold-bloods" classified as "draft horses" or "work horses". "Hot blooded" breeds include "oriental horses" such as the Akhal-Teke, Arabian horse, Barb, and now-extinct Turkoman horse, as well as the Thoroughbred, a breed developed in England from the older oriental breeds. Hot bloods tend to be spirited, bold, and learn quickly. They are bred for agility and speed. They tend to be physically refined—thin-skinned, slim, and long-legged. The original oriental breeds were brought to Europe from the Middle East and North Africa when European breeders wished to infuse these traits into racing and light cavalry horses. Muscular, heavy draft horses are known as "cold bloods." They are bred not only for strength, but also to have the calm, patient temperament needed to pull a plow or a heavy carriage full of people. They are sometimes nicknamed "gentle giants". Well-known draft breeds include the Belgian and the Clydesdale. Some, like the Percheron, are lighter and livelier, developed to pull carriages or to plow large fields in drier climates. Others, such as the Shire, are slower and more powerful, bred to plow fields with heavy, clay-based soils. The cold-blooded group also includes some pony breeds. 
"Warmblood" breeds, such as the Trakehner or Hanoverian, developed when European carriage and war horses were crossed with Arabians or Thoroughbreds, producing a riding horse with more refinement than a draft horse, but greater size and milder temperament than a lighter breed. Certain pony breeds with warmblood characteristics have been developed for smaller riders. Warmbloods are considered a "light horse" or "riding horse". Today, the term "Warmblood" refers to a specific subset of sport horse breeds that are used for competition in dressage and show jumping. Strictly speaking, the term "warm blood" refers to any cross between cold-blooded and hot-blooded breeds. Examples include breeds such as the Irish Draught or the Cleveland Bay. The term was once used to refer to breeds of light riding horse other than Thoroughbreds or Arabians, such as the Morgan horse. Sleep patterns Horses are able to sleep both standing up and lying down. In an adaptation from life in the wild, horses are able to enter light sleep by using a "stay apparatus" in their legs, allowing them to doze without collapsing. Horses sleep better when in groups because some animals will sleep while others stand guard to watch for predators. A horse kept alone will not sleep well because its instincts are to keep a constant eye out for danger. Unlike humans, horses do not sleep in a solid, unbroken period of time, but take many short periods of rest. Horses spend four to fifteen hours a day in standing rest, and from a few minutes to several hours lying down. Total sleep time in a 24-hour period may range from several minutes to a couple of hours, mostly in short intervals of about 15 minutes each. The average sleep time of a domestic horse is said to be 2.9 hours per day. Horses must lie down to reach REM sleep. They only have to lie down for an hour or two every few days to meet their minimum REM sleep requirements. However, if a horse is never allowed to lie down, after several days it will become sleep-deprived, and in rare cases may suddenly collapse because it slips, involuntarily, into REM sleep while still standing. This condition differs from narcolepsy, although horses may also suffer from that disorder. Taxonomy and evolution The horse adapted to survive in areas of wide-open terrain with sparse vegetation, surviving in an ecosystem where other large grazing animals, especially ruminants, could not. Horses and other equids are odd-toed ungulates of the order Perissodactyla, a group of mammals dominant during the Tertiary period. In the past, this order contained 14 families, but only three—Equidae (the horse and related species), Tapiridae (the tapir), and Rhinocerotidae (the rhinoceroses)—have survived to the present day. The earliest known member of the family Equidae was the Hyracotherium, which lived between 45 and 55 million years ago, during the Eocene period. It had 4 toes on each front foot, and 3 toes on each back foot. The extra toe on the front feet soon disappeared with the Mesohippus, which lived 32 to 37 million years ago. Over time, the extra side toes shrank in size until they vanished. All that remains of them in modern horses is a set of small vestigial bones on the leg below the knee, known informally as splint bones. Their legs also lengthened as their toes disappeared until they were a hooved animal capable of running at great speed. By about 5 million years ago, the modern Equus had evolved. 
Equid teeth also evolved from browsing on soft, tropical plants to adapt to browsing of drier plant material, then to grazing of tougher plains grasses. Thus proto-horses changed from leaf-eating forest-dwellers to grass-eating inhabitants of semi-arid regions worldwide, including the steppes of Eurasia and the Great Plains of North America. By about 15,000 years ago, Equus ferus was a widespread holarctic species. Horse bones from this time period, the late Pleistocene, are found in Europe, Eurasia, Beringia, and North America. Yet between 10,000 and 7,600 years ago, the horse became extinct in North America. The reasons for this extinction are not fully known, but one theory notes that extinction in North America paralleled human arrival. Another theory points to climate change, noting that approximately 12,500 years ago, the grasses characteristic of a steppe ecosystem gave way to shrub tundra, which was covered with unpalatable plants. Wild species surviving into modern times A truly wild horse is a species or subspecies with no ancestors that were ever successfully domesticated. Therefore, most "wild" horses today are actually feral horses, animals that escaped or were turned loose from domestic herds and the descendants of those animals. Only two wild subspecies, the tarpan and the Przewalski's horse, survived into recorded history and only the latter survives today. The Przewalski's horse (Equus ferus przewalskii), named after the Russian explorer Nikolai Przhevalsky, is a rare Asian animal. It is also known as the Mongolian wild horse; Mongolian people know it as the taki, and the Kyrgyz people call it a kirtag. The subspecies was presumed extinct in the wild between 1969 and 1992, while a small breeding population survived in zoos around the world. In 1992, it was reestablished in the wild by the conservation efforts of numerous zoos. Today, a small wild breeding population exists in Mongolia. There are additional animals still maintained at zoos throughout the world. Their status as a truly wild horse was called into question when domestic horses of the 5,000-year-old Botai culture of Central Asia were found to be more closely related to Przewalski's horses than to E. f. caballus. The study raised the possibility that modern Przewalski's horses could be the feral descendants of the domestic Botai horses. The study concluded that the Botai animals appear to have been an independent domestication attempt and apparently unsuccessful in terms of genetic markers carrying through to modern domesticated equines. However, the question of whether all Przewalski's horses descend from this population is also unresolved, as only one of seven modern Przewalski’s horses in the study shared this ancestry. It may also be that both the Botai horses and the modern Przewalski's horses descend separately from the same ancient wild Przewalski's horse population. The tarpan or European wild horse (Equus ferus ferus) was found in Europe and much of Asia. It survived into the historical era, but became extinct in 1909, when the last captive died in a Russian zoo. Thus, the genetic line was lost. Attempts have been made to recreate the tarpan, which resulted in horses with outward physical similarities, but nonetheless descended from domesticated ancestors and not true wild horses. Periodically, populations of horses in isolated areas are speculated to be relict populations of wild horses, but generally have been proven to be feral or domestic. 
For example, the Riwoche horse of Tibet was proposed as such, but testing did not reveal genetic differences from domesticated horses. Similarly, the Sorraia of Portugal was proposed as a direct descendant of the Tarpan on the basis of shared characteristics, but genetic studies have shown that the Sorraia is more closely related to other horse breeds, and that the outward similarity is an unreliable measure of relatedness. Other modern equids Besides the horse, there are six other species of genus Equus in the Equidae family. These are the ass or donkey, Equus asinus; the mountain zebra, Equus zebra; plains zebra, Equus quagga; Grévy's zebra, Equus grevyi; the kiang, Equus kiang; and the onager, Equus hemionus. Horses can crossbreed with other members of their genus. The most common hybrid is the mule, a cross between a "jack" (male donkey) and a mare. A related hybrid, a hinny, is a cross between a stallion and a "jenny" (female donkey). Other hybrids include the zorse, a cross between a zebra and a horse. With rare exceptions, most hybrids are sterile and cannot reproduce. Domestication and history Domestication of the horse most likely took place in central Asia prior to 3500 BCE. Two major sources of information are used to determine where and when the horse was first domesticated and how the domesticated horse spread around the world. The first source is based on palaeological and archaeological discoveries; the second source is a comparison of DNA obtained from modern horses to that from bones and teeth of ancient horse remains. The earliest archaeological evidence for attempted domestication of the horse comes from sites in Ukraine and Kazakhstan, dating to approximately 4000–3500 BCE. However the horses domesticated at the Botai culture in Kazakhstan were Przewalski's horses and not the ancestors of modern horses. By 3000 BCE, the horse was completely domesticated and by 2000 BCE there was a sharp increase in the number of horse bones found in human settlements in northwestern Europe, indicating the spread of domesticated horses throughout the continent. The most recent, but most irrefutable evidence of domestication comes from sites where horse remains were interred with chariots in graves of the Indo-European Sintashta and Petrovka cultures 2100 BCE. A 2021 genetic study suggested that most modern domestic horses descend from the lower Volga-Don region. Ancient horse genomes indicate that these populations influenced almost all local populations as they expanded rapidly throughout Eurasia, beginning about 4,200 years ago. It also shows that certain adaptations were strongly selected due to riding, and that equestrian material culture, including Sintashta spoke-wheeled chariots spread with the horse itself. Domestication is also studied by using the genetic material of present-day horses and comparing it with the genetic material present in the bones and teeth of horse remains found in archaeological and palaeological excavations. The variation in the genetic material shows that very few wild stallions contributed to the domestic horse, while many mares were part of early domesticated herds. This is reflected in the difference in genetic variation between the DNA that is passed on along the paternal, or sire line (Y-chromosome) versus that passed on along the maternal, or dam line (mitochondrial DNA). There are very low levels of Y-chromosome variability, but a great deal of genetic variation in mitochondrial DNA. 
There is also regional variation in mitochondrial DNA due to the inclusion of wild mares in domestic herds. Another characteristic of domestication is an increase in coat color variation. In horses, this increased dramatically between 5000 and 3000 BCE. Before the availability of DNA techniques to resolve the questions related to the domestication of the horse, various hypotheses were proposed. One classification was based on body types and conformation, suggesting the presence of four basic prototypes that had adapted to their environment prior to domestication. Another hypothesis held that the four prototypes originated from a single wild species and that all different body types were entirely a result of selective breeding after domestication. However, the lack of a detectable substructure in the horse has resulted in a rejection of both hypotheses. Feral populations Feral horses are born and live in the wild, but are descended from domesticated animals. Many populations of feral horses exist throughout the world. Studies of feral herds have provided useful insights into the behavior of prehistoric horses, as well as greater understanding of the instincts and behaviors that drive horses that live in domesticated conditions. There are also semi-feral horses in many parts of the world, such as Dartmoor and the New Forest in the UK, where the animals are all privately owned but live for significant amounts of time in "wild" conditions on undeveloped, often public, lands. Owners of such animals often pay a fee for grazing rights. Breeds The concept of purebred bloodstock and a controlled, written breed registry has come to be particularly significant and important in modern times. Sometimes purebred horses are incorrectly or inaccurately called "thoroughbreds". Thoroughbred is a specific breed of horse, while a "purebred" is a horse (or any other animal) with a defined pedigree recognized by a breed registry. Horse breeds are groups of horses with distinctive characteristics that are transmitted consistently to their offspring, such as conformation, color, performance ability, or disposition. These inherited traits result from a combination of natural crosses and artificial selection methods. Horses have been selectively bred since their domestication. An early example of people who practiced selective horse breeding were the Bedouin, who had a reputation for careful practices, keeping extensive pedigrees of their Arabian horses and placing great value upon pure bloodlines. These pedigrees were originally transmitted via an oral tradition. Breeds developed due to a need for "form to function", the necessity to develop certain characteristics in order to perform a particular type of work. Thus, a powerful but refined breed such as the Andalusian developed as riding horses with an aptitude for dressage. Heavy draft horses were developed out of a need to perform demanding farm work and pull heavy wagons. Other horse breeds had been developed specifically for light agricultural work, carriage and road work, various sport disciplines, or simply as pets. Some breeds developed through centuries of crossing other breeds, while others descended from a single foundation sire, or other limited or restricted foundation bloodstock. One of the earliest formal registries was General Stud Book for Thoroughbreds, which began in 1791 and traced back to the foundation bloodstock for the breed. There are more than 300 horse breeds in the world today. 
Interaction with humans Worldwide, horses play a role within human cultures and have done so for millennia. Horses are used for leisure activities, sports, and working purposes. The Food and Agriculture Organization (FAO) estimates that in 2008, there were almost 59,000,000 horses in the world, with around 33,500,000 in the Americas, 13,800,000 in Asia and 6,300,000 in Europe and smaller portions in Africa and Oceania. There are estimated to be 9,500,000 horses in the United States alone. The American Horse Council estimates that horse-related activities have a direct impact on the economy of the United States of over $39 billion, and when indirect spending is considered, the impact is over $102 billion. In a 2004 "poll" conducted by Animal Planet, more than 50,000 viewers from 73 countries voted for the horse as the world's 4th favorite animal. Communication between human and horse is paramount in any equestrian activity; to aid this process horses are usually ridden with a saddle on their backs to assist the rider with balance and positioning, and a bridle or related headgear to assist the rider in maintaining control. Sometimes horses are ridden without a saddle, and occasionally, horses are trained to perform without a bridle or other headgear. Many horses are also driven, which requires a harness, bridle, and some type of vehicle. Sport Historically, equestrians honed their skills through games and races. Equestrian sports provided entertainment for crowds and honed the excellent horsemanship that was needed in battle. Many sports, such as dressage, eventing, and show jumping, have origins in military training, which were focused on control and balance of both horse and rider. Other sports, such as rodeo, developed from practical skills such as those needed on working ranches and stations. Sport hunting from horseback evolved from earlier practical hunting techniques. Horse racing of all types evolved from impromptu competitions between riders or drivers. All forms of competition, requiring demanding and specialized skills from both horse and rider, resulted in the systematic development of specialized breeds and equipment for each sport. The popularity of equestrian sports through the centuries has resulted in the preservation of skills that would otherwise have disappeared after horses stopped being used in combat. Horses are trained to be ridden or driven in a variety of sporting competitions. Examples include show jumping, dressage, three-day eventing, competitive driving, endurance riding, gymkhana, rodeos, and fox hunting. Horse shows, which have their origins in medieval European fairs, are held around the world. They host a huge range of classes, covering all of the mounted and harness disciplines, as well as "In-hand" classes where the horses are led, rather than ridden, to be evaluated on their conformation. The method of judging varies with the discipline, but winning usually depends on style and ability of both horse and rider. Sports such as polo do not judge the horse itself, but rather use the horse as a partner for human competitors as a necessary part of the game. Although the horse requires specialized training to participate, the details of its performance are not judged, only the result of the rider's actions—be it getting a ball through a goal or some other task. 
Examples of these sports of partnership between human and horse include jousting, in which the main goal is for one rider to unseat the other, and buzkashi, a team game played throughout Central Asia, the aim being to capture a goat carcass while on horseback. Horse racing is an equestrian sport and major international industry, watched in almost every nation of the world. There are three types: "flat" racing; steeplechasing, i.e. racing over jumps; and harness racing, where horses trot or pace while pulling a driver in a small, light cart known as a sulky. A major part of horse racing's economic importance lies in the gambling associated with it. Work There are certain jobs that horses do very well, and no technology has yet developed to fully replace them. For example, mounted police horses are still effective for certain types of patrol duties and crowd control. Cattle ranches still require riders on horseback to round up cattle that are scattered across remote, rugged terrain. Search and rescue organizations in some countries depend upon mounted teams to locate people, particularly hikers and children, and to provide disaster relief assistance. Horses can also be used in areas where it is necessary to avoid vehicular disruption to delicate soil, such as nature reserves. They may also be the only form of transport allowed in wilderness areas. Horses are quieter than motorized vehicles. Law enforcement officers such as park rangers or game wardens may use horses for patrols, and horses or mules may also be used for clearing trails or other work in areas of rough terrain where vehicles are less effective. Although machinery has replaced horses in many parts of the world, an estimated 100 million horses, donkeys and mules are still used for agriculture and transportation in less developed areas. This number includes around 27 million working animals in Africa alone. Some land management practices such as cultivating and logging can be efficiently performed with horses. In agriculture, less fossil fuel is used and increased environmental conservation occurs over time with the use of draft animals such as horses. Logging with horses can result in reduced damage to soil structure and less damage to trees due to more selective logging. Warfare Horses have been used in warfare for most of recorded history. The first archaeological evidence of horses used in warfare dates to between 4000 and 3000 BCE, and the use of horses in warfare was widespread by the end of the Bronze Age. Although mechanization has largely replaced the horse as a weapon of war, horses are still seen today in limited military uses, mostly for ceremonial purposes, or for reconnaissance and transport activities in areas of rough terrain where motorized vehicles are ineffective. Horses have been used in the 21st century by the Janjaweed militias in the War in Darfur. Entertainment and culture Modern horses are often used to reenact many of their historical work purposes. Horses are used, complete with equipment that is authentic or a meticulously recreated replica, in various live action historical reenactments of specific periods of history, especially recreations of famous battles. Horses are also used to preserve cultural traditions and for ceremonial purposes. Countries such as the United Kingdom still use horse-drawn carriages to convey royalty and other VIPs to and from certain culturally significant events. 
Public exhibitions are another example, such as the Budweiser Clydesdales, seen in parades and other public settings, a team of draft horses that pull a beer wagon similar to that used before the invention of the modern motorized truck. Horses are frequently used in television, films and literature. They are sometimes featured as a major character in films about particular animals, but also used as visual elements that assure the accuracy of historical stories. Both live horses and iconic images of horses are used in advertising to promote a variety of products. The horse frequently appears in coats of arms in heraldry, in a variety of poses and equipment. The mythologies of many cultures, including Greco-Roman, Hindu, Islamic, and Germanic, include references to both normal horses and those with wings or additional limbs, and multiple myths also call upon the horse to draw the chariots of the Moon and Sun. The horse also appears in the 12-year cycle of animals in the Chinese zodiac related to the Chinese calendar. Horses serve as the inspiration for many modern automobile names and logos, including the Ford Pinto, Ford Bronco, Ford Mustang, Hyundai Equus, Hyundai Pony, Mitsubishi Starion, Subaru Brumby, Mitsubishi Colt/Dodge Colt, Pinzgauer, Steyr-Puch Haflinger, Pegaso, Porsche, Rolls-Royce Camargue, Ferrari, Carlsson, Kamaz, Corre La Licorne, Iran Khodro, Eicher, and Baojun. Indian TVS Motor Company also uses a horse on their motorcycles & scooters. Therapeutic use People of all ages with physical and mental disabilities obtain beneficial results from an association with horses. Therapeutic riding is used to mentally and physically stimulate disabled persons and help them improve their lives through improved balance and coordination, increased self-confidence, and a greater feeling of freedom and independence. The benefits of equestrian activity for people with disabilities has also been recognized with the addition of equestrian events to the Paralympic Games and recognition of para-equestrian events by the International Federation for Equestrian Sports (FEI). Hippotherapy and therapeutic horseback riding are names for different physical, occupational, and speech therapy treatment strategies that use equine movement. In hippotherapy, a therapist uses the horse's movement to improve their patient's cognitive, coordination, balance, and fine motor skills, whereas therapeutic horseback riding uses specific riding skills. Horses also provide psychological benefits to people whether they actually ride or not. "Equine-assisted" or "equine-facilitated" therapy is a form of experiential psychotherapy that uses horses as companion animals to assist people with mental illness, including anxiety disorders, psychotic disorders, mood disorders, behavioral difficulties, and those who are going through major life changes. There are also experimental programs using horses in prison settings. Exposure to horses appears to improve the behavior of inmates and help reduce recidivism when they leave. Products Horses are raw material for many products made by humans throughout history, including byproducts from the slaughter of horses as well as materials collected from living horses. Products collected from living horses include mare's milk, used by people with large horse herds, such as the Mongols, who let it ferment to produce kumis. Horse blood was once used as food by the Mongols and other nomadic tribes, who found it a convenient source of nutrition when traveling. 
Drinking their own horses' blood allowed the Mongols to ride for extended periods of time without stopping to eat. The drug Premarin is a mixture of estrogens extracted from the urine of pregnant mares (pregnant mares' urine), and was previously a widely used drug for hormone replacement therapy. The tail hair of horses can be used for making bows for string instruments such as the violin, viola, cello, and double bass. Horse meat has been used as food for humans and carnivorous animals throughout the ages. Approximately 5 million horses are slaughtered each year for meat worldwide. It is eaten in many parts of the world, though consumption is taboo in some cultures, and a subject of political controversy in others. Horsehide leather has been used for boots, gloves, jackets, baseballs, and baseball gloves. Horse hooves can also be used to produce animal glue. Horse bones can be used to make implements. Specifically, in Italian cuisine, the horse tibia is sharpened into a probe called a spinto, which is used to test the readiness of a (pig) ham as it cures. In Asia, the saba is a horsehide vessel used in the production of kumis. Care Horses are grazing animals, and their major source of nutrients is good-quality forage from hay or pasture. They can consume approximately 2% to 2.5% of their body weight in dry feed each day. Therefore, a adult horse could eat up to of food. Sometimes, concentrated feed such as grain is fed in addition to pasture or hay, especially when the animal is very active. When grain is fed, equine nutritionists recommend that 50% or more of the animal's diet by weight should still be forage. Horses require a plentiful supply of clean water, a minimum of per day. Although horses are adapted to live outside, they require shelter from the wind and precipitation, which can range from a simple shed or shelter to an elaborate stable. Horses require routine hoof care from a farrier, as well as vaccinations to protect against various diseases, and dental examinations from a veterinarian or a specialized equine dentist. If horses are kept inside in a barn, they require regular daily exercise for their physical health and mental well-being. When turned outside, they require well-maintained, sturdy fences to be safely contained. Regular grooming is also helpful to help the horse maintain good health of the hair coat and underlying skin. Climate change
Biology and health sciences
Biology
null
13654
https://en.wikipedia.org/wiki/Heat%20engine
Heat engine
A heat engine is a system that converts heat to usable energy, particularly mechanical energy, which can then be used to do mechanical work. While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical, since at least the late 19th century. The heat engine does this by bringing a working substance from a higher state temperature to a lower state temperature. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag. In general, an engine is any machine that converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics. Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission, absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications. Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models. Overview In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two. In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature. 
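As a numerical illustration of the limit just stated, the short Python sketch below computes the maximum theoretical efficiency from hot-side and cold-side temperatures; the 800 K / 300 K figures are illustrative assumptions, not data for any particular engine.

```python
def carnot_efficiency(t_hot_k: float, t_cold_k: float) -> float:
    """Maximum theoretical (Carnot) efficiency for a heat engine
    operating between t_hot_k and t_cold_k, both in kelvins."""
    if t_cold_k <= 0 or t_hot_k <= t_cold_k:
        raise ValueError("require 0 < t_cold_k < t_hot_k (absolute temperatures)")
    return (t_hot_k - t_cold_k) / t_hot_k  # equivalently 1 - t_cold_k / t_hot_k

# Illustrative values: a source at 800 K rejecting heat near ambient (300 K).
print(f"{carnot_efficiency(800.0, 300.0):.2%}")  # 62.50%
```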
The efficiency of various heat engines proposed or used today has a large range:
3% (97 percent waste heat using low quality heat) for the ocean thermal energy conversion (OTEC) ocean power proposal
25% for most automotive gasoline engines
49% for a supercritical coal-fired power station such as the Avedøre Power Station
50%+ for long stroke marine Diesel engines
60% for a combined cycle gas turbine
The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency. Examples Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle, whereas in an open cycle the working fluid is either exchanged with the environment together with the products of combustion, in the case of the internal combustion engine, or simply vented to the environment, in the case of external combustion engines like steam engines and turbines. Everyday examples Everyday examples of heat engines include the thermal power station, internal combustion engine, firearms, refrigerators and heat pumps. Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir to a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment, and heat pumps take heat from the low temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature. In general, heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws, or the properties associated with phase changes between gas and liquid states. Earth's heat engine Earth's atmosphere and hydrosphere—Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, thereby distributing heat around the globe. A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the Earth's equatorial region and the descent of colder air in the subtropics, creating a thermally driven direct circulation with consequent net production of kinetic energy. Phase-change cycles In phase change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
Rankine cycle (classical steam engine)
Regenerative cycle (steam engine more efficient than Rankine cycle)
Organic Rankine cycle (coolant changing phase in temperature ranges of ice and hot liquid water)
Vapor to liquid cycle (drinking bird, injector, Minto wheel)
Liquid to solid cycle (frost heaving – water changing from ice to liquid and back again can lift rock up to 60 cm.)
Solid to gas cycle (firearms – solid propellants combust to hot gases.)
Gas-only cycles In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
Carnot cycle (Carnot heat engine)
Ericsson cycle (Caloric Ship John Ericsson)
Stirling cycle (Stirling engine, thermoacoustic devices)
Internal combustion engine (ICE):
Otto cycle (e.g. gasoline/petrol engine)
Diesel cycle (e.g. Diesel engine)
Atkinson cycle (Atkinson engine)
Brayton cycle or Joule cycle, originally Ericsson cycle (gas turbine)
Lenoir cycle (e.g., pulse jet engine)
Miller cycle (Miller engine)
Liquid-only cycles In these cycles and engines the working fluid is always a liquid:
Stirling cycle (Malone engine)
Electron cycles
Johnson thermoelectric energy converter
Thermoelectric (Peltier–Seebeck effect)
Thermogalvanic cell
Thermionic emission
Thermotunnel cooling
Magnetic cycles
Thermo-magnetic motor (Tesla)
Cycles used for refrigeration A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible. Refrigeration cycles include:
Air cycle machine
Gas-absorption refrigerator
Magnetic refrigeration
Stirling cryocooler
Vapor-compression refrigeration
Vuilleumier cycle
Evaporative heat engines The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air. Mesoscopic heat engines Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and perform useful work at small scales. Potential applications include, e.g., electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponentiated work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality. This relation is also a Carnot cycle equality. Efficiency The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input. From the laws of thermodynamics, after a completed cycle: W + Qh + Qc = 0, and therefore W = −(Qh + Qc), where W is the net work extracted from the engine in one cycle. (It is negative, in the IUPAC convention, since work is done by the engine.) Qh is the heat energy taken from the high temperature heat source in the surroundings in one cycle. (It is positive since heat energy is added to the engine.) Qc is the waste heat given off by the engine to the cold temperature heat sink. (It is negative since heat is lost by the engine to the sink.) In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink. In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in".
(For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work and has to put in heat Qh, for instance from combustion of a fuel, so the engine efficiency is reasonably defined as η = −W/Qh = (Qh + Qc)/Qh = 1 + Qc/Qh. The efficiency is less than 100% because of the waste heat unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again. The theoretical maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy of the working fluid is zero: ΔSh + ΔSc = 0. Note that ΔSh is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid, while ΔSc is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly, Qh = Th ΔSh and Qc = Tc ΔSc, and thus Qh/Th + Qc/Tc = 0, which gives Qc/Qh = −Tc/Th and thus the Carnot limit for heat-engine efficiency, η = 1 − Tc/Th, where Th is the absolute temperature of the hot source and Tc that of the cold sink, usually measured in kelvins. The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of any thermodynamic cycle. Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine. Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature. Endo-reversible heat-engines By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired. A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the system is broken into reversible subsystems, but with non-reversible interactions between them. A classical example is the Curzon–Ahlborn engine, very similar to a Carnot engine, but where the thermal reservoirs at temperatures Th and Tc are allowed to be different from the temperatures of the substance going through the reversible Carnot cycle, T′h and T′c. The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible), proportional to the temperature difference between reservoir and substance. In this case, a tradeoff has to be made between power output and efficiency.
If the engine is operated very slowly, the heat flux is low, and the classical Carnot result η = 1 − Tc/Th is found, but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes η = 1 − √(Tc/Th) (note: T in units of K or °R). This model does a better job of predicting how well real-world heat-engines can do (Callen 1985; see also endoreversible thermodynamics): as shown, the Curzon–Ahlborn efficiency much more closely models that observed. History Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today. Enhancements Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules:
Increase the temperature difference in the heat engine. The simplest way to do this is to increase the hot side temperature, which is the approach used in modern combined-cycle gas turbines. Unfortunately, physical limits (such as the melting point of the materials used to build the engine) and environmental concerns regarding NOx production (if the heat source is combustion with ambient air) restrict the maximum temperature of workable heat-engines. Modern gas turbines run at temperatures as high as possible within the range of temperatures necessary to maintain acceptable NOx output. Another way of increasing efficiency is to lower the output temperature. One new method of doing so is to use mixed chemical working fluids, then exploit the changing behavior of the mixtures. One of the most famous is the so-called Kalina cycle, which uses a 70/30 mix of ammonia and water as its working fluid. This mixture allows the cycle to generate useful power at considerably lower temperatures than most other processes.
Exploit the physical properties of the working fluid. The most common such exploitation is the use of water above the critical point (supercritical water). The behavior of fluids above their critical point changes radically, and with materials such as water and carbon dioxide it is possible to exploit those changes in behavior to extract greater thermodynamic efficiency from the heat engine, even if it is using a fairly conventional Brayton or Rankine cycle. A newer and very promising material for such applications is supercritical CO2. SO2 and xenon have also been considered for such applications. Downsides include issues of corrosion and erosion, the different chemical behavior above and below the critical point, the needed high pressures and – in the case of sulfur dioxide and to a lesser extent carbon dioxide – toxicity. Among the mentioned compounds xenon is least suitable for use in a nuclear reactor due to the high neutron absorption cross section of almost all isotopes of xenon, whereas carbon dioxide and water can also double as a neutron moderator for a thermal spectrum reactor.
Exploit the chemical properties of the working fluid. A fairly new and novel exploit is to use exotic working fluids with advantageous chemical properties. One such is nitrogen dioxide (NO2), a toxic component of smog, which has a natural dimer as di-nitrogen tetraoxide (N2O4). At low temperature, the N2O4 is compressed and then heated.
The increasing temperature causes each N2O4 molecule to break apart into two NO2 molecules. This lowers the molecular weight of the working fluid, which drastically increases the efficiency of the cycle. Once the NO2 has expanded through the turbine, it is cooled by the heat sink, which makes it recombine into N2O4. This is then fed back to the compressor for another cycle. Such species as aluminium bromide (Al2Br6), NOCl, and Ga2I6 have all been investigated for such uses. To date, their drawbacks have not warranted their use, despite the efficiency gains that can be realized. Heat engine processes Each process is one of the following:
isothermal (at constant temperature, maintained with heat added or removed from a heat source or sink)
isobaric (at constant pressure)
isometric/isochoric (at constant volume), also referred to as iso-volumetric
adiabatic (no heat is added to or removed from the system during the adiabatic process)
isentropic (reversible adiabatic process; no heat is added or removed during the isentropic process)
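Tying together the Efficiency and Endo-reversible sections above, the sketch below compares the Carnot limit with the Curzon–Ahlborn efficiency at maximum power for one illustrative pair of reservoir temperatures; the 565 °C / 25 °C values are assumptions made for the sake of the example, not measurements of any particular plant.

```python
import math

def carnot(t_hot: float, t_cold: float) -> float:
    # Reversible (Carnot) limit: eta = 1 - Tc/Th, temperatures in kelvins.
    return 1.0 - t_cold / t_hot

def curzon_ahlborn(t_hot: float, t_cold: float) -> float:
    # Endoreversible efficiency at maximum power output: eta = 1 - sqrt(Tc/Th).
    return 1.0 - math.sqrt(t_cold / t_hot)

# Illustrative reservoir temperatures (assumed, not measured data):
# a steam plant with a ~565 C hot side and a ~25 C cold sink.
t_hot, t_cold = 565.0 + 273.15, 25.0 + 273.15
print(f"Carnot limit:               {carnot(t_hot, t_cold):.1%}")        # ~64%
print(f"Curzon-Ahlborn (max power): {curzon_ahlborn(t_hot, t_cold):.1%}")  # ~40%
```

As the endoreversible discussion above notes, observed efficiencies of real plants tend to fall much closer to the Curzon–Ahlborn figure than to the Carnot limit.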
Physical sciences
Thermodynamics
Physics
13660
https://en.wikipedia.org/wiki/Homeomorphism
Homeomorphism
In mathematics and more specifically in topology, a homeomorphism (from Greek roots meaning "similar shape", named by Henri Poincaré), also called topological isomorphism, or bicontinuous function, is a bijective and continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces—that is, they are the mappings that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same. Very roughly speaking, a topological space is a geometric object, and a homeomorphism results from a continuous deformation of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not. However, this description can be misleading. Some continuous deformations do not result in homeomorphisms, such as the deformation of a line into a point. Some homeomorphisms do not result from continuous deformations, such as the homeomorphism between a trefoil knot and a circle. Homotopy and isotopy are precise definitions for the informal concept of continuous deformation. Definition A function f : X → Y between two topological spaces is a homeomorphism if it has the following properties: f is a bijection (one-to-one and onto), f is continuous, and the inverse function f−1 is continuous (f is an open mapping). A homeomorphism is sometimes called a bicontinuous function. If such a function exists, X and Y are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space onto itself. Being "homeomorphic" is an equivalence relation on topological spaces. Its equivalence classes are called homeomorphism classes. The third requirement, that f−1 be continuous, is essential. Consider for instance the function f : [0, 2π) → S1 (the unit circle in R2) defined by f(φ) = (cos φ, sin φ). This function is bijective and continuous, but not a homeomorphism (S1 is compact but [0, 2π) is not). The function f−1 is not continuous at the point (1, 0), because although f−1 maps (1, 0) to 0, any neighbourhood of this point also includes points that the function maps close to 2π, but the points it maps to numbers in between lie outside the neighbourhood. Homeomorphisms are the isomorphisms in the category of topological spaces. As such, the composition of two homeomorphisms is again a homeomorphism, and the set of all self-homeomorphisms of a space X forms a group, called the homeomorphism group of X, often denoted Homeo(X). This group can be given a topology, such as the compact-open topology, which under certain assumptions makes it a topological group. In some contexts, there are homeomorphic objects that cannot be continuously deformed from one to the other. Homotopy and isotopy are equivalence relations that have been introduced for dealing with such situations. Similarly, as usual in category theory, given two spaces X and Y that are homeomorphic, the space of homeomorphisms between them, Homeo(X, Y), is a torsor for the homeomorphism groups Homeo(X) and Homeo(Y), and, given a specific homeomorphism between X and Y, all three sets are identified. Examples The open interval (a, b) is homeomorphic to the real numbers R for any a < b (in this case, a bicontinuous forward mapping is given, for example, by f(x) = 1/(a − x) + 1/(b − x), while other such mappings are given by scaled and translated versions of the tan or arg tanh functions). The unit 2-disc and the unit square in R2 are homeomorphic, since the unit disc can be deformed into the unit square.
An example of a bicontinuous mapping from the square to the disc is, in polar coordinates, (ρ, θ) ↦ (ρ max(|cos θ|, |sin θ|), θ). The graph of a differentiable function is homeomorphic to the domain of the function. A differentiable parametrization of a curve is a homeomorphism between the domain of the parametrization and the curve. A chart of a manifold is a homeomorphism between an open subset of the manifold and an open subset of a Euclidean space. The stereographic projection is a homeomorphism between the unit sphere in R3 with a single point removed and the set of all points in R2 (a 2-dimensional plane). If G is a topological group, its inversion map x ↦ x−1 is a homeomorphism. Also, for any g in G, the left translation x ↦ gx, the right translation x ↦ xg, and the inner automorphism x ↦ gxg−1 are homeomorphisms. Counter-examples Rm and Rn are not homeomorphic for m ≠ n. The Euclidean real line is not homeomorphic to the unit circle as a subspace of R2, since the unit circle is compact as a subspace of Euclidean R2 but the real line is not compact. The one-dimensional intervals [0, 1] and (0, 1) are not homeomorphic because one is compact while the other is not. Properties Two homeomorphic spaces share the same topological properties. For example, if one of them is compact, then the other is as well; if one of them is connected, then the other is as well; if one of them is Hausdorff, then the other is as well; their homotopy and homology groups will coincide. Note however that this does not extend to properties defined via a metric; there are metric spaces that are homeomorphic even though one of them is complete and the other is not. A homeomorphism is simultaneously an open mapping and a closed mapping; that is, it maps open sets to open sets and closed sets to closed sets. Every self-homeomorphism of the circle S1 can be extended to a self-homeomorphism of the whole disk (Alexander's trick). Informal discussion The intuitive criterion of stretching, bending, cutting and gluing back together takes a certain amount of practice to apply correctly—it may not be obvious from the description above that deforming a line segment to a point is impermissible, for instance. It is thus important to realize that it is the formal definition given above that counts. In this case, for example, the line segment possesses infinitely many points, and therefore cannot be put into a bijection with a set containing only a finite number of points, including a single point. This characterization of a homeomorphism often leads to a confusion with the concept of homotopy, which is actually defined as a continuous deformation, but from one function to another, rather than one space to another. In the case of a homeomorphism, envisioning a continuous deformation is a mental tool for keeping track of which points on space X correspond to which points on Y—one just follows them as X deforms. In the case of homotopy, the continuous deformation from one map to the other is of the essence, and it is also less restrictive, since none of the maps involved need to be one-to-one or onto. Homotopy does lead to a relation on spaces: homotopy equivalence. There is a name for the kind of deformation involved in visualizing a homeomorphism. It is (except when cutting and regluing are required) an isotopy between the identity map on X and the homeomorphism from X to Y.
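To make the interval example from the Examples section explicit, here is one convenient choice of homeomorphism written out in LaTeX; the particular formula is just one of many valid options.

```latex
% An explicit homeomorphism between the open interval (-1, 1) and R:
% f is continuous and strictly increasing, hence a bijection onto R,
% and its inverse is continuous, so f is a homeomorphism.
\[
  f \colon (-1, 1) \to \mathbb{R}, \qquad
  f(x) = \tan\!\Big(\frac{\pi x}{2}\Big), \qquad
  f^{-1}(y) = \frac{2}{\pi} \arctan(y).
\]
% Composing with the affine homeomorphism x \mapsto (2x - a - b)/(b - a),
% which carries (a, b) onto (-1, 1), shows that every open interval (a, b)
% with a < b is homeomorphic to R.
```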
Mathematics
Topology
null
13711
https://en.wikipedia.org/wiki/Hydroxide
Hydroxide
Hydroxide is a diatomic anion with chemical formula OH−. It consists of an oxygen and hydrogen atom held together by a single covalent bond, and carries a negative electric charge. It is an important but usually minor constituent of water. It functions as a base, a ligand, a nucleophile, and a catalyst. The hydroxide ion forms salts, some of which dissociate in aqueous solution, liberating solvated hydroxide ions. Sodium hydroxide is a multi-million-ton per annum commodity chemical. The corresponding electrically neutral compound HO• is the hydroxyl radical. The corresponding covalently bound group –OH of atoms is the hydroxy group. Both the hydroxide ion and hydroxy group are nucleophiles and can act as catalysts in organic chemistry. Many inorganic substances which bear the word hydroxide in their names are not ionic compounds of the hydroxide ion, but covalent compounds which contain hydroxy groups. Hydroxide ion The hydroxide ion is naturally produced from water by the self-ionization reaction: H3O+ + OH− ⇌ 2 H2O. The equilibrium constant for this reaction, defined as Kw = [H+][OH−], has a value close to 10−14 at 25 °C, so the concentration of hydroxide ions in pure water is close to 10−7 mol∙dm−3, to satisfy the equal charge constraint. The pH of a solution is equal to the decimal cologarithm of the hydrogen cation concentration; the pH of pure water is close to 7 at ambient temperatures. The concentration of hydroxide ions can be expressed in terms of pOH, which is close to (14 − pH), so the pOH of pure water is also close to 7. Addition of a base to water will reduce the hydrogen cation concentration and therefore increase the hydroxide ion concentration (increase pH, decrease pOH) even if the base does not itself contain hydroxide. For example, ammonia solutions have a pH greater than 7 due to the reaction NH3 + H+ ⇌ NH4+, which decreases the hydrogen cation concentration, which increases the hydroxide ion concentration. pOH can be kept at a nearly constant value with various buffer solutions. In an aqueous solution the hydroxide ion is a base in the Brønsted–Lowry sense as it can accept a proton from a Brønsted–Lowry acid to form a water molecule. It can also act as a Lewis base by donating a pair of electrons to a Lewis acid. In aqueous solution both hydrogen and hydroxide ions are strongly solvated, with hydrogen bonds between oxygen and hydrogen atoms. Indeed, the bihydroxide ion has been characterized in the solid state. This compound is centrosymmetric and has a very short hydrogen bond (114.5 pm) that is similar to the length in the bifluoride ion (114 pm). In aqueous solution the hydroxide ion forms strong hydrogen bonds with water molecules. A consequence of this is that concentrated solutions of sodium hydroxide have high viscosity due to the formation of an extended network of hydrogen bonds, as in hydrogen fluoride solutions. In solution, exposed to air, the hydroxide ion reacts rapidly with atmospheric carbon dioxide, acting as an acid, to form, initially, the bicarbonate ion: OH− + CO2 ⇌ HCO3−. The equilibrium constant for this reaction can be specified either as a reaction with dissolved carbon dioxide or as a reaction with carbon dioxide gas (see Carbonic acid for values and details). At neutral or acid pH, the reaction is slow, but is catalyzed by the enzyme carbonic anhydrase, which effectively creates hydroxide ions at the active site. Solutions containing the hydroxide ion attack glass. In this case, the silicates in glass are acting as acids.
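Returning to the self-ionization relations earlier in this section, the link between Kw, pH and pOH can be checked with a few lines of Python; the concentrations below are illustrative round numbers and the helper names are ad hoc.

```python
import math

KW_25C = 1.0e-14  # ionic product of water at 25 degrees C, Kw = [H+][OH-]

def ph_from_h(h_conc: float) -> float:
    """pH from a hydrogen-ion concentration in mol/dm^3."""
    return -math.log10(h_conc)

def poh_from_h(h_conc: float) -> float:
    """pOH from a hydrogen-ion concentration, via [OH-] = Kw / [H+]."""
    return -math.log10(KW_25C / h_conc)

# Pure water at 25 C: [H+] = [OH-] = 1e-7 mol/dm^3, so pH = pOH = 7.
print(round(ph_from_h(1e-7), 2), round(poh_from_h(1e-7), 2))   # 7.0 7.0

# Adding base lowers [H+]; e.g. [H+] = 1e-9 gives pH 9 and pOH 5.
print(round(ph_from_h(1e-9), 2), round(poh_from_h(1e-9), 2))   # 9.0 5.0
```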
Basic hydroxides, whether solids or in solution, are stored in airtight plastic containers. The hydroxide ion can function as a typical electron-pair donor ligand, forming such complexes as tetrahydroxoaluminate/tetrahydroxidoaluminate [Al(OH)4]−. It is also often found in mixed-ligand complexes of the type [MLx(OH)y]z+, where L is a ligand. The hydroxide ion often serves as a bridging ligand, donating one pair of electrons to each of the atoms being bridged. As illustrated by [Pb2(OH)]3+, metal hydroxides are often written in a simplified format. It can even act as a 3-electron-pair donor, as in the tetramer [PtMe3(OH)]4. When bound to a strongly electron-withdrawing metal centre, hydroxide ligands tend to ionise into oxide ligands. For example, the bichromate ion [HCrO4]− dissociates according to [O3CrO–H]− ⇌ [CrO4]2− + H+ with a pKa of about 5.9. Vibrational spectra The infrared spectra of compounds containing the OH functional group have strong absorption bands in the region centered around 3500 cm−1. The high frequency of molecular vibration is a consequence of the small mass of the hydrogen atom as compared to the mass of the oxygen atom, and this makes detection of hydroxyl groups by infrared spectroscopy relatively easy. A band due to an OH group tends to be sharp. However, the band width increases when the OH group is involved in hydrogen bonding. A water molecule has an HOH bending mode at about 1600 cm−1, so the absence of this band can be used to distinguish an OH group from a water molecule. When the OH group is bound to a metal ion in a coordination complex, an M−OH bending mode can be observed. For example, in [Sn(OH)6]2− it occurs at 1065 cm−1. The bending mode for a bridging hydroxide tends to be at a lower frequency, as in [(bipyridine)Cu(OH)2Cu(bipyridine)]2+ (955 cm−1). M−OH stretching vibrations occur below about 600 cm−1. For example, the tetrahedral ion [Zn(OH)4]2− has bands at 470 cm−1 (Raman-active, polarized) and 420 cm−1 (infrared). The same ion has a (HO)–Zn–(OH) bending vibration at 300 cm−1. Applications Sodium hydroxide solutions, also known as lye and caustic soda, are used in the manufacture of pulp and paper, textiles, drinking water, soaps and detergents, and as a drain cleaner. Worldwide production in 2004 was approximately 60 million tonnes. The principal method of manufacture is the chloralkali process. Solutions containing the hydroxide ion are generated when a salt of a weak acid is dissolved in water. Sodium carbonate is used as an alkali, for example, by virtue of the hydrolysis reaction CO32− + H2O ⇌ HCO3− + OH− (pKa2 = 10.33 at 25 °C and zero ionic strength). Although the base strength of sodium carbonate solutions is lower than that of a concentrated sodium hydroxide solution, it has the advantage of being a solid. It is also manufactured on a vast scale (42 million tonnes in 2005) by the Solvay process. An example of the use of sodium carbonate as an alkali is when washing soda (another name for sodium carbonate) acts on insoluble esters, such as triglycerides, commonly known as fats, to hydrolyze them and make them soluble. Bauxite, a basic hydroxide of aluminium, is the principal ore from which the metal is manufactured. Similarly, goethite (α-FeO(OH)) and lepidocrocite (γ-FeO(OH)), basic hydroxides of iron, are among the principal ores used for the manufacture of metallic iron. Inorganic hydroxides Alkali metals Aside from NaOH and KOH, which enjoy very large scale applications, the hydroxides of the other alkali metals are also useful.
Lithium hydroxide (LiOH) is used in breathing gas purification systems for spacecraft, submarines, and rebreathers to remove carbon dioxide from exhaled gas. 2 LiOH + CO2 → Li2CO3 + H2O The hydroxide of lithium is preferred to that of sodium because of its lower mass. Sodium hydroxide, potassium hydroxide, and the hydroxides of the other alkali metals are also strong bases. Alkaline earth metals Beryllium hydroxide Be(OH)2 is amphoteric. The hydroxide itself is insoluble in water, with a solubility product log K*sp of −11.7. Addition of acid gives soluble hydrolysis products, including the trimeric ion [Be3(OH)3(H2O)6]3+, which has OH groups bridging between pairs of beryllium ions making a 6-membered ring. At very low pH the aqua ion [Be(H2O)4]2+ is formed. Addition of hydroxide to Be(OH)2 gives the soluble tetrahydroxoberyllate or tetrahydroxidoberyllate anion, [Be(OH)4]2−. The solubility in water of the other hydroxides in this group increases with increasing atomic number. Magnesium hydroxide Mg(OH)2 is a strong base (up to the limit of its solubility, which is very low in pure water), as are the hydroxides of the heavier alkaline earths: calcium hydroxide, strontium hydroxide, and barium hydroxide. A solution or suspension of calcium hydroxide is known as limewater and can be used to test for the weak acid carbon dioxide. The reaction Ca(OH)2 + CO2 ⇌ Ca2+ + HCO3− + OH− illustrates the basicity of calcium hydroxide. Soda lime, which is a mixture of the strong bases NaOH and KOH with Ca(OH)2, is used as a CO2 absorbent. Boron group elements The simplest hydroxide of boron, B(OH)3, known as boric acid, is an acid. Unlike the hydroxides of the alkali and alkaline earth metals, it does not dissociate in aqueous solution. Instead, it reacts with water molecules acting as a Lewis acid, releasing protons. B(OH)3 + H2O ⇌ [B(OH)4]− + H+ A variety of oxyanions of boron are known, which, in the protonated form, contain hydroxide groups. Aluminium hydroxide Al(OH)3 is amphoteric and dissolves in alkaline solution. Al(OH)3 (solid) + OH− (aq) ⇌ [Al(OH)4]− (aq) In the Bayer process for the production of pure aluminium oxide from bauxite minerals this equilibrium is manipulated by careful control of temperature and alkali concentration. In the first phase, aluminium dissolves in hot alkaline solution as [Al(OH)4]−, but other hydroxides usually present in the mineral, such as iron hydroxides, do not dissolve because they are not amphoteric. After removal of the insolubles, the so-called red mud, pure aluminium hydroxide is made to precipitate by reducing the temperature and adding water to the extract, which, by diluting the alkali, lowers the pH of the solution. Basic aluminium hydroxide AlO(OH), which may be present in bauxite, is also amphoteric. In mildly acidic solutions, the hydroxo/hydroxido complexes formed by aluminium are somewhat different from those of boron, reflecting the greater size of Al(III) vs. B(III). The concentration of the species [Al13(OH)32]7+ is very dependent on the total aluminium concentration. Various other hydroxo complexes are found in crystalline compounds. Perhaps the most important is the basic hydroxide AlO(OH), a polymeric material known by the names of the mineral forms boehmite or diaspore, depending on crystal structure. Gallium hydroxide, indium hydroxide, and thallium(III) hydroxide are also amphoteric. Thallium(I) hydroxide is a strong base. Carbon group elements Carbon forms no simple hydroxides.
The hypothetical compound C(OH)4 (orthocarbonic acid or methanetetrol) is unstable in aqueous solution: C(OH)4 → HCO3− + H3O+; HCO3− + H+ ⇌ H2CO3 Carbon dioxide is also known as carbonic anhydride, meaning that it forms by dehydration of carbonic acid H2CO3 (OC(OH)2). Silicic acid is the name given to a variety of compounds with a generic formula [SiOx(OH)4−2x]n. Orthosilicic acid has been identified in very dilute aqueous solution. It is a weak acid with pKa1 = 9.84, pKa2 = 13.2 at 25 °C. It is usually written as H4SiO4, but the formula Si(OH)4 is generally accepted. Other silicic acids such as metasilicic acid (H2SiO3), disilicic acid (H2Si2O5), and pyrosilicic acid (H6Si2O7) have been characterized. These acids also have hydroxide groups attached to the silicon; the formulas suggest that these acids are protonated forms of polyoxyanions. Few hydroxo complexes of germanium have been characterized. Tin(II) hydroxide Sn(OH)2 was prepared in anhydrous media. When tin(II) oxide is treated with alkali the pyramidal hydroxo complex [Sn(OH)3]− is formed. When solutions containing this ion are acidified, the ion [Sn3(OH)4]2+ is formed together with some basic hydroxo complexes. The structure of [Sn3(OH)4]2+ has a triangle of tin atoms connected by bridging hydroxide groups. Tin(IV) hydroxide is unknown but can be regarded as the hypothetical acid from which stannates, with a formula [Sn(OH)6]2−, are derived by reaction with the (Lewis) basic hydroxide ion. Hydrolysis of Pb2+ in aqueous solution is accompanied by the formation of various hydroxo-containing complexes, some of which are insoluble. The basic hydroxo complex [Pb6O(OH)6]4+ is a cluster of six lead centres with metal–metal bonds surrounding a central oxide ion. The six hydroxide groups lie on the faces of the two external Pb4 tetrahedra. In strongly alkaline solutions soluble plumbate ions are formed, including [Pb(OH)6]2−. Other main-group elements In the higher oxidation states of the pnictogens, chalcogens, halogens, and noble gases there are oxoacids in which the central atom is attached to oxide ions and hydroxide ions. Examples include phosphoric acid H3PO4 and sulfuric acid H2SO4. In these compounds one or more hydroxide groups can dissociate with the liberation of hydrogen cations as in a standard Brønsted–Lowry acid. Many oxoacids of sulfur are known and all feature OH groups that can dissociate. Telluric acid is often written with the formula H2TeO4·2H2O but is better described structurally as Te(OH)6. Ortho-periodic acid can lose all its protons, eventually forming the periodate ion [IO4]−. It can also be protonated in strongly acidic conditions to give the octahedral ion [I(OH)6]+, completing the isoelectronic series, [E(OH)6]z, E = Sn, Sb, Te, I; z = −2, −1, 0, +1. Other acids of iodine(VII) that contain hydroxide groups are known, in particular in salts such as the mesoperiodate ion that occurs in K4[I2O8(OH)2]·8H2O. As is common outside of the alkali metals, hydroxides of the elements in lower oxidation states are complicated. For example, phosphorous acid H3PO3 predominantly has the structure OP(H)(OH)2, in equilibrium with a small amount of P(OH)3. The oxoacids of chlorine, bromine, and iodine have the formula O(n−1)/2A(OH), where n is the oxidation number: +1, +3, +5, or +7, and A = Cl, Br, or I. The only oxoacid of fluorine is F(OH), hypofluorous acid. When these acids are neutralized the hydrogen atom is removed from the hydroxide group.
Transition and post-transition metals The hydroxides of the transition metals and post-transition metals usually have the metal in the +2 (M = Mn, Fe, Co, Ni, Cu, Zn) or +3 (M = Fe, Ru, Rh, Ir) oxidation state. None are soluble in water, and many are poorly defined. One complicating feature of the hydroxides is their tendency to undergo further condensation to the oxides, a process called olation. Hydroxides of metals in the +1 oxidation state are also poorly defined or unstable. For example, silver hydroxide Ag(OH) decomposes spontaneously to the oxide (Ag2O). Copper(I) and gold(I) hydroxides are also unstable, although stable adducts of CuOH and AuOH are known. The polymeric compounds M(OH)2 and M(OH)3 are in general prepared by increasing the pH of an aqueous solutions of the corresponding metal cations until the hydroxide precipitates out of solution. On the converse, the hydroxides dissolve in acidic solution. Zinc hydroxide Zn(OH)2 is amphoteric, forming the tetrahydroxidozincate ion in strongly alkaline solution. Numerous mixed ligand complexes of these metals with the hydroxide ion exist. In fact, these are in general better defined than the simpler derivatives. Many can be made by deprotonation of the corresponding metal aquo complex. LnM(OH2) + B LnM(OH) + BH+ (L = ligand, B = base) Vanadic acid H3VO4 shows similarities with phosphoric acid H3PO4 though it has a much more complex vanadate oxoanion chemistry. Chromic acid H2CrO4, has similarities with sulfuric acid H2SO4; for example, both form acid salts A+[HMO4]−. Some metals, e.g. V, Cr, Nb, Ta, Mo, W, tend to exist in high oxidation states. Rather than forming hydroxides in aqueous solution, they convert to oxo clusters by the process of olation, forming polyoxometalates. Basic salts containing hydroxide In some cases, the products of partial hydrolysis of metal ion, described above, can be found in crystalline compounds. A striking example is found with zirconium(IV). Because of the high oxidation state, salts of Zr4+ are extensively hydrolyzed in water even at low pH. The compound originally formulated as ZrOCl2·8H2O was found to be the chloride salt of a tetrameric cation [Zr4(OH)8(H2O)16]8+ in which there is a square of Zr4+ ions with two hydroxide groups bridging between Zr atoms on each side of the square and with four water molecules attached to each Zr atom. The mineral malachite is a typical example of a basic carbonate. The formula, Cu2CO3(OH)2 shows that it is halfway between copper carbonate and copper hydroxide. Indeed, in the past the formula was written as CuCO3·Cu(OH)2. The crystal structure is made up of copper, carbonate and hydroxide ions. The mineral atacamite is an example of a basic chloride. It has the formula, Cu2Cl(OH)3. In this case the composition is nearer to that of the hydroxide than that of the chloride CuCl2·3Cu(OH)2. Copper forms hydroxyphosphate (libethenite), arsenate (olivenite), sulfate (brochantite), and nitrate compounds. White lead is a basic lead carbonate, (PbCO3)2·Pb(OH)2, which has been used as a white pigment because of its opaque quality, though its use is now restricted because it can be a source for lead poisoning. Structural chemistry The hydroxide ion appears to rotate freely in crystals of the heavier alkali metal hydroxides at higher temperatures so as to present itself as a spherical ion, with an effective ionic radius of about 153 pm. 
Thus, the high-temperature forms of KOH and NaOH have the sodium chloride structure, which gradually freezes into a monoclinically distorted sodium chloride structure at temperatures below about 300 °C. The OH groups still rotate even at room temperature around their symmetry axes and, therefore, cannot be detected by X-ray diffraction. The room-temperature form of NaOH has the thallium iodide structure. LiOH, however, has a layered structure, made up of tetrahedral Li(OH)4 and (OH)Li4 units. This is consistent with the weakly basic character of LiOH in solution, indicating that the Li–OH bond has much covalent character. The hydroxide ion displays cylindrical symmetry in hydroxides of divalent metals Ca, Cd, Mn, Fe, and Co. For example, magnesium hydroxide Mg(OH)2 (brucite) crystallizes with the cadmium iodide layer structure, with a kind of close-packing of magnesium and hydroxide ions. The amphoteric hydroxide Al(OH)3 has four major crystalline forms: gibbsite (most stable), bayerite, nordstrandite, and doyleite. All these polymorphs are built up of double layers of hydroxide ions – with the aluminium atoms on two-thirds of the octahedral holes between the two layers – and differ only in the stacking sequence of the layers. The structures are similar to the brucite structure. However, whereas the brucite structure can be described as a close-packed structure, in gibbsite the OH groups on the underside of one layer rest on the groups of the layer below. This arrangement led to the suggestion that there are directional bonds between OH groups in adjacent layers. This is an unusual form of hydrogen bonding, since the two hydroxide ions involved would be expected to point away from each other. The hydrogen atoms have been located by neutron diffraction experiments on α-AlO(OH) (diaspore). The O–H–O distance is very short, at 265 pm; the hydrogen is not equidistant between the oxygen atoms, and the short OH bond makes an angle of 12° with the O–O line. A similar type of hydrogen bond has been proposed for other amphoteric hydroxides, including Be(OH)2, Zn(OH)2, and Fe(OH)3. A number of mixed hydroxides are known with stoichiometry A3MIII(OH)6, A2MIV(OH)6, and AMV(OH)6. As the formula suggests these substances contain M(OH)6 octahedral structural units. Layered double hydroxides may be represented by the formula . Most commonly, z = 2, and M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+, or Zn2+; hence q = x. In organic reactions Potassium hydroxide and sodium hydroxide are two well-known reagents in organic chemistry. Base catalysis The hydroxide ion may act as a base catalyst. The base abstracts a proton from a weak acid to give an intermediate that goes on to react with another reagent. Common substrates for proton abstraction are alcohols, phenols, amines, and carbon acids. The pKa value for dissociation of a C–H bond is extremely high, but the pKa values of the alpha hydrogens of a carbonyl compound are about 3 log units lower. Typical pKa values are 16.7 for acetaldehyde and 19 for acetone. Dissociation can occur in the presence of a suitable base. RC(O)CH2R' + B ⇌ RC(O)CH−R' + BH+ The base should have a pKa value not less than about 4 log units smaller, or the equilibrium will lie almost completely to the left. The hydroxide ion by itself is not a strong enough base, but it can be converted into one by adding sodium hydroxide to ethanol OH− + EtOH ⇌ EtO− + H2O to produce the ethoxide ion. The pKa for self-dissociation of ethanol is about 16, so the alkoxide ion is a strong enough base.
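As a rough illustration of the pKa argument above, the equilibrium constant for proton transfer can be estimated from the difference between the two pKa values; this is a generic textbook relation rather than a calculation from the article, and the numerical values used below (acetone about 19, ethanol about 16, water commonly quoted near 15.7) are only illustrative:

```python
def proton_transfer_K(pKa_acid: float, pKa_conjugate_acid_of_base: float) -> float:
    """Equilibrium constant for HA + B- <=> A- + HB.

    K = Ka(HA) / Ka(HB) = 10 ** (pKa(HB) - pKa(HA)),
    where HB is the conjugate acid of the base B-.
    """
    return 10.0 ** (pKa_conjugate_acid_of_base - pKa_acid)

# Acetone (pKa ~ 19) deprotonated by ethoxide (conjugate acid ethanol, pKa ~ 16):
# K ~ 1e-3, small but workable; with hydroxide (conjugate acid water, pKa ~ 15.7)
# the equilibrium lies further to the left.
print(proton_transfer_K(19.0, 16.0))
print(proton_transfer_K(19.0, 15.7))
```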
The addition of an alcohol to an aldehyde to form a hemiacetal is an example of a reaction that can be catalyzed by the presence of hydroxide. Hydroxide can also act as a Lewis-base catalyst. As a nucleophilic reagent The hydroxide ion is intermediate in nucleophilicity between the fluoride ion F− and the amide ion NH2−. Ester hydrolysis under alkaline conditions (also known as base hydrolysis) R1C(O)OR2 + OH− → R1C(O)OH + −OR2 → R1CO2− + HOR2 is an example of a hydroxide ion serving as a nucleophile. Early methods for manufacturing soap treated triglycerides from animal fat (the ester) with lye. Other cases where hydroxide can act as a nucleophilic reagent are amide hydrolysis, the Cannizzaro reaction, nucleophilic aliphatic substitution, nucleophilic aromatic substitution, and elimination reactions. The reaction medium for KOH and NaOH is usually water, but with a phase-transfer catalyst the hydroxide anion can be shuttled into an organic solvent as well, for example in the generation of the reactive intermediate dichlorocarbene.
Physical sciences
Hydroxy anion
Chemistry
13733
https://en.wikipedia.org/wiki/Hilbert%27s%20basis%20theorem
Hilbert's basis theorem
In mathematics, Hilbert's basis theorem asserts that every ideal of a polynomial ring over a field has a finite generating set (a finite basis in Hilbert's terminology). In modern algebra, rings whose ideals have this property are called Noetherian rings. Every field, and the ring of integers, are Noetherian rings. So, the theorem can be generalized and restated as: every polynomial ring over a Noetherian ring is also Noetherian. The theorem was stated and proved by David Hilbert in 1890 in his seminal article on invariant theory, where he solved several problems on invariants. In this article, he also proved two other fundamental theorems on polynomials, the Nullstellensatz (zero-locus theorem) and the syzygy theorem (theorem on relations). These three theorems were the starting point of the interpretation of algebraic geometry in terms of commutative algebra. In particular, the basis theorem implies that every algebraic set is the intersection of a finite number of hypersurfaces. Another aspect of this article had a great impact on mathematics of the 20th century; this is the systematic use of non-constructive methods. For example, the basis theorem asserts that every ideal has a finite generating set, but the original proof does not provide any way to compute it for a specific ideal. This approach was so astonishing for mathematicians of that time that the first version of the article was rejected by Paul Gordan, the greatest specialist of invariants of that time, with the comment "This is not mathematics. This is theology." Later, he recognized "I have convinced myself that even theology has its merits." Statement If R is a ring, let R[X] denote the ring of polynomials in the indeterminate X over R. Hilbert proved that if R is "not too large", in the sense that R is Noetherian, the same must be true for R[X]. Formally, Hilbert's Basis Theorem. If R is a Noetherian ring, then R[X] is a Noetherian ring. Corollary. If R is a Noetherian ring, then R[X1, ..., Xn] is a Noetherian ring. Hilbert proved the theorem (for the special case of multivariate polynomials over a field) in the course of his proof of finite generation of rings of invariants. The theorem is interpreted in algebraic geometry as follows: every algebraic set is the set of the common zeros of finitely many polynomials. Hilbert's proof is highly non-constructive: it proceeds by induction on the number of variables, and, at each induction step, uses the non-constructive proof for one variable less. Introduced more than eighty years later, Gröbner bases allow a direct proof that is as constructive as possible: Gröbner bases produce an algorithm for testing whether a polynomial belongs to the ideal generated by other polynomials. So, given an infinite sequence of polynomials, one can construct algorithmically the list of those polynomials that do not belong to the ideal generated by the preceding ones. Gröbner basis theory implies that this list is necessarily finite, and is thus a finite basis of the ideal. However, for deciding whether the list is complete, one must consider every element of the infinite sequence, which cannot be done in the finite time allowed to an algorithm. Proof Theorem. If R is a left (resp. right) Noetherian ring, then the polynomial ring R[X] is also a left (resp. right) Noetherian ring. Remark. We will give two proofs, in both only the "left" case is considered; the proof for the right case is similar. First proof Suppose I ⊆ R[X] is a non-finitely generated left ideal.
Then by recursion (using the axiom of dependent choice) there is a sequence of polynomials such that if is the left ideal generated by then is of minimal degree. By construction, is a non-decreasing sequence of natural numbers. Let be the leading coefficient of and let be the left ideal in generated by . Since is Noetherian the chain of ideals must terminate. Thus for some integer . So in particular, Now consider whose leading term is equal to that of ; moreover, . However, , which means that has degree less than , contradicting the minimality. Second proof Let be a left ideal. Let be the set of leading coefficients of members of . This is obviously a left ideal over , and so is finitely generated by the leading coefficients of finitely many members of ; say . Let be the maximum of the set , and let be the set of leading coefficients of members of , whose degree is . As before, the are left ideals over , and so are finitely generated by the leading coefficients of finitely many members of , say with degrees . Now let be the left ideal generated by: We have and claim also . Suppose for the sake of contradiction this is not so. Then let be of minimal degree, and denote its leading coefficient by . Case 1: . Regardless of this condition, we have , so is a left linear combination of the coefficients of the . Consider which has the same leading term as ; moreover while . Therefore and , which contradicts minimality. Case 2: . Then so is a left linear combination of the leading coefficients of the . Considering we yield a similar contradiction as in Case 1. Thus our claim holds, and which is finitely generated. Note that the only reason we had to split into two cases was to ensure that the powers of multiplying the factors were non-negative in the constructions. Applications Let be a Noetherian commutative ring. Hilbert's basis theorem has some immediate corollaries. By induction we see that will also be Noetherian. Since any affine variety over (i.e. a locus-set of a collection of polynomials) may be written as the locus of an ideal and further as the locus of its generators, it follows that every affine variety is the locus of finitely many polynomials — i.e. the intersection of finitely many hypersurfaces. If is a finitely-generated -algebra, then we know that , where is an ideal. The basis theorem implies that must be finitely generated, say , i.e. is finitely presented. Formal proofs Formal proofs of Hilbert's basis theorem have been verified through the Mizar project (see HILBASIS file) and Lean (see ring_theory.polynomial).
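As a concrete illustration of the ideal-membership test mentioned in the statement section, the sketch below uses SymPy's Gröbner basis routines in Python; the two generator polynomials are arbitrary examples, and this is only one possible way to perform the test, not the method of the original article.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Generators of an ideal I in Q[x, y] (chosen arbitrarily for illustration)
f1 = x**2 + y**2 - 1
f2 = x*y - 2

# A Groebner basis of I with respect to the lexicographic monomial order
G = groebner([f1, f2], x, y, order='lex')

# Membership test: a polynomial lies in I iff its remainder on division by
# the Groebner basis is zero; GroebnerBasis.contains wraps this reduction.
print(G.contains(f1 + x*f2))   # True: an explicit combination of the generators
print(G.contains(x + y))       # False for these particular generators
```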
Mathematics
Abstract algebra
null
13761
https://en.wikipedia.org/wiki/Hydrofoil
Hydrofoil
A hydrofoil is a lifting surface, or foil, that operates in water. They are similar in appearance and purpose to aerofoils used by aeroplanes. Boats that use hydrofoil technology are also simply termed hydrofoils. As a hydrofoil craft gains speed, the hydrofoils lift the boat's hull out of the water, decreasing drag and allowing greater speeds. Description The hydrofoil usually consists of a winglike structure mounted on struts below the hull, or across the keels of a catamaran in a variety of boats (see illustration). As a hydrofoil-equipped watercraft increases in speed, the hydrofoil elements below the hull(s) develop enough lift to raise the hull out of the water, which greatly reduces hull drag. This provides a corresponding increase in speed and fuel efficiency. Wider adoption of hydrofoils is prevented by the increased complexity of building and maintaining them. Hydrofoils are generally prohibitively more expensive than conventional watercraft above a certain displacement, so most hydrofoil craft are relatively small, and are mainly used as high-speed passenger ferries, where the relatively high passenger fees can offset the high cost of the craft itself. However, the design is simple enough that there are many human-powered hydrofoil designs. Amateur experimentation and development of the concept is popular. Hydrodynamic mechanics Since air and water are governed by similar fluid equations—albeit with different levels of viscosity, density, and compressibility—the hydrofoil and airfoil (both types of foil) create lift in identical ways. The foil shape moves smoothly through the water, deflecting the flow downward, which, following the Euler equations, exerts an upward force on the foil. This turning of the water creates higher pressure on the bottom of the foil and reduced pressure on the top. This pressure difference is accompanied by a velocity difference, via Bernoulli's principle, so the resulting flow field about the foil has a higher average velocity on one side than the other. When used as a lifting element on a hydrofoil boat, this upward force lifts the body of the vessel, decreasing drag and increasing speed. The lifting force eventually balances with the weight of the craft, reaching a point where the hull no longer rises further out of the water but remains in equilibrium. Since wave resistance and other impeding forces, such as various types of drag on the hull, are eliminated as the hull lifts clear, turbulence and drag act increasingly on the much smaller surface area of the hydrofoil, and decreasingly on the hull, creating a marked increase in speed. Foil configurations Early hydrofoils used V-shaped foils. Hydrofoils of this type are known as "surface-piercing" since portions of the V-shape hydrofoils rise above the water surface when foilborne. Some modern hydrofoils use fully submerged inverted T-shape foils. Fully submerged hydrofoils are less subject to the effects of wave action, and, therefore, more stable at sea and more comfortable for crew and passengers. This type of configuration, however, is not self-stabilizing. The angle of attack on the hydrofoils must be adjusted continuously to changing conditions, a control process performed by sensors, a computer, and active surfaces. History Prototypes The first evidence of a hydrofoil on a vessel appears in a British patent granted in 1869 to Emmanuel Denis Farcot, a Parisian.
He claimed that "adapting to the sides and bottom of the vessel a series or inclined planes or wedge formed pieces, which as the vessel is driven forward will have the effect of lifting it in the water and reducing the draught.". Italian inventor Enrico Forlanini began work on hydrofoils in 1898 and used a "ladder" foil system. Forlanini obtained patents in Britain and the United States for his ideas and designs. Between 1899 and 1901, British boat designer John Thornycroft worked on a series of models with a stepped hull and single bow foil. In 1909 his company built the full scale long boat, Miranda III. Driven by a engine, it rode on a bowfoil and flat stern. The subsequent Miranda IV was credited with a speed of . In May 1904 a hydrofoil boat was described being tested on the River Seine "in the neighbourhood of Paris". This boat was designed by Comte de Lambert. This had 5 variable pitch fins on the hull beneath the water so inclined that when the boat begins to move "the boat rises and the planes come to the surface" with the result that "it skims over the surface with little but the propellers beneath the surface". The boat had twin hulls 18-foot long connected by a single deck 9-foot wide, and was fitted with a 14HP De Dion-Bouton motor, the boat was reported to have reached 20 mph. It was stated that "The boat running practically on its fins resembles an aeroplane". A March 1906 Scientific American article by American hydrofoil pioneer William E. Meacham explained the basic principle of hydrofoils. Alexander Graham Bell considered the invention of the hydroplane (now regarded as a distinct type, but also employing lift) a very significant achievement, and after reading the article began to sketch concepts of what is now called a hydrofoil boat. With his chief engineer Casey Baldwin, Bell began hydrofoil experiments in the summer of 1908. Baldwin studied the work of the Italian inventor Enrico Forlanini and began testing models based on those designs, which led to the development of hydrofoil watercraft. During Bell's world tour of 1910–1911, Bell and Baldwin met with Forlanini in Italy, where they rode in his hydrofoil boat over Lake Maggiore. Baldwin described it as being as smooth as flying. On returning to Bell's large laboratory at his Beinn Bhreagh estate near Baddeck, Nova Scotia, they experimented with a number of designs, culminating in Bell's HD-4. Using Renault engines, a top speed of was achieved, accelerating rapidly, taking waves without difficulty, steering well and showing good stability. Bell's report to the United States Navy permitted him to obtain two 260 kW (350 hp) engines. On 9 September 1919 the HD-4 set a world marine speed record of , which stood for two decades. A full-scale replica of the HD-4 is viewable at the Alexander Graham Bell National Historic Site museum in Baddeck, Nova Scotia. In the early 1950s an English couple built the White Hawk, a jet-powered hydrofoil water craft, in an attempt to beat the absolute water speed record. However, in tests, White Hawk could barely top the record breaking speed of the 1919 HD-4. The designers had faced an engineering phenomenon that limits the top speed of even modern hydrofoils: cavitation disturbs the lift created by the foils as they move through the water at speed above , bending the lifting foil. First passenger boats German engineer Hanns von Schertel worked on hydrofoils prior to and during World War II in Germany. After the war, the Russians captured Schertel's team. 
As Germany was not authorized to build fast boats, Schertel went to Switzerland, where he established the Supramar company. In 1952, Supramar launched the first commercial hydrofoil, PT10 "Freccia d'Oro" (Golden Arrow), in Lake Maggiore, between Switzerland and Italy. The PT10 is of surface-piercing type, it can carry 32 passengers and travel at . In 1968, the Bahraini born banker Hussain Najadi acquired the Supramar AG and expanded its operations into Japan, Hong Kong, Singapore, the UK, Norway and the US. General Dynamics of the United States became its licensee, and the Pentagon awarded its first R&D naval research project in the field of supercavitation. Hitachi Shipbuilding of Osaka, Japan, was another licensee of Supramar, as well as many leading ship owners and shipyards in the OECD countries. From 1952 to 1971, Supramar designed many models of hydrofoils: PT20, PT50, PT75, PT100 and PT150. All are of surface-piercing type, except the PT150 combining a surface-piercing foil forward with a fully submerged foil in the aft location. Over 200 of Supramar's design were built, most of them by Rodriquez (headed at the time by Engineer Carlo Rodriquez in Sicily, Italy. During the same period the Soviet Union experimented extensively with hydrofoils, constructing hydrofoil river boats and ferries with streamlined designs during the cold war period and into the 1980s. Such vessels include the Raketa (1957) type, followed by the larger Meteor type and the smaller Voskhod type. One of the most successful Soviet designer/inventor in this area was Rostislav Alexeyev, who some consider the 'father' of the modern hydrofoil due to his 1950s era high speed hydrofoil designs. Later, circa 1970s, Alexeyev combined his hydrofoil experience with the surface effect principle to create the Ekranoplan. Extensive investment in this type of technology in the USSR resulted in the largest civil hydrofoil fleet in the world and the making of the Meteor type, the most successful hydrofoil in history, with more than 400 units built. In 1961, SRI International issued a study on "The Economic Feasibility of Passenger Hydrofoil Craft in US Domestic and Foreign Commerce". Commercial use of hydrofoils in the US first appeared in 1961 when two commuter vessels were commissioned by Harry Gale Nye, Jr.'s North American Hydrofoils to service the route from Atlantic Highlands, New Jersey to the financial district of Lower Manhattan. Military usage Germany A 17-ton German craft VS-6 Hydrofoil was designed and constructed in 1940, completed in 1941 for use as a mine layer; it was tested in the Baltic Sea, producing speeds of 47 knots. Tested against a standard E-boat over the next three years it performed well but was not brought into production. Being faster it could carry a higher payload and was capable of travelling over minefields but was prone to damage and noisier. Canada In Canada during World War II, Baldwin worked on an experimental smoke laying hydrofoil (later called the Comox Torpedo) that was later superseded by other smoke-laying technology and an experimental target-towing hydrofoil. The forward two foil assemblies of what is believed to be the latter hydrofoil were salvaged in the mid-1960s from a derelict hulk in Baddeck, Nova Scotia by Colin MacGregor Stevens. These were donated to the Maritime Museum in Halifax, Nova Scotia. 
The Canadian Armed Forces built and tested a number of hydrofoils (e.g., Baddeck and two vessels named Bras d'Or), which culminated in the high-speed anti-submarine hydrofoil HMCS Bras d'Or in the late 1960s. However, the program was cancelled in the early 1970s due to a shift away from anti-submarine warfare by the Canadian military. The Bras d'Or was a surface-piercing type that performed well during her trials, reaching a maximum speed of . Soviet Union The USSR introduced several hydrofoil-based fast attack craft into their navy, principally: Sarancha class missile boat, a unique vessel built in the 1970s Turya class torpedo boat, introduced in 1972 and still in service Matka class missile boat, introduced in the 1980s and still in service Muravey class patrol boat, introduced in the 1980s and still in service Project 664 United States The US Navy began experiments with hydrofoils in the mid-1950s by funding a sailing vessel that used hydrofoils to reach speeds in the 30 mph range. The XCH-4 (officially, Experimental Craft, Hydrofoil No. 4), designed by William P. Carl, exceeded speeds of and was mistaken for a seaplane due to its shape. Halobates was a 1957 US Navy prototype hydrofoil boat built by Miami Shipbuilding. The US Navy implemented a small number of combat hydrofoils, such as the Pegasus class, from 1977 through 1993. These hydrofoils were fast and well armed. Italy The Italian Navy used six hydrofoils of the Sparviero class starting in the late 1970s. These were armed with a 76 mm gun and two missiles, and were capable of speeds up to . Three similar boats were built for the Japan Maritime Self-Defense Force. Sailing and sports Several editions of the America's Cup have been raced with foiling yachts. In 2013 and 2017 respectively the AC72 and AC50 classes of catamaran, and in 2021 the AC75 class of foiling monohulls with canting arms. The French experimental sail powered hydrofoil Hydroptère is the result of a research project that involves advanced engineering skills and technologies. In September 2009, the Hydroptère set new sailcraft world speed records in the 500 m category, with a speed of and in the category with a speed of . The 500 m speed record for sailboats is currently held by the Vestas Sailrocket, an exotic design which operates in effect as a hydrofoil. Another trimaran sailboat is the Windrider Rave. The Rave is a commercially available , two person, hydrofoil trimaran, capable of reaching speeds of . The boat was designed by Jim Brown. The Moth dinghy has evolved into some radical foil configurations. Hobie Sailboats produced a production foiling trimaran, the Hobie Trifoiler, the fastest production sailboat. Trifoilers have clocked speeds upward of thirty knots. A new kayak design, called Flyak, has hydrofoils that lift the kayak enough to significantly reduce drag, allowing speeds of up to . Some surfers have developed surfboards with hydrofoils called foilboards, specifically aimed at surfing big waves further out to sea. Quadrofoil Q2 is a two-seater, four-foiled hydrofoil electrical leisure watercraft. Its initial design was set in 2012 and it has been available commercially since the end of 2016. Powered by a 5.2-kWh lithium-ion battery pack and propelled by a 5.5 kW motor, it reaches the top speed of 40 km/h and has 80 km of range. The Manta5 Hydrofoiler XE-1 is a Hydrofoil E-bike, designed and built in New Zealand that has since been available commercially for pre-order since late 2017. 
Propelled by a 400 watt motor, it can reach speeds exceeding 14 km/h with a weight of 22 kg. A single charge of the battery lasts an hour for a rider weighing 85 kg. Candela, a Swedish company, is producing a recreational hydrofoil powerboat, making strong claims for efficiency, performance, and range. Hydrofoils are now widely used with kitesurfing, that is traction kites over water. Hydrofoils are a new trend in windsurfing - including the new Summer Olympic class, the IQFoil, and more recently with Wing foiling, which are essentially a kite with no strings, or a hand-held sail. Modern passenger boats Soviet-built Voskhods are one of the most successful passenger hydrofoil designs. Manufactured in Soviet and later Ukrainian Crimea, they are in service in more than 20 countries. The most recent model, Voskhod-2M FFF, also known as Eurofoil, was built in Feodosiya for the Dutch public transport operator Connexxion. Mid-2010s saw a Russian governmental program aimed at restoring passenger hydrofoil production. The , based on the earlier , Kolhida and Katran models, became the first to enter production, initially on factory in Rybinsk, and later on More shipyard in Feodosiya. Since 2018, the ships are running Sevastopol-Yalta and Sochi-Gelenzhik-Novorossiysk, with a Sevastopol-Sochi connection in the immediate plans in 2021. At the same time, the Alekseyev Bureau began building lighter, smaller hydrofoils, based on a widely successful model, at its own plant in Nizhny Novgorod, the relatively shallow-draft boats used on the Ob and the Volga. The , a development of the , became the Valday's larger sibling, the first ship launched in Nizhny Novgorod in August 2021. The Boeing 929 is widely used in Asia for passenger services, between Hong Kong and Macau and between the many islands of Japan, also on the Korean peninsula. The main user is Hong Kong private corp. Current operation Current operators of hydrofoils include: TurboJET service, which speeds passengers across the Pearl River Delta between Hong Kong and Macau in less than an hour, with an average speed of 45 knots (83 km/h), mainly using Boeing's Jetfoil. Also services Shenzhen, Panyu (Nansha) and Kowloon. Operated by Shun Tak-China Travel Ship Management Limited. Voskhod and Polesye service between Tulcea and Sulina on the Danube. Meteor and Polesye service in Poland between Szczecin and Świnoujście. Cometa service between Nizhneangarsk and Irkutsk on Lake Baikal. Cometa service between Vladivostok and Slavyanka. Polesye service between Mozyr and Turov on the Pripyat River (Belarus). Meteor service between Saint Petersburg, Russia and the Peterhof Palace, a summer palace of Russian tsars. Meteor service between Saint Petersburg, Russia and the Kronstadt, a strongly fortified Russian seaport town, located on Kotlin Island, near the head of the Gulf of Finland. It lies thirty kilometers west of Saint Petersburg. Since 2012 replaced by a catamaran Mercury. Meteor, Raketa and Voskhod hydrofoil types operate all over Volga, Don and Kama Rivers in Russia. Also the Lena River and Amur River. Meteor hydrofoils are operated by a number of tour operators in Croatia, mostly for packaged tours, but there are also some scheduled services to islands in Adriatic. 
Hydrofoils are regularly operated on the three major Italian lakes by branches of the Ministry of Infrastructure and Transport: Navigazione Lago Maggiore services routes on Lake Maggiore between Locarno and Arona, Navigazione Lago di Como services routes on Lake Como, and Navigazione Lago di Garda services routes on Lake Garda. Three units of the Rodriguez RHS150 type operate on each lake, for a total of nine hydrofoils. Former Russian hydrofoils are used in southern Italy for connection with islands of Lazio and Campania. SNAV has five RHS200, RHS160 and RHS150 used in the connections between Naples and the islands of Capri and Ischia. A regular hydrofoil service runs from Istanbul to Yalova. Hellenic Seaways operate their Flying Dolphins service over many routes in the Aegean, between the Cyclades, Saronic Gulf islands such as Aegina and Poros, and Athens. Meteor (2), Polesye (4) and Voskhod (3) hydrofoil types operate in Hungary. MAHART PassNave Ltd. operates scheduled hydrofoil liners between Budapest, Bratislava and Vienna, inland liners between Budapest and the Danube Bend, and theme cruises to Komárom, Solt, Kalocsa and Mohács. "Kometa" Flying Dolphin services are currently operated by Joy Cruises between Corfu and Paxos. They run from Corfu Port to Gaios using two hydrofoils: Ilida and Ilida II. The company operates also an international service from Corfu to Saranda (Albania) using the hydrofoil Ilida Dolphin of the same type. "Kometa" type hydrofoils (registered in Albania) are operated by Ionian Cruises and Finikas Lines between Saranda and Corfu. Russian hydrofoils of the Kometa type operated on the Bulgarian Black Sea Coast connecting Varna, Nesebar, Burgas, Sozopol, Primorsko, and Tsarevo, and Raketa and Meteor models served the Bulgarian Danube ports between Rousse and Vidin. Both services were discontinued in the 1990s. In 2011 the service reopened between Varna, Nesebar, Burgas and Sozopol, operated by Bulgarian Hydrofoils Ltd. Vietnamese Greenline Company operates hourly shuttle service between Ho Chi Minh City, Vung Tau and Con Dao island. Hydrofoil lines using the Russian-built Meteor type also connect Hai Phong, Ha Long and Mong Cai in North Vietnam, Phan Thiet and Phu Quy Island and between Rach Gia and Phu Quoc Island in the South. The service between Busan, South Korea and Fukuoka, Japan is operated by two companies. Japanese JR Kyūshū Jet Ferry operates Beetle five times a day. Korean Miraejet operates Kobee three to four times a day. All of their fleets are Boeing 929. As of February 2008, all of the commercial lines in Japan use Boeing 929. The routes include: Sado Kisen operates the route between Sado and Niigata. Tōkai Kisen operates Seven Islands, running between Tokyo and Izu Islands, via Tateyama or Yokosuka. The destinations include Izu Ōshima, Toshima, Niijima, Shikinejima, and Kōzushima. The same ship also links Atami and Izu Ōshima. Kyūshū Yūsen operates the route between Fukuoka, Iki, and the two ports of Tsushima. Kyūshū Shōsen operates the route between Nagasaki and the two of Gotō Islands, namely Fukuejima and Nakadōrijima. Kagoshima Shōsen and Cosmo Line operate the various routes between Kagoshima and Tanegashima or Yakushima. In 2012, Agriculture, Fisheries and Conservation Department (AFCD) in Hong Kong leased a 12-meter HAWC (Hydrofoil Assisted Water Craft), a catamaran, to patrol the Hong Kong UNESCO Global Geopark in the Sai Kung Volcanic Rock Region. 
In 2017, Voskhod boat began operating on 2 lines in Ukraine: Nova Kakhovka-Kherson-Hola Prystan, Mykolaiv-Kinburn Spit, Ochakiv-Kinburn Spit. In July 2018, the new generation Kometa 120M boat has started operation on the busy Sevastopol-Yalta route in Crimea, with the plans to add two more and possible other routes in 2019. In Italy hydrofoils have been used for commercial connections since 1956, by the Rodriguez shipyards and the SNAV company. Currently, the main hydrofoil operator in Italy is Liberty Lines, which operates connections between the smaller Sicilian islands with Sicily and Calabria and between Trieste and some towns on the Croatian coast. SNAV operates connections between Naples and the smaller Campanian islands and - in the summer period - between Naples and the Aeolian Islands. Discontinued operations Until 31 December 2013, Fast Flying Ferries operated by Connexxion provided a regular public transport service over the North Sea Canal between Amsterdam Central Station and Velsen-Zuid in the Netherlands, using Voskhod 2M hydrofoils. It was stopped due to a new speed limit. Between 1981 and 1990, Transmediterranea operated a service of hydrofoils connecting Ceuta and Algeciras in the Strait of Gibraltar. The crossing took half an hour, in comparison to the hour and a half of conventional ferries. Due to the common extreme winds and storms that take place in winter in the Strait of Gibraltar, the service was replaced in 1990 by catamarans, which were also able to carry cars. At the peak of the year, in summer, there was a service every half an hour in each direction. This high-speed connection had a big impact on the development of Ceuta, facilitating one-day business trips to mainland Spain. Between 1964 and 1991 the Sydney hydrofoils operated on Sydney Harbour between Circular Quay and Manly. Between 1969 and 1998 Red Funnel operated between Southampton and Cowes, Isle of Wight. During the 1970s and 1980s there were frequent services between Belgrade and Tekija in Đerdap gorge. The distance of was covered in 3 hours and 30 minutes downstream and 4 hours upstream. Between 1980 and 1981, B+I Line operated a Boeing 929 jetfoil, named Cú Na Mara (Hound of the Sea), between Liverpool and Dublin. The service was not successful and was discontinued at the end of the 1981 season. Between the 1960s and 1985 there were hydrofoils going between Malmö, Sweden and Copenhagen, Denmark. They were retired and exchanged for catamarans. The service got cancelled when the Öresund Bridge got built in the early 2000s. Condor Ferries operated six hydrofoil ferries over a 29-year period between the Channel Islands, the south coast of England and Saint-Malo in France. Following the restoration of Estonian independence in the 1990s, the regular ferry service between Helsinki and Tallinn was augmented by Soviet built hydrofoils during the summer season in periods of good weather. The higher speed service competed with the traditional ro-ro ferries but allowed easy day trips for pedestrian travellers. They were ultimately replaced with high-speed catamarans that could also carry vehicles and have better seaworthiness; however, the latter ceased operations as the operator filed for bankruptcy in May 2018.
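Returning to the hydrodynamics discussed earlier in this article: the lift developed by a submerged foil is commonly estimated with the standard lift equation L = ½ρv²ACL. The sketch below is a generic textbook estimate rather than a calculation from this article; the foil area, lift coefficient, and craft mass are invented round numbers used purely for illustration.

```python
RHO_WATER = 1025.0   # sea water density, kg/m^3 (approximate)
G = 9.81             # gravitational acceleration, m/s^2

def foil_lift(speed_m_s: float, area_m2: float, lift_coefficient: float) -> float:
    """Lift force in newtons from the standard lift equation L = 0.5*rho*v^2*A*CL."""
    return 0.5 * RHO_WATER * speed_m_s**2 * area_m2 * lift_coefficient

# Illustrative numbers only: 1.5 m^2 of total foil area at CL = 0.4, craft mass 4000 kg.
craft_weight = 4000.0 * G   # roughly 39 kN to be supported once the hull clears the water
for speed in (4.0, 8.0, 12.0):   # m/s
    lift = foil_lift(speed, 1.5, 0.4)
    # Lift grows with the square of speed; for these made-up values it exceeds
    # the craft weight somewhere between 8 and 12 m/s, the point at which the
    # hull would be lifted clear of the water.
    print(speed, round(lift), craft_weight < lift)
```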
Technology
Naval transport
null
13764
https://en.wikipedia.org/wiki/Hassium
Hassium
Hassium is a synthetic chemical element; it has symbol Hs and atomic number 108. It is highly radioactive: its most stable known isotopes have half-lives of about ten seconds. One of its isotopes, Hs, has magic numbers of protons and neutrons for deformed nuclei, giving it greater stability against spontaneous fission. Hassium is a superheavy element; it has been produced in a laboratory in very small quantities by fusing heavy nuclei with lighter ones. Natural occurrences of hassium have been hypothesized but never found. In the periodic table, hassium is a transactinide element, a member of the 7th period and group 8; it is thus the sixth member of the 6d series of transition metals. Chemistry experiments have confirmed that hassium behaves as the heavier homologue to osmium, reacting readily with oxygen to form a volatile tetroxide. The chemical properties of hassium have been only partly characterized, but they compare well with the chemistry of the other group 8 elements. The main innovation that led to the discovery of hassium was cold fusion, where the fused nuclei do not differ by mass as much as in earlier techniques. It relied on greater stability of target nuclei, which in turn decreased excitation energy. This decreased the number of neutrons ejected during synthesis, creating heavier, more stable resulting nuclei. The technique was first tested at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, in 1974. JINR used this technique to attempt synthesis of element 108 in 1978, in 1983, and in 1984; the latter experiment resulted in a claim that element 108 had been produced. Later in 1984, a synthesis claim followed from the Gesellschaft für Schwerionenforschung (GSI) in Darmstadt, Hesse, West Germany. The 1993 report by the Transfermium Working Group, formed by the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP), concluded that the report from Darmstadt was conclusive on its own whereas that from Dubna was not, and major credit was assigned to the German scientists. GSI formally announced they wished to name the element hassium after the German state of Hesse (Hassia in Latin), home to the facility in 1992; this name was accepted as final in 1997. Introduction to the heaviest elements Discovery Cold fusion Nuclear reactions used in the 1960s resulted in high excitation energies that required expulsion of four or five neutrons; these reactions used targets made of elements with high atomic numbers to maximize the size difference between the two nuclei in a reaction. While this increased the chance of fusion due to the lower electrostatic repulsion between target and projectile, the formed compound nuclei often broke apart and did not survive to form a new element. Moreover, fusion inevitably produces neutron-poor nuclei, as heavier elements need more neutrons per proton for stability; therefore, the necessary ejection of neutrons results in final products that are typically shorter-lived. As such, light beams (six to ten protons) allowed synthesis of elements only up to 106. To advance to heavier elements, Soviet physicist Yuri Oganessian at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, Russian SFSR, Soviet Union, proposed a different mechanism, in which the bombarded nucleus would be lead-208, which has magic numbers of protons and neutrons, or another nucleus close to it. 
Each proton and neutron has a fixed rest energy; those of all protons are equal and so are those of all neutrons. In a nucleus, some of this energy is diverted to binding protons and neutrons; if a nucleus has a magic number of protons and/or neutrons, then even more of its rest energy is diverted, which makes the nuclide more stable. This additional stability requires more energy for an external nucleus to break the existing one and penetrate it. More energy diverted to binding nucleons means less rest energy, which in turn means less mass (mass is proportional to rest energy). More equal atomic numbers of the reacting nuclei result in greater electrostatic repulsion between them, but the lower mass excess of the target nucleus balances it. This leaves less excitation energy for the new compound nucleus, which necessitates fewer neutron ejections to reach a stable state. Due to this energy difference, the former mechanism became known as "hot fusion" and the latter as "cold fusion". Cold fusion was first declared successful in 1974 at JINR, when it was tested for synthesis of the yet-undiscovered element106. These new nuclei were projected to decay via spontaneous fission. The physicists at JINR concluded element 106 was produced in the experiment because no fissioning nucleus known at the time showed parameters of fission similar to what was observed during the experiment and because changing either of the two nuclei in the reactions negated the observed effects. Physicists at Lawrence Berkeley Laboratory (LBL; originally Radiation Laboratory, RL, and later Lawrence Berkeley National Laboratory, LBNL) of the University of California in Berkeley, California, United States, also expressed great interest in the new technique. When asked about how far this new method could go and if lead targets were a physics' Klondike, Oganessian responded, "Klondike may be an exaggeration [...] But soon, we will try to get elements 107... 108 in these reactions." Reports Synthesis of element108 was first attempted in 1978 by a team led by Oganessian at JINR. The team used a reaction that would generate element108, specifically, the isotope 108, from fusion of radium (specifically, the isotope and calcium . The researchers were uncertain in interpreting their data, and their paper did not unambiguously claim to have discovered the element. The same year, another team at JINR investigated the possibility of synthesis of element108 in reactions between lead and iron ; they were uncertain in interpreting the data, suggesting the possibility that element108 had not been created. In 1983, new experiments were performed at JINR. The experiments probably resulted in the synthesis of element108; bismuth was bombarded with manganese to obtain 108, lead (Pb) was bombarded with iron (Fe) to obtain 108, and californium was bombarded with neon to obtain 108. These experiments were not claimed as a discovery and Oganessian announced them in a conference rather than in a written report. In 1984, JINR researchers in Dubna performed experiments set up identically to the previous ones; they bombarded bismuth and lead targets with ions of manganese and iron, respectively. Twenty-one spontaneous fission events were recorded; the researchers concluded they were caused by 108. Later in 1984, a research team led by Peter Armbruster and Gottfried Münzenberg at Gesellschaft für Schwerionenforschung (GSI; Institute for Heavy Ion Research) in Darmstadt, Hesse, West Germany, tried to create element108. 
The team bombarded a lead (Pb) target with accelerated iron (Fe) nuclei. GSI's experiment to create element108 was delayed until after their creation of element109 in 1982, as prior calculations had suggested that even–even isotopes of element108 would have spontaneous fission half-lives of less than one microsecond, making them difficult to detect and identify. The element108 experiment finally went ahead after 109 had been synthesized and was found to decay by alpha emission, suggesting that isotopes of element108 would do likewise, and this was corroborated by an experiment aimed at synthesizing isotopes of element106. GSI reported synthesis of three atoms of 108. Two years later, they reported synthesis of one atom of the even–even 108. Arbitration In 1985, the International Union of Pure and Applied Chemistry (IUPAC) and the International Union of Pure and Applied Physics (IUPAP) formed the Transfermium Working Group (TWG) to assess discoveries and establish final names for elements with atomic numbers greater than 100. The party held meetings with delegates from the three competing institutes; in 1990, they established criteria for recognition of an element and in 1991, they finished the work of assessing discoveries and disbanded. These results were published in 1993. According to the report, the 1984 works from JINR and GSI simultaneously and independently established synthesis of element108. Of the two 1984 works, the one from GSI was said to be sufficient as a discovery on its own. The JINR work, which preceded the GSI one, "very probably" displayed synthesis of element108. However, that was determined in retrospect given the work from Darmstadt; the JINR work focused on chemically identifying remote granddaughters of element108 isotopes (which could not exclude the possibility that these daughter isotopes had other progenitors), while the GSI work clearly identified the decay path of those element108 isotopes. The report concluded that the major credit should be awarded to GSI. In written responses to this ruling, both JINR and GSI agreed with its conclusions. In the same response, GSI confirmed that they and JINR were able to resolve all conflicts between them. Naming Historically, a newly discovered element was named by its discoverer. The first regulation came in 1947, when IUPAC decided naming required regulation in case there are conflicting names. These matters were to be resolved by the Commission of Inorganic Nomenclature and the Commission of Atomic Weights. They would review the names in case of a conflict and select one; the decision would be based on a number of factors, such as usage, and would not be an indicator of priority of a claim. The two commissions would recommend a name to the IUPAC Council, which would be the final authority. The discoverers held the right to name an element, but their name would be subject to approval by IUPAC. The Commission of Atomic Weights distanced itself from element naming in most cases. In Mendeleev's nomenclature for unnamed and undiscovered elements, hassium would be called "eka-osmium", as in "the first element below osmium in the periodic table" (from Sanskrit eka meaning "one"). In 1979, IUPAC published recommendations according to which the element was to be called "unniloctium" (symbol "Uno"), a systematic element name as a placeholder until the element was discovered and the discovery then confirmed, and a permanent name was decided. 
Although these recommendations were widely followed in the chemical community, the competing physicists in the field ignored them. They either called it "element 108", with the symbols E108, (108) or 108, or used the proposed name "hassium". In 1990, in an attempt to break a deadlock in establishing priority of discovery and naming of several elements, IUPAC reaffirmed in its nomenclature of inorganic chemistry that after existence of an element was established, the discoverers could propose a name. (Also, the Commission of Atomic Weights was excluded from the naming process.) The first publication on criteria for an element discovery, released in 1991, specified the need for recognition by the TWG. Armbruster and his colleagues, the officially recognized German discoverers, held a naming ceremony for the elements 107 through 109, which had all been recognized as discovered by GSI, on 7 September 1992. For element 108, the scientists proposed the name "hassium". It is derived from the Latin name Hassia for the German state of Hesse where the institute is located. This name was proposed to IUPAC in a written response to their ruling on priority of discovery claims of elements, signed 29 September 1992. The process of naming element 108 was part of a larger process of naming a number of elements starting with element 101; three teams—JINR, GSI, and LBL—claimed discovery of several elements and the right to name those elements. Sometimes, these claims clashed; since a discoverer was considered entitled to the naming of an element, conflicts over priority of discovery often resulted in conflicts over the names of these new elements. These conflicts became known as the Transfermium Wars. Different suggestions were made for naming the whole set of elements from 101 onward, and these occasionally assigned names suggested by one team to elements discovered by another. However, not all suggestions were met with equal approval; the teams openly protested naming proposals on several occasions. In 1994, the IUPAC Commission on Nomenclature of Inorganic Chemistry recommended that element 108 be named "hahnium" (Hn) after the German physicist Otto Hahn, so that elements named after Hahn and Lise Meitner (it was recommended that element 109 be named meitnerium, following GSI's suggestion) would be next to each other, honouring their joint discovery of nuclear fission; IUPAC commented that they felt the German suggestion was obscure. GSI protested, saying this proposal contradicted the long-standing convention of giving the discoverer the right to suggest a name; the American Chemical Society supported GSI. The name "hahnium", albeit with the different symbol Ha, had already been proposed and used by the American scientists for element 105, for which they had a discovery dispute with JINR; they thus protested the confusing scrambling of names. Following the uproar, IUPAC formed an ad hoc committee of representatives from the national adhering organizations of the three countries home to the competing institutions; they produced a new set of names in 1995. Element 108 was again named hahnium; this proposal was also retracted. The final compromise was reached in 1996 and published in 1997; element 108 was named hassium (Hs). Simultaneously, the name dubnium (Db; from Dubna, the JINR location) was assigned to element 105, and the name hahnium was not used for any element.
The official justification for this naming, alongside that of darmstadtium for element 110, was that it completed a set of geographic names for the location of the GSI; this set had been initiated by the 19th-century names europium and germanium. This set would serve as a response to the earlier naming of americium, californium, and berkelium for elements discovered in Berkeley. Armbruster commented on this, "this bad tradition was established by Berkeley. We wanted to do it for Europe." Later, when commenting on the naming of element 112, Armbruster said, "I did everything to ensure that we do not continue with German scientists and German towns." Isotopes Hassium has no stable or naturally occurring isotopes. Several radioisotopes have been synthesized in the laboratory, either by fusing two atoms or by observing the decay of heavier elements. As of 2019, the quantity of all hassium ever produced was on the order of hundreds of atoms. Thirteen isotopes with mass numbers 263 through 277 (except for 274 and 276) have been reported, six of which have known metastable states (one of them unconfirmed). Most of these isotopes decay mainly through alpha decay; this is the most common mode for all isotopes for which comprehensive decay characteristics are available; the only exception is one isotope, which undergoes spontaneous fission. Lighter isotopes were usually synthesized by direct fusion of two nuclei, whereas heavier isotopes were typically observed as decay products of nuclei with larger atomic numbers. Atomic nuclei have well-established nuclear shells, which make nuclei more stable. If a nucleus has certain numbers of protons or neutrons (magic numbers) that complete a nuclear shell, then the nucleus is even more stable against decay. The highest known magic numbers are 82 for protons and 126 for neutrons. This notion is sometimes expanded to include additional numbers between those magic numbers, which also provide some additional stability and indicate closure of "sub-shells". Unlike the better-known lighter nuclei, superheavy nuclei are deformed. Until the 1960s, the liquid drop model was the dominant explanation for nuclear structure. It suggested that the fission barrier would disappear for nuclei with ~280 nucleons. It was thus thought that spontaneous fission would occur nearly instantly, before nuclei could form a structure that could stabilize them; it appeared that nuclei with Z≈103 were too heavy to exist for a considerable length of time. The later nuclear shell model suggested that nuclei with ~300 nucleons would form an island of stability in which nuclei would be more resistant to spontaneous fission and would mainly undergo alpha decay with longer half-lives; the next doubly magic nucleus (having magic numbers of both protons and neutrons) is expected to lie in the center of the island of stability near Z=110–114 and the predicted magic neutron number N=184. Subsequent discoveries suggested that the predicted island might be further away than originally anticipated. They also showed that nuclei intermediate between the long-lived actinides and the predicted island are deformed, and gain additional stability from shell effects, against alpha decay and especially against spontaneous fission. The center of the region on a chart of nuclides that would correspond to this stability for deformed nuclei was determined as Hs, with 108 expected to be a magic number for protons for deformed nuclei—nuclei that are far from spherical—and 162 a magic number for neutrons for such nuclei.
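As a rough illustration of the liquid-drop argument above (a back-of-the-envelope scaling, not the original calculations), the tendency of a heavy nucleus to fission can be gauged by the fissility parameter x = (Z^2/A) / (Z^2/A)_crit, where the critical value (Z^2/A)_crit ≈ 48 is a commonly quoted figure and is assumed here. For a nucleus with Z = 108 and A = 270, Z^2/A = 11664/270 ≈ 43, so x ≈ 0.9; in the pure liquid-drop picture the fission barrier shrinks rapidly as x approaches 1, which is why such nuclei were expected to fission almost immediately unless shell effects intervene.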
Experiments on lighter superheavy nuclei, as well as those closer to the expected island, have shown greater stability against spontaneous fission than previously anticipated, demonstrating the importance of shell effects on nuclei. Theoretical models predict a region of instability for some hassium isotopes to lie around A=275 and N=168–170, which is between the predicted neutron shell closures at N=162 for deformed nuclei and N=184 for spherical nuclei. Nuclides in this region are predicted to have low fission barrier heights, resulting in short partial half-lives toward spontaneous fission. This prediction is supported by the observed 11-millisecond half-life of Hs and the 5-millisecond half-life of the neighbouring isobar Mt, because the hindrance factors from the odd nucleon were shown to be much lower than otherwise expected. The measured half-lives are even lower than those originally predicted for the even–even Hs and Ds, which suggests a gap in stability away from the shell closures and perhaps a weakening of the shell closures in this region. In 1991, Polish physicists Zygmunt Patyk and Adam Sobiczewski predicted that 108 is a proton magic number for deformed nuclei and 162 is a neutron magic number for such nuclei. This means such nuclei are permanently deformed in their ground state but have high, narrow fission barriers to further deformation and hence relatively long spontaneous-fission half-lives. The computed prospects of shell stabilization for Hs made it a promising candidate for a deformed doubly magic nucleus. Experimental data is scarce, but the existing data is interpreted by the researchers to support the assignment of N=162 as a magic number. In particular, this conclusion was drawn from the decay data of Hs, Hs, and Hs. In 1997, Polish physicist Robert Smolańczuk calculated that the isotope Hs may be the most stable superheavy nucleus against alpha decay and spontaneous fission as a consequence of the predicted N=184 shell closure. Natural occurrence Hassium is not known to occur naturally on Earth; all its known isotopes are so short-lived that no primordial hassium would have survived to the present day. This does not rule out the possibility of unknown, longer-lived isotopes or nuclear isomers, some of which could still exist in trace quantities if they are long-lived enough. As early as 1914, German physicist Richard Swinne proposed element 108 as a source of X-rays in the Greenland ice sheet. Though Swinne was unable to verify this observation and thus did not claim discovery, he proposed in 1931 the existence of "regions" of long-lived transuranic elements, including one around Z=108. In 1963, Soviet geologist and physicist Viktor Cherdyntsev, who had previously claimed the existence of primordial curium-247, claimed to have discovered element 108—specifically the 267108 isotope, which supposedly had a half-life of 400 to 500 million years—in natural molybdenite and suggested the provisional name sergenium (symbol Sg); this name comes from a name for the Silk Road and was explained as "coming from Kazakhstan" for it. His rationale for claiming that sergenium was the heavier homologue to osmium was that minerals supposedly containing sergenium formed volatile oxides when boiled in nitric acid, similarly to osmium. Soviet physicist Vladimir Kulakov criticized Cherdyntsev's findings on the grounds that some of the properties Cherdyntsev claimed for sergenium were inconsistent with then-current nuclear physics.
The chief questions Kulakov raised were that the claimed alpha decay energy of sergenium was many orders of magnitude lower than expected and that the half-life given was eight orders of magnitude shorter than what would be predicted for a nuclide alpha-decaying with the claimed decay energy. At the same time, a correspondingly corrected, much longer half-life would be impossible because it would imply that the samples contained ~100 milligrams of sergenium. In 2003, it was suggested that the observed alpha decay with energy 4.5 MeV could be due to a low-energy and strongly enhanced transition between different hyperdeformed states of a hassium isotope around Hs, thus suggesting that the existence of superheavy elements in nature was at least possible, but unlikely. In 2006, Russian geologist Alexei Ivanov hypothesized that an isomer of Hs might have a long half-life, which would explain the observation of alpha particles with energies of ~4.4 MeV in some samples of molybdenite and osmiridium. This isomer of Hs could be produced from the beta decay of Bh and Sg, which, being homologous to rhenium and molybdenum respectively, should occur in molybdenite along with rhenium and molybdenum if they occurred in nature. Because hassium is homologous to osmium, it should occur along with osmium in osmiridium if it occurs in nature. The decay chains of Bh and Sg are hypothetical, and the predicted half-life of this hypothetical hassium isomer is not long enough for any sufficient quantity to remain on Earth. It is possible that more Hs may be deposited on the Earth as the Solar System travels through the spiral arms of the Milky Way; this would explain excesses of plutonium-239 found on the ocean floors of the Pacific Ocean and the Gulf of Finland. However, minerals enriched with Hs are predicted to have excesses of its daughters uranium-235 and lead-207; they would also have different proportions of elements that are formed by spontaneous fission, such as krypton, zirconium, and xenon. The natural occurrence of hassium in minerals such as molybdenite and osmiridium is theoretically possible, but very unlikely. In 2004, JINR started a search for natural hassium in the Modane Underground Laboratory in Modane, Auvergne-Rhône-Alpes, France; this was done underground to avoid interference and false positives from cosmic rays. In 2008–09, an experiment run in the laboratory resulted in the detection of several registered events of neutron multiplicity (the number of free neutrons emitted after a nucleus is hit by a neutron and fissions) above three in natural osmium, and in 2012–13, these findings were reaffirmed in another experiment run in the laboratory. These results hinted that natural hassium could potentially exist in nature in amounts that would allow its detection by means of analytical chemistry, but this conclusion is based on the explicit assumption that there is a long-lived hassium isotope to which the registered events could be attributed. Since Hs may be particularly stable against alpha decay and spontaneous fission, it was considered a candidate to exist in nature. This nuclide, however, is predicted to be very unstable toward beta decay, and any beta-stable isotopes of hassium, such as Hs, would be too unstable in the other decay channels to be observed in nature. A 2012 search for Hs in nature along with its homologue osmium at the Maier-Leibnitz Laboratory in Garching, Bavaria, Germany, was unsuccessful, setting an upper limit on the amount of hassium that could be present per gram of osmium.
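The quantity argument attributed to Kulakov above follows from the standard relation between activity and half-life (a textbook relation, not a claim from the cited work): the number of atoms N needed to sustain an observed decay rate A is N = A·t1/2/ln 2, and the corresponding mass is m = N·M/N_A, where M is the molar mass and N_A is the Avogadro constant. For a fixed observed activity, the required amount of material grows in direct proportion to the assumed half-life, which is why lengthening the half-life by many orders of magnitude would demand implausibly large quantities of the element in the samples.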
Predicted properties Various calculations suggest hassium should be the heaviest group 8 element discovered so far, consistent with the periodic law. Its properties should generally match those expected for a heavier homologue of osmium; as is the case for all transactinides, a few deviations are expected to arise from relativistic effects. Very few properties of hassium or its compounds have been measured; this is due to its extremely limited and expensive production and the fact that hassium (and its parents) decays very quickly. A few singular chemistry-related properties have been measured, such as the enthalpy of adsorption of hassium tetroxide, but the properties of hassium metal remain unknown and only predictions are available. Relativistic effects Relativistic effects in hassium should arise due to the high charge of its nuclei, which causes the electrons around the nucleus to move faster—so fast that their speed is comparable to the speed of light. There are three main effects: the direct relativistic effect, the indirect relativistic effect, and spin–orbit splitting. (The existing calculations do not account for Breit interactions, but those are negligible, and their omission can only result in an uncertainty of the current calculations of no more than 2%.) As atomic number increases, so does the electrostatic attraction between an electron and the nucleus. This causes the velocity of the electron to increase, which leads to an increase in its mass. This in turn leads to contraction of the atomic orbitals, most specifically the s and p orbitals. Their electrons become more closely attached to the atom and harder to pull from the nucleus. This is the direct relativistic effect. It was originally thought to be strong only for the innermost electrons, but was later established to significantly influence valence electrons as well. Since the s and p orbitals are closer to the nucleus, they take a bigger portion of the electric charge of the nucleus on themselves ("shield" it). This leaves less charge for the attraction of the remaining electrons, whose orbitals therefore expand, making them easier to pull from the nucleus. This is the indirect relativistic effect. As a result of the combination of the direct and indirect relativistic effects, the Hs ion, compared to the neutral atom, lacks a 6d electron, rather than a 7s electron. In comparison, Os lacks a 6s electron compared to the neutral atom. The ionic radius (in oxidation state +8) of hassium is greater than that of osmium because of the relativistic expansion of the 6p orbitals, which are the outermost orbitals for an Hs ion (although in practice such highly charged ions would be too polarized in chemical environments to have much reality). There are several kinds of electron orbitals, denoted s, p, d, and f (g orbitals are expected to start being chemically active among elements after element 120). Each of these corresponds to an azimuthal quantum number l: s to 0, p to 1, d to 2, and f to 3. Every electron also corresponds to a spin quantum number s, which may equal either +1/2 or −1/2. Thus, the total angular momentum quantum number j = l + s takes the values j = l ± 1/2 (except for l = 0, where both electrons in each orbital have j = 0 + 1/2 = 1/2).
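A simple Bohr-model estimate (an illustration only, not the relativistic calculations referred to above) conveys the size of these effects: the innermost electron of an element with atomic number Z moves at roughly v/c ≈ Z/137, so for hassium v/c ≈ 108/137 ≈ 0.79. The corresponding relativistic factor is γ = 1/√(1 − (v/c)^2) ≈ 1.6, and since the Bohr radius scales inversely with the electron mass, the innermost orbital contracts by roughly the same factor.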
The spin of an electron relativistically interacts with its orbit, and this interaction leads to a split of a subshell into two with different energies (the one with j = l − 1/2 is lower in energy, and its electrons are thus more difficult to extract): for instance, of the six 6p electrons, two become 6p1/2 and four become 6p3/2. This is the spin–orbit splitting (also called subshell splitting or jj coupling). It is most visible with p electrons, which do not play an important role in the chemistry of hassium, but those for d and f electrons are within the same order of magnitude (quantitatively, spin–orbit splitting is expressed in energy units, such as electronvolts). These relativistic effects are responsible for the expected increase of the ionization energy, decrease of the electron affinity, and increase of stability of the +8 oxidation state compared to osmium; without them, the trends would be reversed. Relativistic effects decrease the atomization energies of hassium compounds because the spin–orbit splitting of the d orbital lowers the binding energy between electrons and the nucleus and because relativistic effects decrease the ionic character in bonding. Physical and atomic The previous members of group 8 have high melting points: Fe, 1538°C; Ru, 2334°C; Os, 3033°C. Like them, hassium is predicted to be a solid at room temperature, though its melting point has not been precisely calculated. Hassium should crystallize in the hexagonal close-packed structure (c/a = 1.59), similarly to its lighter congener osmium. Pure metallic hassium is calculated to have a bulk modulus (resistance to uniform compression) of 450 GPa, comparable with that of diamond, 442 GPa. Hassium is expected to be one of the densest of the 118 known elements, with a predicted density of 27–29 g/cm3 vs. the 22.59 g/cm3 measured for osmium. Hassium's atomic radius is expected to be ≈126 pm. Due to relativistic stabilization of the 7s orbital and destabilization of the 6d orbital, the Hs ion is predicted to have an electron configuration of [Rn]5f6d7s, giving up a 6d electron instead of a 7s electron, which is the opposite of the behaviour of its lighter homologues. The Hs ion is expected to have the electron configuration [Rn]5f6d7s, analogous to that calculated for the Os ion. In chemical compounds, hassium is calculated to display bonding characteristic of a d-block element, whose bonding will be primarily executed by the 6d3/2 and 6d5/2 orbitals; compared to the elements from the previous periods, the 7s, 6p1/2, 6p3/2, and 7p orbitals should be more important. Chemical Hassium is the sixth member of the 6d series of transition metals and is expected to be much like the platinum group metals. Some of these properties were confirmed by gas-phase chemistry experiments. The group 8 elements display a wide variety of oxidation states, but ruthenium and osmium readily display their group oxidation state of +8; this state becomes more stable down the group. This oxidation state is extremely rare: among stable elements, only ruthenium, osmium, and xenon are able to attain it in reasonably stable compounds. Hassium is expected to follow its congeners and have a stable +8 state, but like them it should show lower stable oxidation states such as +6, +4, +3, and +2. Hassium(IV) is expected to be more stable than hassium(VIII) in aqueous solution. Hassium should be a rather noble metal. The standard reduction potential for the Hs4+/Hs couple is expected to be 0.4 V. The group 8 elements show a distinctive oxide chemistry.
All the lighter members have known or hypothetical tetroxides, MO4. Their oxidizing power decreases as one descends the group. FeO4 is not known due to its extraordinarily large electron affinity—the amount of energy released when an electron is added to a neutral atom or molecule to form a negative ion—which results in the formation of the well-known oxyanion ferrate(VI). Ruthenium tetroxide, RuO4, which is formed by oxidation of ruthenium(VI) in acid, readily undergoes reduction to ruthenate(VI). Oxidation of ruthenium metal in air forms the dioxide, RuO2. In contrast, osmium burns to form the stable tetroxide, OsO4, which complexes with the hydroxide ion to form an osmium(VIII) -ate complex, [OsO4(OH)2]2−. Therefore, hassium should behave as a heavier homologue of osmium by forming a stable, very volatile tetroxide, HsO4, which undergoes complexation with hydroxide to form a hassate(VIII), [HsO4(OH)2]2−. Ruthenium tetroxide and osmium tetroxide are both volatile due to their symmetrical tetrahedral molecular geometry and because they are charge-neutral; hassium tetroxide should similarly be a very volatile solid. The trend of the volatilities of the group 8 tetroxides is experimentally known to be RuO4 < OsO4 > HsO4, which confirms the calculated results. In particular, the calculated enthalpy of adsorption—the energy required for the adhesion of atoms, molecules, or ions from a gas, liquid, or dissolved solid to a surface—of HsO4 on quartz, −(45.4±1) kJ/mol, agrees very well with the experimental value of −(46±2) kJ/mol. Experimental chemistry The first goal for chemical investigation was the formation of the tetroxide; it was chosen because ruthenium and osmium form volatile tetroxides, being the only transition metals to display a stable compound in the +8 oxidation state. Despite this selection for gas-phase chemical studies being clear from the beginning, chemical characterization of hassium was considered a difficult task for a long time. Although hassium was first synthesized in 1984, it was not until 1996 that a hassium isotope long-lived enough to allow chemical studies was synthesized. Unfortunately, this isotope, Hs, was synthesized indirectly from the decay of Cn; not only are indirect synthesis methods not favourable for chemical studies, but the reaction that produced the isotope Cn had a low yield—its cross section was only 1 pb—and thus did not provide enough hassium atoms for a chemical investigation. Direct synthesis of Hs and Hs in the reaction Cm(Mg,xn)Hs (x=4 or 5) appeared more promising because the cross section for this reaction was somewhat larger, at 7 pb. This yield was still around ten times lower than that for the reaction used for the chemical characterization of bohrium. New techniques for irradiation, separation, and detection had to be introduced before hassium could be successfully characterized chemically. Ruthenium and osmium have very similar chemistry due to the lanthanide contraction, but iron shows some differences from them; for example, although ruthenium and osmium form stable tetroxides in which the metal is in the +8 oxidation state, iron does not. In preparation for the chemical characterization of hassium, research focused on ruthenium and osmium rather than iron, because hassium was expected to be similar to ruthenium and osmium, as the predicted data on hassium closely matched that of those two. The first chemistry experiments were performed using gas thermochromatography in 2001, using the synthetic osmium radioisotopes Os as a reference.
During the experiment, seven hassium atoms were synthesized using the reactions Cm(Mg,5n)Hs and Cm(Mg,4n)Hs. They were then thermalized and oxidized in a mixture of helium and oxygen gases to form hassium tetroxide molecules: Hs + 2 O2 → HsO4. The measured deposition temperature of hassium tetroxide was higher than that of osmium tetroxide, which indicated the former was the less volatile one, and this placed hassium firmly in group 8. The measured enthalpy of adsorption for HsO4 was significantly lower than the predicted value, indicating that OsO4 is more volatile than HsO4 and contradicting earlier calculations that implied they should have very similar volatilities; the corresponding value for OsO4 has also been measured for comparison. (The calculations that yielded a closer match to the experimental data came after the experiment, in 2008.) It is possible that hassium tetroxide interacts differently with silicon nitride than with silicon dioxide, the chemicals used for the detector; further research is required to establish whether there is a difference between such interactions and whether it has influenced the measurements. Such research would include more accurate measurements of the nuclear properties of Hs and comparisons with RuO4 in addition to OsO4. In 2004, scientists reacted hassium tetroxide and sodium hydroxide to form sodium hassate(VIII), a reaction that is well known with osmium. This was the first acid–base reaction with a hassium compound, forming sodium hassate(VIII): HsO4 + 2 NaOH → Na2[HsO4(OH)2]. The team from the University of Mainz planned in 2008 to study the electrodeposition of hassium atoms using the new TASCA facility at GSI. Their aim was to use the reaction Ra(Ca,4n)Hs. Scientists at GSI were hoping to use TASCA to study the synthesis and properties of the hassium(II) compound hassocene, Hs(C5H5)2, using the reaction Ra(Ca,xn). This compound is analogous to the lighter compounds ferrocene, ruthenocene, and osmocene, and is expected to have the two cyclopentadienyl rings in an eclipsed conformation like ruthenocene and osmocene and not in a staggered conformation like ferrocene. Hassocene, which is expected to be a stable and highly volatile compound, was chosen because it has hassium in the low formal oxidation state of +2—although the bonding between the metal and the rings is mostly covalent in metallocenes—rather than the high +8 state that had previously been investigated, and relativistic effects were expected to be stronger in the lower oxidation state. The highly symmetrical structure of hassocene and its low number of atoms make relativistic calculations easier. To date, there are no experimental reports of hassocene.
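The picobarn-scale cross sections mentioned above set the production rate through the standard relation R = σ·Φ·n_t, where Φ is the beam intensity (ions per second) and n_t is the areal density of target atoms (atoms per cm^2). As a purely illustrative estimate with assumed round numbers (not the actual parameters of the experiments described): σ = 1 pb = 10^−36 cm^2, Φ = 3×10^12 ions per second and n_t = 10^18 atoms per cm^2 give R ≈ 3×10^−6 atoms per second, or roughly one atom every few days, which is why only a handful of hassium atoms were ever available for chemistry.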
Physical sciences
Group 8
Chemistry
13767
https://en.wikipedia.org/wiki/Hydra%20%28genus%29
Hydra (genus)
Hydra ( ) is a genus of small freshwater hydrozoans of the phylum Cnidaria. They are native to the temperate and tropical regions. The genus was named by Linnaeus in 1758 after the Hydra, which was the many-headed beast of myth defeated by Heracles, as when the animal has a part severed, it will regenerate much like the mythical hydra's heads. Biologists are especially interested in Hydra because of their regenerative ability; they do not appear to die of old age, or to age at all. Morphology Hydra has a tubular, radially symmetric body up to long when extended, secured by a simple adhesive foot known as the basal disc. Gland cells in the basal disc secrete a sticky fluid that accounts for its adhesive properties. At the free end of the body is a mouth opening surrounded by one to twelve thin, mobile tentacles. Each tentacle, or cnida (plural: cnidae), is clothed with highly specialised stinging cells called cnidocytes. Cnidocytes contain specialized structures called nematocysts, which look like miniature light bulbs with a coiled thread inside. At the narrow outer edge of the cnidocyte is a short trigger hair called a cnidocil. Upon contact with prey, the contents of the nematocyst are explosively discharged, firing a dart-like thread containing neurotoxins into whatever triggered the release. This can paralyze the prey, especially if many hundreds of nematocysts are fired. Hydra has two main body layers, which makes it "diploblastic". The layers are separated by mesoglea, a gel-like substance. The outer layer is the epidermis, and the inner layer is called the gastrodermis, because it lines the stomach. The cells making up these two body layers are relatively simple. Hydramacin is a bactericide recently discovered in Hydra; it protects the outer layer against infection. A single Hydra is composed of 50,000 to 100,000 cells which consist of three specific stem cell populations that create many different cell types. These stem cells continually renew themselves in the body column. Hydras have two significant structures on their body: the "head" and the "foot". When a Hydra is cut in half, each half regenerates and forms into a small Hydra; the "head" regenerates a "foot" and the "foot" regenerates a "head". If the Hydra is sliced into many segments then the middle slices form both a "head" and a "foot". Respiration and excretion occur by diffusion throughout the surface of the epidermis, while larger excreta are discharged through the mouth. Nervous system The nervous system of Hydra is a nerve net, which is structurally simple compared to more derived animal nervous systems. Hydra does not have a recognizable brain or true muscles. Nerve nets connect sensory photoreceptors and touch-sensitive nerve cells located in the body wall and tentacles. The structure of the nerve net has two levels: level 1 – sensory cells or internal cells; and level 2 – interconnected ganglion cells synapsed to epithelial or motor cells. Some have only two sheets of neurons. Motion and locomotion If Hydra are alarmed or attacked, the tentacles can be retracted to small buds, and the body column itself can be retracted to a small gelatinous sphere. Hydra generally react in the same way regardless of the direction of the stimulus, and this may be due to the simplicity of the nerve nets. Hydra are generally sedentary or sessile, but do occasionally move quite readily, especially when hunting. They have two distinct methods for moving – 'looping' and 'somersaulting'. 
In looping, they bend over, attach themselves to the substrate with the mouth and tentacles, and then relocate the foot, which provides the usual attachment. In somersaulting, the body then bends over and makes a new place of attachment with the foot. By this process of "looping" or "somersaulting", a Hydra can move several inches (c. 100 mm) in a day. Hydra may also move by amoeboid motion of their bases or by detaching from the substrate and floating away in the current. Reproduction and life cycle Most hydra species do not have any gender system. Instead, when food is plentiful, many Hydra reproduce asexually by budding. The buds form from the body wall, grow into miniature adults and break away when mature. When a hydra is well fed, a new bud can form every two days. When conditions are harsh, often before winter or in poor feeding conditions, sexual reproduction occurs in some Hydra. Swellings in the body wall develop into either ovaries or testes. The testes release free-swimming gametes into the water, and these can fertilize the egg in the ovary of another individual. The fertilized eggs secrete a tough outer coating, and, as the adult dies (due to starvation or cold), these resting eggs fall to the bottom of the lake or pond to await better conditions, whereupon they hatch into nymph Hydra. Some Hydra species, like Hydra circumcincta and Hydra viridissima, are hermaphrodites and may produce both testes and ovaries at the same time. Many members of the Hydrozoa go through a body change from a polyp to an adult form called a medusa, which is usually the life stage where sexual reproduction occurs, but Hydra do not progress beyond the polyp phase. Feeding Hydra mainly feed on aquatic invertebrates such as Daphnia and Cyclops. While feeding, Hydra extend their body to maximum length and then slowly extend their tentacles. Despite their simple construction, the tentacles of Hydra are extraordinarily extensible and can be four to five times the length of the body. Once fully extended, the tentacles are slowly maneuvered around waiting for contact with a suitable prey animal. Upon contact, nematocysts on the tentacle fire into the prey, and the tentacle itself coils around the prey. Most of the tentacles join in the attack within 30 seconds to subdue the struggling prey. Within two minutes, the tentacles surround the prey and move it into the open mouth aperture. Within ten minutes, the prey is engulfed within the body cavity, and digestion commences. Hydra can stretch their body wall considerably. The feeding behaviour of Hydra demonstrates the sophistication of what appears to be a simple nervous system. Some species of Hydra exist in a mutualistic relationship with various types of unicellular algae. The algae are protected from predators by Hydra; in return, photosynthetic products from the algae are beneficial as a food source to Hydra, and even help to maintain the Hydra microbiome. Measuring the feeding response The feeding response in Hydra is induced by glutathione (specifically in the reduced state, GSH) released from damaged tissue of injured prey. There are several methods conventionally used for quantification of the feeding response. In some, the duration for which the mouth remains open is measured. Other methods rely on counting the number of Hydra among a small population showing the feeding response after addition of glutathione. Recently, an assay for measuring the feeding response in hydra has been developed.
In this method, the linear two-dimensional distance between the tip of the tentacle and the mouth of the hydra was shown to be a direct measure of the extent of the feeding response. This method has been validated using a starvation model, as starvation is known to cause enhancement of the Hydra feeding response. Predators The species Hydra oligactis is preyed upon by the flatworm Microstomum lineare. Tissue regeneration Hydras undergo morphallaxis (tissue regeneration) when injured or severed. Typically, Hydras reproduce by just budding off a whole new individual; the bud occurs around two-thirds of the way down the body axis. When a Hydra is cut in half, each half regenerates and forms into a small Hydra; the "head" regenerates a "foot" and the "foot" regenerates a "head". This regeneration occurs without cell division. If the Hydra is sliced into many segments, the middle slices form both a "head" and a "foot". The polarity of the regeneration is explained by two pairs of positional value gradients. There are both head and foot activation and inhibition gradients. The head activation and inhibition gradients work in the opposite direction to the pair of foot gradients. The evidence for these gradients was shown in the early 1900s with grafting experiments. The inhibitors for both gradients have been shown to be important in blocking bud formation. The location where the bud forms is where the gradients are low for both the head and the foot. Hydras are capable of regenerating from pieces of tissue from the body and additionally from re-aggregates of dissociated cells. This process takes place not only in the pieces of tissue excised from the body column, but also in re-aggregates of dissociated single cells. It was found that in these aggregates, cells initially distributed randomly undergo sorting and form two epithelial cell layers, in which the endodermal epithelial cells play more active roles in the process. Active mobility of these endodermal epithelial cells forms two layers in both the re-aggregate and the regenerating tip of the excised tissue. As these two layers are established, a patterning process takes place to form heads and feet. Non-senescence Daniel Martinez claimed in a 1998 article in Experimental Gerontology that Hydra are biologically immortal. This publication has been widely cited as evidence that Hydra do not senesce (do not age), and that they are proof of the existence of non-senescing organisms generally. In 2010, Preston Estep published (also in Experimental Gerontology) a letter to the editor arguing that the Martinez data refutes the hypothesis that Hydra do not senesce. The controversial unlimited lifespan of Hydra has attracted much attention from scientists. Research today appears to confirm Martinez' study. Hydra stem cells have a capacity for indefinite self-renewal. The transcription factor "forkhead box O" (FoxO) has been identified as a critical driver of the continuous self-renewal of Hydra. In experiments, a drastically reduced population growth resulted from FoxO down-regulation. In bilaterally symmetrical organisms (Bilateria), the transcription factor FoxO affects stress response, lifespan, and increase in stem cells. If this transcription factor is knocked down in bilaterian model organisms, such as fruit flies and nematodes, their lifespan is significantly decreased. In experiments on H.
vulgaris (a radially symmetrical member of phylum Cnidaria), when FoxO levels were decreased, there was a negative effect on many key features of the Hydra, but no death was observed, thus it is believed other factors may contribute to the apparent lack of aging in these creatures. DNA repair Hydra are capable of two types of DNA repair: nucleotide excision repair and base excision repair. The repair pathways facilitate DNA replication by removing DNA damage. Their identification in hydra was based, in part, on the presence in its genome of genes homologous to ones present in other genetically well studied species playing key roles in these DNA repair pathways. Genomics An ortholog comparison analysis done within the last decade demonstrated that Hydra share a minimum of 6,071 genes with humans. Hydra is becoming an increasingly better model system as more genetic approaches become available. Transgenic hydra have become attractive model organisms to study the evolution of immunity. A draft of the genome of Hydra magnipapillata was reported in 2010. The genomes of cnidarians are usually less than 500 Mb (megabases) in size, as in the Hydra viridissima, which has a genome size of approximately 300 Mb. In contrast, the genomes of brown hydras are approximately 1 Gb in size. This is because the brown hydra genome is the result of an expansion event involving LINEs, a type of transposable elements, in particular, a single family of the CR1 class. This expansion is unique to this subgroup of the genus Hydra and is absent in the green hydra, which has a repeating landscape similar to other cnidarians. These genome characteristics make Hydra attractive for studies of transposon-driven speciations and genome expansions. Due to the simplicity of their life cycle when compared to other hydrozoans, hydras have lost many genes that correspond to cell types or metabolic pathways of which the ancestral function is still unknown. Hydra genome shows a preference towards proximal promoters. Thanks to this feature, many reporter cell lines have been created with regions around 500 to 2000 bases upstream of the gene of interest. Its cis-regulatory elements (CRE) are mostly located less than 2000 base pairs upstream from the closest transcription initiation site, but there are CREs located further away. Its chromatin has a Rabl configuration. There are interactions between the centromeres of different chromosomes and the centromeres and telomeres of the same chromosome. It presents a great number of intercentromeric interactions when compared to other cnidarians, probably due to the loss of multiple subunits of condensin II. It is organized in domains that span dozens to hundreds of megabases, containing epigenetically co-regulated genes and flanked by boundaries located within heterochromatin. Transcriptomics Different Hydra cell types express gene families of different evolutionary ages. Progenitor cells (stem cells, neuron and nematocyst precursors, and germ cells) express genes from families that predate metazoans. Among differentiated cells some express genes from families that date from the base of metazoans, like gland and neuronal cells, and others express genes from newer families, originating from the base of cnidaria or medusozoa, like nematocysts. Interstitial cells contain translation factors with a function that has been conserved for at least 400 million years.
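The reporter constructs described above rely on taking a window of sequence immediately upstream of a gene of interest. A minimal sketch of that bookkeeping is shown below; the function name, the mock scaffold, and the coordinates are made up for illustration and are not taken from the Hydra genome papers.

def upstream_region(sequence, tss, strand, length=2000):
    # Return up to `length` bases immediately upstream of a transcription
    # start site `tss` (0-based) on a linear scaffold; `strand` is '+' or '-'.
    if strand == '+':
        return sequence[max(0, tss - length):tss]
    # On the minus strand, "upstream" lies to the right of the start site;
    # return the reverse complement so the region reads 5' to 3'.
    complement = str.maketrans('ACGTacgt', 'TGCAtgca')
    region = sequence[tss + 1:tss + 1 + length]
    return region.translate(complement)[::-1]

# Toy example: a mock 5,000-base scaffold with a gene starting at position 3000.
mock_scaffold = 'ACGT' * 1250
promoter = upstream_region(mock_scaffold, 3000, '+')
print(len(promoter))  # 2000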
Biology and health sciences
Cnidarians
Animals
13768
https://en.wikipedia.org/wiki/Hydrus
Hydrus
Hydrus is a small constellation in the deep southern sky. It was one of twelve constellations created by Petrus Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman and it first appeared on a 35-cm (14 in) diameter celestial globe published in late 1597 (or early 1598) in Amsterdam by Plancius and Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in Johann Bayer's Uranometria of 1603. The French explorer and astronomer Nicolas Louis de Lacaille charted the brighter stars and gave their Bayer designations in 1756. Its name means "male water snake", as opposed to Hydra, a much larger constellation that represents a female water snake. It remains below the horizon for most Northern Hemisphere observers. The brightest star is the 2.8-magnitude Beta Hydri, also the closest reasonably bright star to the south celestial pole. Pulsating between magnitude 3.26 and 3.33, Gamma Hydri is a variable red giant 60 times the diameter of the Sun. Lying near it is VW Hydri, one of the brightest dwarf novae in the heavens. Four star systems in Hydrus have been found to have exoplanets to date, including HD 10180, which could bear up to nine planetary companions. History Hydrus was one of the twelve constellations established by the astronomer Petrus Plancius from the observations of the southern sky by the Dutch explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35-cm (14 in) diameter celestial globe published in late 1597 (or early 1598) in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in the German cartographer Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name De Waterslang, "The Water Snake", it representing a type of snake encountered on the expedition rather than a mythical creature. The French explorer and astronomer Nicolas Louis de Lacaille called it l’Hydre Mâle on the 1756 version of his planisphere of the southern skies, distinguishing it from the feminine Hydra. The French name was retained by Jean Fortin in 1776 for his Atlas Céleste, while Lacaille Latinised the name to Hydrus for his revised Coelum Australe Stelliferum in 1763. Characteristics Irregular in shape, Hydrus is bordered by Mensa to the southeast, Eridanus to the east, Horologium and Reticulum to the northeast, Phoenix to the north, Tucana to the northwest and west, and Octans to the south; Lacaille had shortened Hydrus' tail to make space for this last constellation he had drawn up. Covering 243 square degrees and 0.589% of the night sky, it ranks 61st of the 88 constellations in size. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Hyi". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 12 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −57.85° and −82.06°. As one of the deep southern constellations, it remains below the horizon at latitudes north of the 30th parallel in the Northern Hemisphere, and is circumpolar at latitudes south of the 50th parallel in the Southern Hemisphere. 
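As a quick check of the quoted figures (using the standard value of about 41,253 square degrees for the whole celestial sphere, which is not stated in the text above): 243/41,253 ≈ 0.0059, i.e. the 0.589% of the night sky given for Hydrus.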
Herman Melville mentions it and Argo Navis in Moby Dick "beneath effulgent Antarctic Skies", highlighting his knowledge of the southern constellations from whaling voyages. A line drawn from the long axis of the Southern Cross to Beta Hydri and then extended 4.5 times will mark a point due south. Hydrus culminates at midnight around 26 October. Features Stars Keyser and de Houtman assigned fifteen stars to the constellation in their Malay and Madagascan vocabulary, with a star that would later be designated Alpha Hydri marking the head, Gamma the chest, and a number of stars that were later allocated to Tucana, Reticulum, Mensa and Horologium marking the body and tail. Lacaille charted and designated 20 stars with the Bayer designations Alpha through to Tau in 1756. Of these, he used the designations Eta, Pi and Tau twice each, for three sets of two stars close together, and omitted Omicron and Xi. He assigned Rho to a star that subsequent astronomers were unable to find. Beta Hydri, the brightest star in Hydrus, is a yellow star of apparent magnitude 2.8, lying 24 light-years from Earth. It has about 104% of the mass of the Sun and 181% of the Sun's radius, with more than three times the Sun's luminosity. The spectrum of this star matches a stellar classification of G2 IV, with the luminosity class of 'IV' indicating this is a subgiant star. As such, it is a slightly more evolved star than the Sun, with the supply of hydrogen fuel at its core becoming exhausted. It is the nearest subgiant star to the Sun and one of the oldest stars in the solar neighbourhood. Thought to be between 6.4 and 7.1 billion years old, this star bears some resemblance to what the Sun may look like in the far distant future, making it an object of interest to astronomers. It is also the closest bright star to the south celestial pole. Located at the northern edge of the constellation and just southwest of Achernar is Alpha Hydri, a white sub-giant star of magnitude 2.9, situated 72 light-years from Earth. Of spectral type F0IV, it is beginning to cool and enlarge as it uses up its supply of hydrogen. It is twice as massive and 3.3 times as wide as the Sun and 26 times more luminous. A line drawn between Alpha Hydri and Beta Centauri is bisected by the south celestial pole. In the southeastern corner of the constellation is Gamma Hydri, a red giant of spectral type M2III located 214 light-years from Earth. It is a semi-regular variable star, pulsating between magnitudes 3.26 and 3.33. Observations over five years were not able to establish its periodicity. It is around 1.5 to 2 times as massive as the Sun, and has expanded to about 60 times the Sun's diameter. It shines with about 655 times the luminosity of the Sun. Located 3° northeast of Gamma is VW Hydri, a dwarf nova of the SU Ursae Majoris type. It is a close binary system that consists of a white dwarf and another star, the former drawing off matter from the latter into a bright accretion disk. These systems are characterised by frequent eruptions and less frequent supereruptions. The former are smooth, while the latter exhibit short "superhumps" of heightened activity. One of the brightest dwarf novae in the sky, it has a baseline magnitude of 14.4 and can brighten to magnitude 8.4 during peak activity. BL Hydri is another close binary system composed of a low-mass star and a strongly magnetic white dwarf.
Known as polars or AM Herculis variables, such systems produce polarized optical and infrared emission and intense soft and hard X-ray emission at the frequency of the white dwarf's rotation period—in this case 113.6 minutes. There are two notable optical double stars in Hydrus. Pi Hydri, composed of Pi1 Hydri and Pi2 Hydri, is divisible in binoculars. Around 476 light-years distant, Pi1 is a red giant of spectral type M1III that varies between magnitudes 5.52 and 5.58. Pi2 is an orange giant of spectral type K2III shining with a magnitude of 5.7, around 488 light-years from Earth. Eta Hydri is the other optical double, composed of Eta1 and Eta2. Eta1 is a blue-white main sequence star of spectral type B9V that was suspected of being variable, and is located just over 700 light-years away. Eta2 has a magnitude of 4.7 and is a yellow giant star of spectral type G8.5III around 218 light-years distant, which has evolved off the main sequence and is expanding and cooling on its way to becoming a red giant. Calculations of its mass indicate it was most likely a white A-type main sequence star for most of its existence, around twice the mass of the Sun. A planet, Eta2 Hydri b, greater than 6.5 times the mass of Jupiter, was discovered in 2005, orbiting around Eta2 every 711 days at a distance of 1.93 astronomical units (AU). Three other systems have been found to have planets, most notably the Sun-like star HD 10180, which has seven planets, plus possibly an additional two for a total of nine—as of 2012 more than any other system to date, including the Solar System. Lying around from the Earth, it has an apparent magnitude of 7.33. GJ 3021 is a solar twin—a star very like the Sun—around 57 light-years distant with a spectral type G8V and magnitude of 6.7. It has a Jovian planet companion (GJ 3021 b). Orbiting about 0.5 AU from its star, it has a minimum mass 3.37 times that of Jupiter and a period of around 133 days. The system is a complex one, as the faint star GJ 3021B orbits at a distance of 68 AU; it is a red dwarf of spectral type M4V. HD 20003 is a star of magnitude 8.37. It is a yellow main sequence star of spectral type G8V, a little cooler and smaller than the Sun, around 143 light-years away. It has two planets that are around 12 and 13.5 times as massive as the Earth, with periods of just under 12 and 34 days respectively. Deep-sky objects Hydrus contains only faint deep-sky objects. IC 1717 was a deep-sky object discovered by the Danish astronomer John Louis Emil Dreyer in the late 19th century. The object at the coordinates Dreyer observed is no longer there, and its identity is now a mystery. It was very likely to have been a faint comet. PGC 6240, known as the White Rose Galaxy, is a giant spiral galaxy surrounded by shells resembling rose petals, located around 345 million light-years from the Solar System. Unusually, it has cohorts of globular clusters of three distinct ages, suggesting bouts of post-starburst formation following a merger with another galaxy. The constellation also contains a spiral galaxy, NGC 1511, which lies edge-on to observers on Earth and is readily viewed in amateur telescopes. Located mostly in Dorado, the Large Magellanic Cloud extends into Hydrus. The globular cluster NGC 1466 is an outlying component of the galaxy, and contains many RR Lyrae-type variable stars. It has a magnitude of 11.59 and is thought to be over 12 billion years old. Two stars, HD 24188 of magnitude 6.3 and HD 24115 of magnitude 9.0, lie nearby in its foreground.
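The apparent magnitudes quoted throughout this section are on a logarithmic scale; as a standard worked example (the relation itself is general, not taken from the sources above), a difference of Δm magnitudes corresponds to a brightness ratio of 10^(0.4·Δm). VW Hydri's rise from magnitude 14.4 to 8.4 during outburst, Δm = 6.0, therefore corresponds to a brightening by a factor of about 10^2.4 ≈ 250.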
NGC 602 is composed of an emission nebula and a young, bright open cluster of stars that is an outlying component on the eastern edge of the Small Magellanic Cloud, a satellite galaxy to the Milky Way. Most of the cloud is located in the neighbouring constellation Tucana.
Physical sciences
Other
Astronomy