481856
https://en.wikipedia.org/wiki/Rigid%20body%20dynamics
Rigid body dynamics
In the physical science of dynamics, rigid-body dynamics studies the movement of systems of interconnected bodies under the action of external forces. The assumption that the bodies are rigid (i.e. they do not deform under the action of applied forces) simplifies analysis, by reducing the parameters that describe the configuration of the system to the translation and rotation of reference frames attached to each body. This excludes bodies that display fluid, highly elastic, and plastic behavior. The dynamics of a rigid body system is described by the laws of kinematics and by the application of Newton's second law (kinetics) or their derivative form, Lagrangian mechanics. The solution of these equations of motion provides a description of the position, the motion and the acceleration of the individual components of the system, and overall the system itself, as a function of time. The formulation and solution of rigid body dynamics is an important tool in the computer simulation of mechanical systems. Planar rigid body dynamics If a system of particles moves parallel to a fixed plane, the system is said to be constrained to planar movement. In this case, Newton's laws (kinetics) for a rigid system of N particles, P, i=1,...,N, simplify because there is no movement in the k direction. Determine the resultant force and torque at a reference point R, to obtain where r denotes the planar trajectory of each particle. The kinematics of a rigid body yields the formula for the acceleration of the particle P in terms of the position R and acceleration A of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as, For systems that are constrained to planar movement, the angular velocity and angular acceleration vectors are directed along k perpendicular to the plane of movement, which simplifies this acceleration equation. In this case, the acceleration vectors can be simplified by introducing the unit vectors e from the reference point R to a point r and the unit vectors , so This yields the resultant force on the system as and torque as where and is the unit vector perpendicular to the plane for all of the particles P. Use the center of mass C as the reference point, so these equations for Newton's laws simplify to become where is the total mass and I is the moment of inertia about an axis perpendicular to the movement of the rigid system and through the center of mass. Rigid body in three dimensions Orientation or attitude descriptions Several methods to describe orientations of a rigid body in three dimensions have been developed. They are summarized in the following sections. Euler angles The first attempt to represent an orientation is attributed to Leonhard Euler. He imagined three reference frames that could rotate one around the other, and realized that by starting with a fixed reference frame and performing three rotations, he could get any other reference frame in the space (using two rotations to fix the vertical axis and another to fix the other two axes). The values of these three rotations are called Euler angles. Commonly, is used to denote precession, nutation, and intrinsic rotation. Tait–Bryan angles These are three angles, also known as yaw, pitch and roll, Navigation angles and Cardan angles. Mathematically they constitute a set of six possibilities inside the twelve possible sets of Euler angles, the ordering being the one best used for describing the orientation of a vehicle such as an airplane. 
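A short sketch may make the Tait-Bryan description concrete: the code below composes three elementary rotations (yaw about z, pitch about y, roll about x) into a single orientation matrix. The intrinsic z-y'-x'' ordering and the numeric angles are illustrative assumptions rather than values taken from this article.

```python
import numpy as np

# Elementary rotation matrices about the coordinate axes.
def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Rx(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def tait_bryan_to_matrix(yaw, pitch, roll):
    """Orientation matrix for intrinsic z-y'-x'' (yaw-pitch-roll) rotations."""
    return Rz(yaw) @ Ry(pitch) @ Rx(roll)

R = tait_bryan_to_matrix(np.radians(30), np.radians(10), np.radians(-5))
print(np.round(R, 3))
print(np.allclose(R.T @ R, np.eye(3)))   # a rotation matrix is orthogonal
```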
In aerospace engineering they are usually referred to as Euler angles. Orientation vector Euler also realized that the composition of two rotations is equivalent to a single rotation about a different fixed axis (Euler's rotation theorem). Therefore, the composition of the former three angles has to be equal to only one rotation, whose axis was complicated to calculate until matrices were developed. Based on this fact he introduced a vectorial way to describe any rotation, with a vector on the rotation axis and module equal to the value of the angle. Therefore, any orientation can be represented by a rotation vector (also called Euler vector) that leads to it from the reference frame. When used to represent an orientation, the rotation vector is commonly called orientation vector, or attitude vector. A similar method, called axis-angle representation, describes a rotation or orientation using a unit vector aligned with the rotation axis, and a separate value to indicate the angle (see figure). Orientation matrix With the introduction of matrices the Euler theorems were rewritten. The rotations were described by orthogonal matrices referred to as rotation matrices or direction cosine matrices. When used to represent an orientation, a rotation matrix is commonly called orientation matrix, or attitude matrix. The above-mentioned Euler vector is the eigenvector of a rotation matrix (a rotation matrix has a unique real eigenvalue). The product of two rotation matrices is the composition of rotations. Therefore, as before, the orientation can be given as the rotation from the initial frame to achieve the frame that we want to describe. The configuration space of a non-symmetrical object in n-dimensional space is SO(n) × Rn. Orientation may be visualized by attaching a basis of tangent vectors to an object. The direction in which each vector points determines its orientation. Orientation quaternion Another way to describe rotations is using rotation quaternions, also called versors. They are equivalent to rotation matrices and rotation vectors. With respect to rotation vectors, they can be more easily converted to and from matrices. When used to represent orientations, rotation quaternions are typically called orientation quaternions or attitude quaternions. Newton's second law in three dimensions To consider rigid body dynamics in three-dimensional space, Newton's second law must be extended to define the relationship between the movement of a rigid body and the system of forces and torques that act on it. Newton formulated his second law for a particle as, "The change of motion of an object is proportional to the force impressed and is made in the direction of the straight line in which the force is impressed." Because Newton generally referred to mass times velocity as the "motion" of a particle, the phrase "change of motion" refers to the mass times acceleration of the particle, and so this law is usually written as where F is understood to be the only external force acting on the particle, m is the mass of the particle, and a is its acceleration vector. The extension of Newton's second law to rigid bodies is achieved by considering a rigid system of particles. Rigid system of particles If a system of N particles, Pi, i=1,...,N, are assembled into a rigid body, then Newton's second law can be applied to each of the particles in the body. 
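The equivalence between the axis-angle (Euler/rotation vector), rotation matrix, and quaternion descriptions above can be sketched in a few lines. The helper names are illustrative only; the last lines check Euler's rotation theorem numerically by composing two rotations and recovering a single equivalent axis and angle.

```python
import numpy as np

def axis_angle_to_matrix(axis, angle):
    """Rodrigues' formula: R = I + sin(a) K + (1 - cos(a)) K^2, with K = skew(axis)."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def matrix_to_axis_angle(R):
    """Recover the Euler axis and angle (valid when the angle is not 0 or pi)."""
    angle = np.arccos((np.trace(R) - 1) / 2)
    axis = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return axis / (2 * np.sin(angle)), angle

def axis_angle_to_quaternion(axis, angle):
    """Unit quaternion (w, x, y, z) = (cos(a/2), sin(a/2) * axis)."""
    k = np.asarray(axis, float)
    k = k / np.linalg.norm(k)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * k))

# Euler's rotation theorem: composing two rotations gives another single rotation.
R1 = axis_angle_to_matrix([0, 0, 1], np.pi / 4)      # 45 degrees about z
R2 = axis_angle_to_matrix([1, 0, 0], np.pi / 6)      # 30 degrees about x
axis, angle = matrix_to_axis_angle(R2 @ R1)          # one equivalent rotation
print(axis, np.degrees(angle))
print(axis_angle_to_quaternion(axis, angle))         # the same orientation as a versor
```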
If Fi is the external force applied to particle Pi with mass mi, then where Fij is the internal force of particle Pj acting on particle Pi that maintains the constant distance between these particles. An important simplification to these force equations is obtained by introducing the resultant force and torque that acts on the rigid system. This resultant force and torque is obtained by choosing one of the particles in the system as a reference point, R, where each of the external forces are applied with the addition of an associated torque. The resultant force F and torque T are given by the formulas, where Ri is the vector that defines the position of particle Pi. Newton's second law for a particle combines with these formulas for the resultant force and torque to yield, where the internal forces Fij cancel in pairs. The kinematics of a rigid body yields the formula for the acceleration of the particle Pi in terms of the position R and acceleration a of the reference particle as well as the angular velocity vector ω and angular acceleration vector α of the rigid system of particles as, Mass properties The mass properties of the rigid body are represented by its center of mass and inertia matrix. Choose the reference point R so that it satisfies the condition then it is known as the center of mass of the system. The inertia matrix [IR] of the system relative to the reference point R is defined by where is the column vector ; is its transpose, and is the 3 by 3 identity matrix. is the scalar product of with itself, while is the tensor product of with itself. Force-torque equations Using the center of mass and inertia matrix, the force and torque equations for a single rigid body take the form and are known as Newton's second law of motion for a rigid body. The dynamics of an interconnected system of rigid bodies, , , is formulated by isolating each rigid body and introducing the interaction forces. The resultant of the external and interaction forces on each body, yields the force-torque equations Newton's formulation yields 6M equations that define the dynamics of a system of M rigid bodies. Rotation in three dimensions A rotating object, whether under the influence of torques or not, may exhibit the behaviours of precession and nutation. The fundamental equation describing the behavior of a rotating solid body is Euler's equation of motion: where the pseudovectors τ and L are, respectively, the torques on the body and its angular momentum, the scalar I is its moment of inertia, the vector ω is its angular velocity, the vector α is its angular acceleration, D is the differential in an inertial reference frame and d is the differential in a relative reference frame fixed with the body. The solution to this equation when there is no applied torque is discussed in the articles Euler's equation of motion and Poinsot's ellipsoid. It follows from Euler's equation that a torque τ applied perpendicular to the axis of rotation, and therefore perpendicular to L, results in a rotation about an axis perpendicular to both τ and L. This motion is called precession. The angular velocity of precession ΩP is given by the cross product: Precession can be demonstrated by placing a spinning top with its axis horizontal and supported loosely (frictionless toward precession) at one end. 
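As an illustration of the mass properties just defined, the sketch below computes the center of mass and the inertia matrix of a small particle system from m_i ((d_i · d_i) E − d_i d_i^T) with d_i = r_i − C; the masses and positions in the example are made up.

```python
import numpy as np

def mass_properties(masses, positions):
    """Total mass, center of mass, and inertia matrix about the center of mass."""
    m = np.asarray(masses, float)
    r = np.asarray(positions, float)            # shape (N, 3)
    M = m.sum()
    C = (m[:, None] * r).sum(axis=0) / M        # center of mass
    I = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        d = ri - C
        I += mi * (np.dot(d, d) * np.eye(3) - np.outer(d, d))
    return M, C, I

# Example: four unit masses at the corners of a square in the x-y plane.
M, C, I = mass_properties([1, 1, 1, 1],
                          [[1, 1, 0], [-1, 1, 0], [-1, -1, 0], [1, -1, 0]])
print(M, C)
print(I)    # diagonal here; I_zz = I_xx + I_yy for a planar mass distribution
```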
Instead of falling, as might be expected, the top appears to defy gravity by remaining with its axis horizontal, when the other end of the axis is left unsupported and the free end of the axis slowly describes a circle in a horizontal plane, the resulting precession turning. This effect is explained by the above equations. The torque on the top is supplied by a couple of forces: gravity acting downward on the device's centre of mass, and an equal force acting upward to support one end of the device. The rotation resulting from this torque is not downward, as might be intuitively expected, causing the device to fall, but perpendicular to both the gravitational torque (horizontal and perpendicular to the axis of rotation) and the axis of rotation (horizontal and outwards from the point of support), i.e., about a vertical axis, causing the device to rotate slowly about the supporting point. Under a constant torque of magnitude τ, the speed of precession ΩP is inversely proportional to L, the magnitude of its angular momentum: where θ is the angle between the vectors ΩP and L. Thus, if the top's spin slows down (for example, due to friction), its angular momentum decreases and so the rate of precession increases. This continues until the device is unable to rotate fast enough to support its own weight, when it stops precessing and falls off its support, mostly because friction against precession cause another precession that goes to cause the fall. By convention, these three vectors - torque, spin, and precession - are all oriented with respect to each other according to the right-hand rule. Virtual work of forces acting on a rigid body An alternate formulation of rigid body dynamics that has a number of convenient features is obtained by considering the virtual work of forces acting on a rigid body. The virtual work of forces acting at various points on a single rigid body can be calculated using the velocities of their point of application and the resultant force and torque. To see this, let the forces F1, F2 ... Fn act on the points R1, R2 ... Rn in a rigid body. The trajectories of Ri, are defined by the movement of the rigid body. The velocity of the points Ri along their trajectories are where ω is the angular velocity vector of the body. Virtual work Work is computed from the dot product of each force with the displacement of its point of contact If the trajectory of a rigid body is defined by a set of generalized coordinates , , then the virtual displacements are given by The virtual work of this system of forces acting on the body in terms of the generalized coordinates becomes or collecting the coefficients of Generalized forces For simplicity consider a trajectory of a rigid body that is specified by a single generalized coordinate q, such as a rotation angle, then the formula becomes Introduce the resultant force F and torque T so this equation takes the form The quantity Q defined by is known as the generalized force associated with the virtual displacement δq. This formula generalizes to the movement of a rigid body defined by more than one generalized coordinate, that is where It is useful to note that conservative forces such as gravity and spring forces are derivable from a potential function , known as a potential energy. In this case the generalized forces are given by D'Alembert's form of the principle of virtual work The equations of motion for a mechanical system of rigid bodies can be determined using D'Alembert's form of the principle of virtual work. 
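A quick numerical check of the precession relation discussed above, Ω_P = τ / (L sin θ), using invented values for a small top with its spin axis held horizontal; it also shows why a slowing spin (smaller L) precesses faster.

```python
import numpy as np

# Illustrative values only: a 0.5 kg top pivoting 5 cm from its centre of mass,
# spinning at 50 revolutions per second about an axis held horizontal.
m, g, r = 0.5, 9.81, 0.05          # mass (kg), gravity (m/s^2), pivot-to-COM distance (m)
I_spin  = 2.0e-4                   # moment of inertia about the spin axis (kg m^2)
omega   = 2 * np.pi * 50           # spin rate (rad/s)

tau   = m * g * r                  # gravitational torque about the support point
L     = I_spin * omega             # spin angular momentum
theta = np.pi / 2                  # angle between Omega_P (vertical) and L (horizontal)

Omega_P = tau / (L * np.sin(theta))
print(f"precession rate: {Omega_P:.2f} rad/s "
      f"(one revolution every {2 * np.pi / Omega_P:.2f} s)")
# If friction halves the spin rate, L halves and the precession rate doubles.
```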
The principle of virtual work is used to study the static equilibrium of a system of rigid bodies, however by introducing acceleration terms in Newton's laws this approach is generalized to define dynamic equilibrium. Static equilibrium The static equilibrium of a mechanical system rigid bodies is defined by the condition that the virtual work of the applied forces is zero for any virtual displacement of the system. This is known as the principle of virtual work. This is equivalent to the requirement that the generalized forces for any virtual displacement are zero, that is Qi=0. Let a mechanical system be constructed from rigid bodies, Bi, , and let the resultant of the applied forces on each body be the force-torque pairs, and , . Notice that these applied forces do not include the reaction forces where the bodies are connected. Finally, assume that the velocity and angular velocities , , for each rigid body, are defined by a single generalized coordinate q. Such a system of rigid bodies is said to have one degree of freedom. The virtual work of the forces and torques, and , applied to this one degree of freedom system is given by where is the generalized force acting on this one degree of freedom system. If the mechanical system is defined by m generalized coordinates, , , then the system has m degrees of freedom and the virtual work is given by, where is the generalized force associated with the generalized coordinate . The principle of virtual work states that static equilibrium occurs when these generalized forces acting on the system are zero, that is These equations define the static equilibrium of the system of rigid bodies. Generalized inertia forces Consider a single rigid body which moves under the action of a resultant force F and torque T, with one degree of freedom defined by the generalized coordinate q. Assume the reference point for the resultant force and torque is the center of mass of the body, then the generalized inertia force associated with the generalized coordinate is given by This inertia force can be computed from the kinetic energy of the rigid body, by using the formula A system of rigid bodies with m generalized coordinates has the kinetic energy which can be used to calculate the m generalized inertia forces Dynamic equilibrium D'Alembert's form of the principle of virtual work states that a system of rigid bodies is in dynamic equilibrium when the virtual work of the sum of the applied forces and the inertial forces is zero for any virtual displacement of the system. Thus, dynamic equilibrium of a system of n rigid bodies with m generalized coordinates requires that for any set of virtual displacements . This condition yields equations, which can also be written as The result is a set of m equations of motion that define the dynamics of the rigid body system. Lagrange's equations If the generalized forces Qj are derivable from a potential energy , then these equations of motion take the form In this case, introduce the Lagrangian, , so these equations of motion become These are known as Lagrange's equations of motion. Linear and angular momentum System of particles The linear and angular momentum of a rigid system of particles is formulated by measuring the position and velocity of the particles relative to the center of mass. Let the system of particles Pi, be located at the coordinates ri and velocities vi. 
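As a minimal worked example of Lagrange's equations, the sketch below applies d/dt(∂L/∂q̇) − ∂L/∂q = 0 to a simple pendulum with the single generalized coordinate q = θ, for which the equation of motion reduces to q̈ = −(g/l) sin q; the constants and the simple integrator are illustrative choices, not part of the article.

```python
import numpy as np

# Simple pendulum: L = T - V = (1/2) m l^2 q_dot^2 + m g l cos(q).
# Lagrange's equation gives  q_ddot = -(g / l) * sin(q).
g, l = 9.81, 1.0

def q_ddot(q):
    return -(g / l) * np.sin(q)

q, qd, dt = np.radians(20.0), 0.0, 1e-3
for _ in range(10_000):            # integrate 10 s with semi-implicit Euler
    qd += q_ddot(q) * dt
    q  += qd * dt
print(f"theta after 10 s: {np.degrees(q):.2f} deg")
# Small-angle check: the period should be close to 2*pi*sqrt(l/g), about 2.0 s.
```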
Select a reference point R and compute the relative position and velocity vectors, The total linear and angular momentum vectors relative to the reference point R are and If R is chosen as the center of mass these equations simplify to Rigid system of particles To specialize these formulas to a rigid body, assume the particles are rigidly connected to each other so P, i=1,...,n are located by the coordinates r and velocities v. Select a reference point R and compute the relative position and velocity vectors, where ω is the angular velocity of the system. The linear momentum and angular momentum of this rigid system measured relative to the center of mass R is These equations simplify to become, where M is the total mass of the system and [I] is the moment of inertia matrix defined by where [ri − R] is the skew-symmetric matrix constructed from the vector ri − R. Applications For the analysis of robotic systems For the biomechanical analysis of animals, humans or humanoid systems For the analysis of space objects For the understanding of strange motions of rigid bodies. For the design and development of dynamics-based sensors, such as gyroscopic sensors. For the design and development of various stability enhancement applications in automobiles. For improving the graphics of video games which involves rigid bodies
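Returning to the momentum formulas above: with the center of mass C as reference point, the linear momentum is p = M v_C and the angular momentum is [I] ω, where [I] is assembled from the skew-symmetric matrices of r_i − C. The dumbbell at the end is an invented example, and the helper names are illustrative.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix [v] such that [v] @ w = v x w."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def momenta(masses, positions, v_C, omega):
    """Linear and angular momentum of a rigid particle system about its center of mass."""
    m = np.asarray(masses, float)
    r = np.asarray(positions, float)
    M = m.sum()
    C = (m[:, None] * r).sum(axis=0) / M
    I = np.zeros((3, 3))
    for mi, ri in zip(m, r):
        S = skew(ri - C)
        I += -mi * (S @ S)              # equivalent to mi * ((d.d) E - outer(d, d))
    p = M * np.asarray(v_C, float)      # linear momentum
    L = I @ np.asarray(omega, float)    # angular momentum about C
    return p, L

# Dumbbell: two 1 kg masses at x = +/-1 m, translating along y and spinning about z.
p, L = momenta([1, 1], [[1, 0, 0], [-1, 0, 0]], v_C=[0, 1, 0], omega=[0, 0, 2])
print(p, L)    # expect p = (0, 2, 0) kg m/s and L = (0, 0, 4) kg m^2/s
```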
Physical sciences
Solid mechanics
Physics
481862
https://en.wikipedia.org/wiki/Uranium-238
Uranium-238
Uranium-238 (238U or U-238) is the most common isotope of uranium found in nature, with a relative abundance of 99%. Unlike uranium-235, it is non-fissile, which means it cannot sustain a chain reaction in a thermal-neutron reactor. However, it is fissionable by fast neutrons, and is fertile, meaning it can be transmuted to fissile plutonium-239. 238U cannot support a chain reaction because inelastic scattering reduces neutron energy below the range where fast fission of one or more next-generation nuclei is probable. Doppler broadening of 238U's neutron absorption resonances, increasing absorption as fuel temperature increases, is also an essential negative feedback mechanism for reactor control. Around 99.284% of natural uranium's mass is uranium-238, which has a half-life of 1.41 × 10^17 seconds (4.468 × 10^9 years, or 4.468 billion years). Due to its natural abundance and half-life relative to other radioactive elements, 238U produces about 40% of the radioactive heat generated within the Earth. The 238U decay chain contributes six electron anti-neutrinos per 238U nucleus (one per beta decay), resulting in a large detectable geoneutrino signal when decays occur within the Earth. The decay of 238U to daughter isotopes is extensively used in radiometric dating, particularly for material older than approximately 1 million years. Depleted uranium has an even higher concentration of the 238U isotope, and even low-enriched uranium (LEU), while having a higher proportion of the uranium-235 isotope (in comparison to depleted uranium), is still mostly 238U. Reprocessed uranium is also mainly 238U, with about as much uranium-235 as natural uranium, a comparable proportion of uranium-236, and much smaller amounts of other isotopes of uranium such as uranium-234, uranium-233, and uranium-232. Nuclear energy applications In a fission nuclear reactor, uranium-238 can be used to generate plutonium-239, which itself can be used in a nuclear weapon or as a nuclear-reactor fuel supply. In a typical nuclear reactor, up to one-third of the generated power comes from the fission of 239Pu, which is not supplied as a fuel to the reactor, but rather produced from 238U. A certain amount of production of 239Pu from 238U is unavoidable wherever the latter is exposed to neutron radiation. Depending on burnup and neutron temperature, different shares of the 239Pu are converted to 240Pu, which determines the "grade" of the produced plutonium, ranging from weapons grade, through reactor grade, to plutonium so high in 240Pu that it cannot be used in current reactors operating with a thermal neutron spectrum. The latter usually involves used "recycled" MOX fuel which entered the reactor already containing significant amounts of plutonium. Breeder reactors 238U can produce energy via "fast" fission. In this process, a neutron that has a kinetic energy in excess of 1 MeV can cause the nucleus of 238U to split. Depending on design, this process can contribute some one to ten percent of all fission reactions in a reactor, but too few of the average 2.5 neutrons produced in each fission have enough speed to continue a chain reaction. 238U can be used as a source material for creating plutonium-239, which can in turn be used as nuclear fuel. Breeder reactors carry out such a process of transmutation to convert the fertile isotope 238U into fissile 239Pu. It has been estimated that there is anywhere from 10,000 to five billion years' worth of 238U for use in these power plants. Breeder technology has been used in several experimental nuclear reactors.
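A small back-of-the-envelope check of the half-life quoted above (for illustration only): the decay constant of 238U and the fraction of an initial sample surviving over a time span comparable to the age of the Earth.

```python
import math

half_life_s = 1.41e17                      # ~4.468e9 years, as quoted above
lam = math.log(2) / half_life_s            # decay constant (1/s)

years = 4.54e9                             # rough age of the Earth, used only as a time span
t_s = years * 365.25 * 24 * 3600
fraction = math.exp(-lam * t_s)

print(f"decay constant: {lam:.3e} 1/s")
print(f"fraction of 238U surviving after {years:.2e} yr: {fraction:.3f}")
# ~0.49, i.e. roughly half of the original 238U is still present today.
```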
By December 2005, the only breeder reactor producing power was the 600-megawatt BN-600 reactor at the Beloyarsk Nuclear Power Station in Russia. Russia later built another unit, BN-800, at the Beloyarsk Nuclear Power Station which became fully operational in November 2016. Also, Japan's Monju breeder reactor, which has been inoperative for most of the time since it was originally built in 1986, was ordered for decommissioning in 2016, after safety and design hazards were uncovered, with a completion date set for 2047. Both China and India have announced plans to build nuclear breeder reactors. The breeder reactor as its name implies creates even larger quantities of 239Pu or 233U than the fission nuclear reactor. The Clean And Environmentally Safe Advanced Reactor (CAESAR), a nuclear reactor concept that would use steam as a moderator to control delayed neutrons, will potentially be able to use 238U as fuel once the reactor is started with Low-enriched uranium (LEU) fuel. This design is still in the early stages of development. CANDU reactors Natural uranium, with 0.711% , is usable as nuclear fuel in reactors designed specifically to make use of naturally occurring uranium, such as CANDU reactors. By making use of non-enriched uranium, such reactor designs give a nation access to nuclear power for the purpose of electricity production without necessitating the development of fuel enrichment capabilities, which are often seen as a prelude to weapons production. Radiation shielding 238U is also used as a radiation shield – its alpha radiation is easily stopped by the non-radioactive casing of the shielding and the uranium's high atomic weight and high number of electrons are highly effective in absorbing gamma rays and X-rays. It is not as effective as ordinary water for stopping fast neutrons. Both metallic depleted uranium and depleted uranium dioxide are used for radiation shielding. Uranium is about five times better as a gamma ray shield than lead, so a shield with the same effectiveness can be packed into a thinner layer. DUCRETE, a concrete made with uranium dioxide aggregate instead of gravel, is being investigated as a material for dry cask storage systems to store radioactive waste. Downblending The opposite of enriching is downblending. Surplus highly enriched uranium can be downblended with depleted uranium or natural uranium to turn it into low-enriched uranium suitable for use in commercial nuclear fuel. 238U from depleted uranium and natural uranium is also used with recycled 239Pu from nuclear weapons stockpiles for making mixed oxide fuel (MOX), which is now being redirected to become fuel for nuclear reactors. This dilution, also called downblending, means that any nation or group that acquired the finished fuel would have to repeat the very expensive and complex chemical separation of uranium and plutonium process before assembling a weapon. Nuclear weapons Most modern nuclear weapons utilize 238U as a "tamper" material (see nuclear weapon design). A tamper which surrounds a fissile core works to reflect neutrons and to add inertia to the compression of the 239Pu charge. As such, it increases the efficiency of the weapon and reduces the critical mass required. In the case of a thermonuclear weapon, 238U can be used to encase the fusion fuel, the high flux of very energetic neutrons from the resulting fusion reaction causes 238U nuclei to split and adds more energy to the "yield" of the weapon. 
Such weapons are referred to as fission-fusion-fission weapons after the order in which each reaction takes place. An example of such a weapon is Castle Bravo. The larger portion of the total explosive yield in this design comes from the final fission stage fueled by 238U, producing enormous amounts of radioactive fission products. For example, an estimated 77% of the 10.4-megaton yield of the Ivy Mike thermonuclear test in 1952 came from fast fission of the depleted uranium tamper. Because depleted uranium has no critical mass, it can be added to thermonuclear bombs in almost unlimited quantity. The Soviet Union's test of the Tsar Bomba in 1961 produced "only" 50 megatons of explosive power, over 90% of which came from fusion because the 238U final stage had been replaced with lead. Had 238U been used instead, the yield of the Tsar Bomba could have been well above 100 megatons, and it would have produced nuclear fallout equivalent to one third of the global total that had been produced up to that time. Radium series (or uranium series) The decay chain of 238U is commonly called the "radium series" (sometimes "uranium series"). Beginning with naturally occurring uranium-238, this series includes the following elements: astatine, bismuth, lead, polonium, protactinium, radium, radon, thallium, and thorium. All of the decay products are present, at least transiently, in any uranium-containing sample, whether metal, compound, or mineral. Along its principal branch, the decay proceeds as 238U → 234Th → 234mPa → 234U → 230Th → 226Ra → 222Rn → 218Po → 214Pb → 214Bi → 214Po → 210Pb → 210Bi → 210Po → 206Pb (stable); minor branches account for the astatine and thallium members listed above. The mean lifetime of 238U is 1.41 × 10^17 seconds divided by ln(2) ≈ 0.693 (or multiplied by 1/ln(2) ≈ 1.443), i.e. ca. 2 × 10^17 seconds, so 1 mole of 238U emits about 3 × 10^6 alpha particles per second, producing the same number of thorium-234 atoms. In a closed system an equilibrium would be reached, with all amounts except for lead-206 and 238U in fixed ratios, in slowly decreasing amounts. The amount of 206Pb will increase accordingly while that of 238U decreases; all steps in the decay chain have this same rate of about 3 × 10^6 decays per second per mole of 238U. Thorium-234 has a mean lifetime of about 3 × 10^6 seconds, so there is equilibrium if one mole of 238U contains about 9 × 10^12 atoms of thorium-234, which is 1.5 × 10^-11 mole (the ratio of the two half-lives). Similarly, in an equilibrium in a closed system the amount of each decay product, except the end product lead, is proportional to its half-life. While 238U is minimally radioactive, its decay products, thorium-234 and protactinium-234, are beta particle emitters with half-lives of about 20 days and one minute respectively. Protactinium-234 decays to uranium-234, which has a half-life of hundreds of millennia, and this isotope does not reach an equilibrium concentration for a very long time. When the first two isotopes in the decay chain reach their relatively small equilibrium concentrations, a sample of initially pure 238U will emit three times as much radiation as that due to 238U itself, and most of this radiation consists of beta particles. As already touched upon above, when starting with pure 238U, within a human timescale the equilibrium applies for the first three steps in the decay chain only. Thus, for one mole of 238U, about 3 × 10^6 times per second one alpha and two beta particles and a gamma ray are produced, together 6.7 MeV, a rate of 3 μW. The 238U nucleus is itself a gamma emitter at 49.55 keV with a probability of 0.084%, but that is a very weak gamma line, so its activity is usually measured through the daughter nuclides in its decay series.
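The order-of-magnitude figures in the paragraph above can be reproduced in a few lines; note that the 24.1-day half-life of thorium-234 used below is a standard value that the text itself does not state.

```python
import math

N_A    = 6.022e23                          # Avogadro's number
tau_U  = 1.41e17 / math.log(2)             # mean lifetime of 238U (s)
tau_Th = 24.1 * 86400 / math.log(2)        # mean lifetime of 234Th (s), T1/2 = 24.1 d

activity = N_A / tau_U                     # decays per second from 1 mol of 238U
N_Th_eq  = activity * tau_Th               # secular equilibrium: equal decay rates
power    = activity * 6.7e6 * 1.602e-19    # ~6.7 MeV per chain start, in watts

print(f"activity of 1 mol 238U: {activity:.2e} /s")                 # ~3e6
print(f"equilibrium 234Th atoms per mole of 238U: {N_Th_eq:.2e}")   # ~9e12
print(f"heat from the first three decay steps: {power * 1e6:.1f} microwatts")  # ~3
```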
Radioactive dating The abundance of 238U and its decay to daughter isotopes underpin multiple uranium dating techniques, and 238U is one of the radioactive isotopes most commonly used in radiometric dating. The most common dating method is uranium-lead dating, which is used to date rocks older than about 1 million years and has provided ages of about 4.4 billion years for the oldest rocks on Earth. The relation between 238U and 234U gives an indication of the age of sediments and seawater that are between 100,000 years and 1,200,000 years in age. The 238U daughter product, 206Pb, is an integral part of lead–lead dating, which is most famous for the determination of the age of the Earth. The Voyager program spacecraft carry small amounts of initially pure 238U on the covers of their golden records to facilitate dating in the same manner. Health concerns Uranium emits alpha particles through the process of alpha decay. External exposure has limited effect. Significant internal exposure to tiny particles of uranium or its decay products, such as thorium-230, radium-226 and radon-222, can cause severe health effects, such as cancer of the bone or liver. Uranium is also a toxic chemical, meaning that ingestion of uranium can cause kidney damage through its chemical properties much sooner than its radioactive properties would cause cancers of the bone or liver.
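As a hedged sketch of the principle behind the uranium-lead dating mentioned above: assuming a closed system with no initial lead, the age follows from the measured 206Pb/238U atom ratio as t = ln(1 + Pb/U)/λ. The ratios below are invented illustration values, not measurements.

```python
import math

half_life_yr = 4.468e9
lam = math.log(2) / half_life_yr           # decay constant of 238U (1/yr)

def u_pb_age(pb_over_u):
    """Age from the 206Pb/238U atom ratio, assuming no initial lead and no loss."""
    return math.log(1.0 + pb_over_u) / lam

print(f"{u_pb_age(1.0):.2e} yr")    # ratio 1.0 corresponds to one half-life, ~4.47e9 yr
print(f"{u_pb_age(0.17):.2e} yr")   # ratio 0.17 corresponds to roughly 1e9 yr
```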
Physical sciences
Actinides
Chemistry
482050
https://en.wikipedia.org/wiki/Tetrachromacy
Tetrachromacy
Tetrachromacy (from Greek tetra, meaning "four" and chroma, meaning "color") is the condition of possessing four independent channels for conveying color information, or possessing four types of cone cell in the eye. Organisms with tetrachromacy are called tetrachromats. In tetrachromatic organisms, the sensory color space is four-dimensional, meaning that matching the sensory effect of arbitrarily chosen spectra of light within their visible spectrum requires mixtures of at least four primary colors. Tetrachromacy is demonstrated among several species of birds, fishes, and reptiles. The common ancestor of all vertebrates was a tetrachromat, but a common ancestor of mammals lost two of its four kinds of cone cell, evolving dichromacy, a loss ascribed to the conjectured nocturnal bottleneck. Some primates then later evolved a third cone. Physiology The normal explanation of tetrachromacy is that the organism's retina contains four types of higher-intensity light receptors (called cone cells in vertebrates as opposed to rod cells, which are lower-intensity light receptors) with different spectral sensitivity. This means that the organism may see wavelengths beyond those of a typical human's vision, and may be able to distinguish between colors that, to a normal human, appear to be identical. Species with tetrachromatic color vision may have an unknown physiological advantage over rival species. Humans Apes (including humans) and Old World monkeys normally have only three types of cone cell, and are therefore trichromats. However, human tetrachromacy is suspected to exist in a small percentage of the population. Trichromats have three types of cone cells, each type being sensitive to a corresponding portion of the spectrum as shown in the diagram. But at least one woman has been implied to be a tetrachromat. More precisely, she had an additional cone type L′, intermediate between M and L in its responsivity, and showed 3 dimensional (M, L′, and L components) color discrimination for wavelengths 546–670 nm (to which the fourth type, S, is insensitive). Tetrachromacy requires that there be four independent photoreceptor cell classes with different spectral sensitivity. However, there must also be the appropriate post-receptoral mechanism to compare the signals from the four classes of receptors. According to the opponent process theory, humans have three opponent channels, which give trichromacy. It is unclear whether having available a fourth opponent channel is sufficient for tetrachromacy. Mice, which normally have only two cone pigments (and therefore two opponent channels), have been engineered to express a third cone pigment, and appear to demonstrate increased chromatic discrimination, possibly indicating trichromacy, and suggesting they were able to create or re-enable a third opponent channel. This would support the theory that humans should be able to utilize a fourth opponent channel for tetrachromatic vision. However, the original publication's claims about plasticity in the optic nerve have also been disputed. Tetrachromacy in carriers of CVD It has been theorized that females who carry recessive opsin alleles that can cause color vision deficiency (CVD) could possess tetrachromacy. Female carriers of anomalous trichromacy (mild color blindness) possess heterozygous alleles of the genes that encode the L-opsin or M-opsin. These alleles often have a different spectral sensitivity, so if the carrier expresses both opsin alleles, they may exhibit tetrachromacy. 
In humans, two cone cell pigment genes are present on the X chromosome: the classical type 2 opsin genes OPN1MW and OPN1MW2. People with two X chromosomes could possess multiple cone cell pigments, perhaps born as full tetrachromats who have four simultaneously functioning kinds of cone cell, each type with a specific pattern of responsiveness to different wavelengths of light in the range of the visible spectrum. One study suggested that 15% of the world's women might have the type of fourth cone whose sensitivity peak is between the standard red and green cones, theoretically giving a significant increase in color differentiation. Another study suggests that as many as 50% of women and 8% of men may have four photopigments and correspondingly increased chromatic discrimination, compared to trichromats. In 2010, after twenty years' study of women with four types of cones (non-functional tetrachromats), neuroscientist Gabriele Jordan identified a woman (subject 'cDa29') who could detect a greater variety of colors than trichromats could, corresponding to a functional or "true" tetrachromat. Specifically, she was shown to be a trichromat in the range 546–670 nm, where people with normal vision are essentially dichromats due to the negligible response of S cones to those wavelengths. Thus, if the S cones of 'cDa29' provide an independent color perception dimension, as they normally do, that would confirm her being a tetrachromat when the whole spectrum is considered. Variation in cone pigment genes is widespread in most human populations, but the most prevalent and pronounced tetrachromacy would derive from female carriers of major red/green pigment anomalies, usually classed as forms of "color blindness" (protanomaly or deuteranomaly). The biological basis for this phenomenon is X-inactivation of heterozygotic alleles for retinal pigment genes, which is the same mechanism that gives the majority of female New World monkeys trichromatic vision. In humans, preliminary visual processing occurs in the neurons of the retina. It is not known how these nerves would respond to a new color channel: whether they would handle it separately, or just combine it with one of the existing channels. Similarly, visual information leaves the eye by way of the optic nerve, and a variety of final image processing takes place in the brain; it is not known whether the optic nerve or the areas of the brain have any capacity to respond effectively if presented with a stimulus from a new color signal. Tetrachromacy may also enhance vision in dim lighting, or in looking at a screen. Conditional tetrachromacy Despite being trichromats, humans can experience slight tetrachromacy at low light intensities, using their mesopic vision. In mesopic vision, both cone cells and rod cells are active. While rods typically do not contribute to color vision, in these specific light conditions, they may give a small region of tetrachromacy in the color space. Human rod cell sensitivity is greatest at a wavelength of about 500 nm (bluish-green), which is significantly different from the peak spectral sensitivities of the cones (typically 420, 530, and 560 nm). Blocked tetrachromacy Although many birds are tetrachromats with a fourth color in the ultraviolet, humans cannot see ultraviolet light directly because the lens of the eye blocks most light in the wavelength range of 300–400 nm; shorter wavelengths are blocked by the cornea.
The photoreceptor cells of the retina are sensitive to near ultraviolet light, and people lacking a lens (a condition known as aphakia) see near ultraviolet light (down to 300 nm) as whitish blue, or for some wavelengths, whitish violet, probably because all three types of cones are roughly equally sensitive to ultraviolet light (with blue cone cells slightly more sensitive). While an extended visible range does not denote tetrachromacy, some believe that visual pigments are available with sensitivity in near-UV wavelengths that would enable tetrachromacy in the case of aphakia. However, there is no peer-reviewed evidence supporting this claim. Other animals Fish Fish, specifically teleosts, are typically tetrachromats. Exceptions include: Sharks and rays – range from monochromacy to trichromacy Deep-sea fish – often rod monochromats Cichlid – arguably pentachromacy or higher Birds Some species of birds, such as the zebra finch and the Columbidae, use the ultraviolet wavelength 300–400 nm specific to tetrachromatic color vision as a tool during mate selection and foraging. When selecting for mates, ultraviolet plumage and skin coloration show a high level of selection. A typical bird eye responds to wavelengths from about 300–700 nm. In terms of frequency, this corresponds to a band in the vicinity of 430–1000 THz. Most birds have retinas with four spectral types of cone cell that are believed to mediate tetrachromatic color vision. Bird color vision is further improved by filtering by pigmented oil droplets in the photoreceptors. The oil droplets filter incident light before it reaches the visual pigment in the outer segments of the photoreceptors. The four cone types, and the specialization of pigmented oil droplets, give birds better color vision than that of humans. However, more recent research has suggested that tetrachromacy in birds only provides birds with a larger visual spectrum than that in humans (humans cannot see ultraviolet light, 300–400 nm), while the spectral resolution (the "sensitivity" to nuances) is similar. Some birds such as corvids and flycatchers, as well as most diurnal raptors, have little ability to see UV light, with the fourth cone type instead peaking in the violet range. It is believed that UV vision in raptors is selected against because short-wavelength UVA light contributes highly to chromatic aberration, reducing visual acuity which raptorial birds rely on for hunting. Pentachromacy and greater The dimensionality of color vision has no upper bound, but vertebrates with color vision greater than tetrachromacy are rare. The next level is pentachromacy, which is five-dimensional color vision requiring at least 5 different classes of photoreceptor as well as 5 independent channels of color information through the primary visual system. A female that is heterozygous for both the LWS and MWS opsins (and therefore a carrier for both protanomaly and deuteranomaly) would express five opsins of different spectral sensitivity. However, for her to be a true (strong) pentachromat, these opsins would need to be segregated into different photoreceptor cells and she would need to have the appropriate post-receptoral mechanisms to handle 5 opponent process channels, which is contentious. Some birds (notably pigeons) have five or more kinds of color receptors in their retinae, and are therefore believed to be pentachromats, though psychophysical evidence of functional pentachromacy is lacking. 
Research also indicates that some lampreys, members of the Petromyzontiformes, may be pentachromats. Invertebrates can have large numbers of different opsin classes, including 15 opsins in bluebottle butterflies or 33 in mantis shrimp. However, it has not been shown that color vision in these invertebrates is of a dimension commensurate with the number of opsins.
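The dimensionality argument running through this article (three versus four independent cone signals) can be illustrated with a toy metamer computation: two spectra constructed to be indistinguishable to three cone classes yet distinguishable to a fourth. The Gaussian sensitivity curves, peak wavelengths, and widths below are made-up stand-ins, not measured cone data.

```python
import numpy as np

wl = np.linspace(400, 700, 301)                     # wavelength grid (nm)

def cone(peak, width):
    """Toy Gaussian spectral sensitivity curve."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

S3 = np.stack([cone(p, 40) for p in (420, 530, 560)])   # stand-ins for S, M, L cones
c4 = cone(545, 25)                                       # hypothetical extra cone

# Build a spectral perturbation invisible to the three cones: the component of
# the fourth cone's sensitivity orthogonal to the span of the other three.
coeff, *_ = np.linalg.lstsq(S3.T, c4, rcond=None)
delta = c4 - S3.T @ coeff                # S3 @ delta ~ 0, but c4 @ delta > 0

spec_a = np.ones_like(wl)                # flat reference spectrum
spec_b = spec_a + delta                  # metamer of spec_a for the three cones

print("3-cone response difference :", np.abs(S3 @ (spec_b - spec_a)).max())  # ~0
print("4th-cone response difference:", c4 @ (spec_b - spec_a))               # clearly nonzero
```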
Biology and health sciences
Visual system
Biology
482147
https://en.wikipedia.org/wiki/Shadoof
Shadoof
A shadoof or shaduf, well pole, well sweep, sweep, swape, or simply a lift is a tool that is used to lift water from a well or another water source onto land or into another waterway or basin. It is highly efficient, and has been known since 3000 BCE. The mechanism of a shadoof comprises a long counterbalanced pole on a pivot, with a bucket attached to the end of it. It is generally used in a crop irrigation system using basins, dikes, ditches, walls, canals, and similar waterways. History One theory states that the shadoof was invented in prehistoric times in Mesopotamia as early as the time of Sargon of Akkad (around 24th and 23rd centuries BCE). The earliest evidence of this technology is a cylindrical seal with a depiction of a shadoof dating back to about 2200 BCE. Then, it is believed that the Minoans adopted this technology; evidence suggests the use of shadoofs as early as around 2100–1600 BCE. The shadoof appeared in Upper Egypt sometime after 2000 BC, most likely during the 18th Dynasty. Around the same time, the shadoof reached China. Some historians believe the Egyptians were the original inventors of the shadoof. The theory states that the shadoof originated along the Nile, using tomb drawings illustrating shadoofs at Thebes dating from 1250 BCE as evidence. An alternative origin theory states that shadoof originated from India around the same time as in Mesopotamia. This theory owes to the fact that the shadoof was well spread in India; however, there is little to no other evidence that makes this theory any stronger. It is still used in many areas of Africa and Asia and is very common in rural areas of India and Pakistan, such as the Bhojpuri belt of the Ganges plain. In Europe, they remain common in Germany and Hungary's Great Plain, where they are considered a symbol of the region. They are also well widespread throughout Eastern Europe in countries like Ukraine, Belarus, and the Baltic states. Design, construction, and efficiency The shadoof is easy to construct and is highly efficient in use. It consists of an upright frame on which is suspended a long pole or branch, at a distance of about one-fifth of its length from one end. At the long end of this pole hangs a bucket, skin bag, or bitumen-coated reed basket. The bucket can be made in many different styles, sometimes having an uneven base or a part at the top of the skin that can be untied. This allows the water to be immediately distributed rather than manually emptied. The short end carries a weight made of clay, stone, or a similar material, which serves as the counterpoise of a lever. The bucket can be lowered by the operator using their own weight to push it down; the counterweight then raises the full bucket without effort. The implementations vary from region to region. The frame can consist of a single pole or a pair, and the buckets can be attached in multiple different ways, from being tied to a rope to being attached to a thinner stick. With an almost effortless swinging and lifting motion, the waterproof vessel can be used to scoop up and carry water from a body of water (typically, a river or pond) onto land or to another body of water. At the end of each movement, the water is emptied out into runnels that convey the water along irrigation ditches in the required direction. The device is capable of lowering the force levels required of operators to the extent that the performance tends to be limited by the energy processing capacity of the operator and not necessarily muscle fatigue. 
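The useful (hydraulic) work rate of such a lift can be estimated from the flow rate and lift height as P = ρ g Q h. The sketch below uses invented flow figures purely for illustration; the field measurements quoted in the next paragraph can be compared against the same formula.

```python
def lift_power_watts(litres_per_minute, lift_height_m):
    """Hydraulic power P = rho * g * Q * h for lifting water."""
    rho, g = 1000.0, 9.81                    # water density (kg/m^3), gravity (m/s^2)
    q = litres_per_minute / 1000.0 / 60.0    # volumetric flow (m^3/s)
    return rho * g * q * lift_height_m

print(f"{lift_power_watts(100, 2.0):.1f} W")   # e.g. 100 L/min lifted 2 m -> ~33 W
print(f"{lift_power_watts(40, 6.0):.1f} W")    # e.g. 40 L/min lifted 6 m  -> ~39 W
```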
The shadoof has a lifting range of 1 to 6 meters. A study of efficiency in various sites in Chad has shown that one man can lift 39 to 130 liters per minute over heights of 1.8 to 6.2 m, resulting in water-lifting power levels of 26.7 to 60.1 W. Its efficiency has been calculated at 60%. A study done in Nigeria also indirectly assessed energy usage through heart rate, serving as the physiological metric. Through this approach, it was discovered that making suitable adjustments to the shadoof decreased energy consumption from approximately 109 to 71 watts (equivalent to 6.56 to 4.27 kilojoules per minute). This reduction enables a farmer to engage in prolonged work without necessitating frequent rest breaks. Social effects Across numerous cultures, shadoofs have symbolized collective effort. In ancient Egypt and Mesopotamia, for instance, the multi-tiered shadoof systems allowed the movement of water to higher levels through teamwork. Together with other irrigation technologies, shadoofs not only helped establish reliable methods of agriculture for growing civilizations but also influenced cultural elements. The accessibility and utilization of shadoofs have been linked to class. During the Egyptian Middle Empire and the New Kingdom, pleasure gardens featuring shadoof irrigation became a hallmark of luxury residences and consequently a status symbol. Although not directly, shadoofs contributed to creating a class system, a barrier for some. At the same time, shadoofs have remained essential for those with limited resources to support their livelihoods on large-scale farms around the Nile. Even in the present day, many communities worldwide lack access to more sophisticated water technologies, making shadoofs an indicator of socio-economic standing and a certain measure of societal development. The technology's reliability, despite its antiquity, often gets overlooked. The geographic spread of shadoofs is impressive. In regions where irrigation is imperative, such as Egypt, India, and parts of sub-Saharan Africa, shadoofs have played a crucial role in enabling agriculture to thrive in water-scarce areas. Shadoofs have empowered marginalized communities by providing them with the means to secure their sustenance, breaking the barrier of food insecurity even in the modern age. Gender roles have also undergone a transformation, with women frequently assuming shadoof operation. The ease of use of the shadoof empowered women to play a more active role in farming. It is fair to acknowledge that shadoofs contributed to normalizing women's increased independence and participation in less physically demanding, and therefore more “socially acceptable”, aspects of food production. The ease of use of the shadoof is perhaps its most important feature. Studies have shown that it is impressively efficient, given the simplicity of its design. Still, it is essential to recognize that shadoofs, while easing the physical demands of water retrieval, require manual labor, posing a barrier for individuals with certain physical disabilities. Names Shadoof or shaduf comes from the Arabic word , šādūf. It is also called a lift, well pole, well sweep, or simply a sweep in the US. A less common English translation is swape. Picotah (or picota) is a Portuguese loan word. It is also called a jiégāo (桔槹) in Chinese. The Tamil name is thulla (துலா), while the Telugu name is ethaamu (ఏతాము) or ethamu (ఏతము). 
It was also known by the Ancient Greek name kēlōn or kēlōneion; this term (קילון) is also borrowed in Mishnaic Hebrew. In Ukrainian, it is called krynychnyi zhuravel (криничний журавель, "well crane") for its shape; it is also known as zvid (звід). In Hungarian, it is known as gémeskút (literally, "heron wells"). In Croatian, it is known as đeram (from Turkish, germe). In heraldry The use of shadoofs in certain areas influenced heraldry, and heraldic elements of various subdivisions feature the device.
Technology
Agricultural tools
null
482302
https://en.wikipedia.org/wiki/Muskox
Muskox
The muskox (Ovibos moschatus) is a hoofed mammal of the family Bovidae. Native to the Arctic, it is noted for its thick coat and for the strong odor emitted by males during the seasonal rut, from which its name derives. This musky odor has the effect of attracting females during mating season. Its Inuktitut name "umingmak" translates to "the bearded one". Its Woods Cree names "mâthi-môs" and "mâthi-mostos" translate to "ugly moose" and "ugly bison", respectively. In historic times, muskoxen primarily lived in Greenland and the Canadian Arctic of the Northwest Territories and Nunavut. They were formerly present in Eurasia, with their youngest natural records in the region dating to around 2,700 years ago, with reintroduced populations in the American state of Alaska, the Canadian territory of Yukon, and Siberia, and an introduced population in Norway, part of which emigrated to Sweden, where a small population now lives. Evolution Extant relatives The muskox is in the subtribe Ovibovina (or tribe Ovibovini) in the tribe Caprini (or subfamily Caprinae) of the subfamily Antilopinae in the family Bovidae. It is therefore more closely related to sheep and goats than to oxen; it is placed in its own genus, Ovibos (Latin: "sheep-ox"). It is one of the two largest extant members of the caprines, along with the similarly sized Takin Budorcas. While the takin and muskox were once considered possibly closely related, the takin lacks common ovibovine features, such as the muskox's specialized horn morphology, and genetic analysis shows that their lineages actually separated early in caprine evolution. Instead, the muskox's closest living relatives appear to be the gorals of the genus Naemorhedus, nowadays common in many countries of central and east Asia. The vague similarity between takin and muskox is therefore an example of convergent evolution. Fossil history and extinct relatives The modern muskox is the last member of a line of ovibovines that first evolved in temperate regions of Asia and adapted to a cold tundra environment late in its evolutionary history. Muskox ancestors with sheep-like high-positioned horns (horn cores being mostly over the plane of the frontal bones, rather than below them as in modern muskoxen) first left the temperate forests for the developing grasslands of Central Asia during the Pliocene, expanding into Siberia and the rest of northern Eurasia. Later migration waves of Asian ungulates that included high-horned muskoxen reached Europe and North America during the first half of the Pleistocene. The first well known muskox, the "shrub-ox" Euceratherium, crossed to North America over an early version of the Bering Land Bridge two million years ago and prospered in the American southwest and Mexico. Euceratherium was larger yet more lightly built than modern muskoxen, resembling a giant sheep with massive horns, and preferred hilly grasslands. A genus with intermediate horns, Soergelia, inhabited Eurasia in the early Pleistocene, from Spain to Siberia, and crossed to North America during the Irvingtonian (1.8 million years to 240,000 years ago), soon after Euceratherium. Unlike Euceratherium, which survived in America until the Pleistocene-Holocene extinction event, Soergelia was a lowland dweller which disappeared fairly early, displaced by more advanced ungulates, such as the "giant muskox" Praeovibos (literally "before Ovibos"). 
The low-horned Praeovibos was present in Europe and the Mediterranean 1.5 million years ago, colonized Alaska and the Yukon one million years ago and disappeared half a million years ago. Praeovibos was a highly adaptable animal apparently associated with cold tundra (reindeer) and temperate woodland (red deer) faunas alike. During the Mindel glaciation 500,000 years ago, Praeovibos was present in the Kolyma river area in eastern Siberia in association with many Ice Age megafauna that would later coexist with Ovibos, in the Kolyma itself and elsewhere, including wild horses, reindeer, woolly mammoth and stag-moose. It is debated, however, if Praeovibos was directly ancestral to Ovibos, or both genera descended from a common ancestor, since the two occurred together during the middle Pleistocene. Defenders of ancestry from Praeovibos have proposed that Praeovibos evolved into Ovibos in one region during a period of isolation and expanded later, replacing the remaining populations of Praeovibos. Two more Praeovibos-like genera were named in America in the 19th century, Bootherium and Symbos, which are now identified as the male and female forms of a single, sexually dimorphic species, the "woodland muskox", Bootherium bombifrons. Bootherium inhabited open woodland areas of North America during the late Pleistocene, from Alaska to Texas and maybe even Mexico, but was most common in the Southern United States, while Ovibos replaced it in the tundra-steppe to the north, immediately south of the Laurentian ice sheet. Modern Ovibos appeared in Germany almost one million years ago and was common in the region through the Pleistocene. By the Mindel, muskoxen had also reached the British Isles. Both Germany and Britain were just south of the Scandinavian ice sheet and covered in tundra during cold periods, but Pleistocene muskoxen are also rarely recorded in more benign and wooded areas to the south like France and Green Spain, where they coexisted with temperate ungulates like red deer and aurochs. Likewise, the muskox is known to have survived in Britain during warm interglacial periods. Today's muskoxen are descended from others believed to have migrated from Siberia to North America between 200,000 and 90,000 years ago, having previously occupied Alaska (at the time united to Siberia and isolated periodically from the rest of North America by the union of the Laurentide and Cordilleran Ice Sheets during colder periods) between 250,000 and 150,000 years ago. After migrating south during one of the warmer periods of the Illinoian glaciation, non-Alaskan American muskoxen would be isolated from the rest in the colder periods. The muskox was already present in its current stronghold of Banks Island 34,000 years ago, but the existence of other ice-free areas in the Canadian Arctic Archipelago at the time is disputed. Along with the bison and the pronghorn, the muskox was one of a few species of Pleistocene megafauna in North America to survive the Pleistocene/Holocene extinction event and live to the present day. The muskox is thought to have been able to survive the last glacial period by finding ice-free areas (refugia) away from prehistoric peoples. Fossil DNA evidence suggests that muskoxen were not only more geographically widespread during the Pleistocene, but also more genetically diverse. During that time, other populations of muskoxen lived across the Arctic, from the Ural Mountains to Greenland. By contrast, the current genetic makeup of the species is more homogenous. 
Climate fluctuation may have affected this shift in genetic diversity: research indicates colder periods in Earth's history are correlated with more diversity, and warmer periods with more homogeneity. Muskox populations survived into the Holocene in Siberia, with their youngest records in the region being from the Taymyr Peninsula, dating to around 2,700 years ago (~700 BC). Physical characteristics Both male and female muskoxen have long, curved horns. Muskoxen stand high at withers, with females measuring in length, and the larger males . The small tail, often concealed under a layer of fur, measures only long. Adults, on average, weigh , but can range from . The thick coat and large head suggest a larger animal than the muskox truly is; the bison, to which the muskox is often compared, can weigh up to twice as much. However, heavy zoo-kept specimens have weighed up to . Their coat, a mix of black, gray and brown, includes long guard hairs that almost reach the ground. Rare "white muskoxen" have been spotted in the Queen Maud Gulf Bird Sanctuary. Muskoxen are occasionally semi-domesticated for wool, and rarely for meat and milk. The U.S. state of Alaska has several muskoxen farms specifically aimed at wool harvesting. The wool, called qiviut, is highly prized for its softness, length, and insulation value. Prices for yarn range between . A muskox can reach speeds of up to . Their life expectancy is between 12 and 20 years. Range Prehistory During the Pleistocene period, muskoxen were much more widespread. Fossil evidence shows that they lived across the Siberian and North American Arctic, from the Urals to Greenland. The ancestors of today's muskoxen came across the Bering Land Bridge to North America between 200,000 and 90,000 years ago. During the Wisconsinan, modern muskox thrived in the tundra south of the Laurentide Ice Sheet, in what is now the Midwest, the Appalachians and Virginia, while distant relatives Bootherium and Euceratherium lived in the forests of the Southern United States and the western shrubland, respectively. Though they were always less common than other Ice Age megafauna, muskox abundance peaked during the Würm II glaciation 20,000 years ago and declined afterwards, especially during the Pleistocene/Holocene extinction event, where its range was greatly reduced and only the populations in North America survived. The last known muskox population in Europe died out in Sweden 9,000 years ago. In Asia, muskox persisted until just 615-555 BCE in Tumat, Sakha Republic. Following the disappearance of the Laurentide Ice Sheet, the muskox gradually moved north across the Canadian Arctic Archipelago, arriving in Greenland from Ellesmere Island at about 350 AD, during the late Holocene. Their arrival in northwestern Greenland probably occurred within a few hundred years of the arrival of the Dorset and Thule cultures in the present-day Qaanaaq area. Human predation around Qaanaaq may have restricted muskoxen from moving down the west coast, and instead kept them confined to the northeastern fringes of the island. Recent native range in North America In modern times, muskoxen were restricted to the Arctic areas of Northern Canada, Greenland, and Alaska. The Alaskan population was wiped out in the late 19th or early 20th century. Their depletion has been attributed to excessive hunting, but an adverse change in climate may have contributed. However, muskoxen have since been reintroduced to Alaska. 
The United States Fish and Wildlife Service introduced the muskox onto Nunivak Island in 1935 to support subsistence living. Other reintroduced populations are in Arctic National Wildlife Refuge, Bering Land Bridge National Preserve, Yukon's Ivvavik National Park, a wildlife conservation center in Anchorage, Aulavik National Park in Northwest Territories, Kanuti National Wildlife Refuge, Gates of the Arctic National Park, and Whitehorse, Yukon's wildlife preserve. There have been at least two domestication endeavours. In the 1950s, an American researcher and adventurer was able to capture muskox calves in Northern Canada for relocation to a property he prepared in Vermont. One condition imposed by the Canadian government was that he was not allowed to kill adults defending their young. When nets and ropes proved useless, he and his crew herded family groups into open water, where calves were successfully separated from the adults. Once airfreighted to Montreal and trucked to Vermont, the young animals habituated to the temperate conditions. Although the calves thrived and grew to adulthood, parasite and disease resistance problems impaired the overall success of the effort. The surviving herd was eventually moved to a farm in Palmer, Alaska, where it has been successful since the mid-1950s. Reintroductions in Eurasia In 1913, workers building a railway over Dovrefjell found two fossil muskox vertebrae. This led to the idea of introducing muskoxen to Norway from Greenland. The first release in the world was made on Gurskøy outside Ålesund in 1925–26. They were muskoxen caught by Norwegian seal-hunting boats in Greenland. The animals colonized the island, but eventually died out there. An attempt to introduce the muskox to Svalbard also failed. Seventeen animals were released in 1929 by Adventfjorden on West Spitsbergen. In 1940, the herd numbered 50, but in the 1970s, the whole herd disappeared. In September 1932, polar researcher Adolf Hoel conducted another experiment, importing 10 muskoxen to Dovrefjell. This herd survived until World War II, when they were hunted and exterminated. In 1947 and later, new animals were released. A small group of muskoxen from Dovrefjell migrated across the national border to Sweden in 1971 and established themselves in Härjedalen, whereby a Swedish herd was established. The Norwegian population on Dovrefjell is managed over an area of and in the summer of 2012 consisted of approximately 300 animals. Since 1999, the population has mostly been increasing, but it suffered a measles outbreak in the summer of 2004 that killed 29. Some animals are also occasionally killed as a result of train collisions on the Dovre Railway. The population is divided into flocks in the area, area and Hjerkinn. In the summer they move down towards Driva, where there are lush grass pastures. Although the muskox belongs to the dry Arctic grassland, it seems to do well on Dovrefjell. However, the pastures are marginal, with little grass available in winter (the muskox eats only plants, not lichen as reindeer do), and over time, inbreeding depression is expected in such a small population which originated from only a few introduced animals. In addition to the population on Dovrefjell, the University of Tromsø had some animals on outside Tromsø until 2018. Muskoxen were introduced to Svalbard in 1925–26 and 1929, but this population died out in the 1970s. They were also introduced in Iceland around 1930 but did not survive. 
In Russia, animals imported from Banks and Nunivak were released on the Taymyr Peninsula in 1974 and 1975, and some from Nunivak were released on Wrangel Island in 1975. Both locations are north of the Arctic Circle. By 2019 the population on Wrangel Island was about 1,100, and on the Taymyr Peninsula about 11,000–14,000. A few muskox herds migrated from the Taymyr Peninsula far to the south to the Putorana Plateau. Once established, these populations have been, in turn, used as sources for further reintroductions in Siberia between 1996 and 2010. One of the last of these actions was the release of six animals within the Pleistocene Park project area on the Kolyma River in 2010, where a team of Russian scientists led by Sergey Zimov aims to prove that muskoxen, along with other Pleistocene megafauna that survived into the early Holocene in northern Siberia, did not disappear from the region due to climate change, but because of human hunting. Introductions in eastern Canada Ancient muskox remains have never been found in eastern Canada, although the ecological conditions in the northern Labrador Peninsula are suitable for them. In 1967, 14 animals were captured near Eureka on Ellesmere Island by the Institute for Northern Agricultural Research (INAR) and brought to a farm at Old Fort Chimo (Kuujjuaq), northern Quebec, for domestication to provide a local cottage industry based on qiviut, a fine natural fiber. The animals thrived and the qiviut industry showed early success with the training of Inuit knitters and marketing, but it soon became clear that the Quebec government had never intended that the muskoxen be domesticated, but had used INAR to capture muskoxen to provide a wild population for hunting. Government officials demanded that INAR leave Quebec and the farm be closed. Subsequently, 54 animals from the farm were released in three places in northern Quebec between 1973 and 1983, and the remainder were ceded to local zoos. Between 1983 and 1986, the released animals increased from 148 to 290, at a rate of 25% per year, and by 2003, an estimated 1,400 muskoxen were in Quebec. Additionally, 112 adults and 25 calves were counted on nearby Diana Island in 2005, having arrived there on their own from the mainland. Vagrant adults are sometimes spotted in Labrador, though no herds have been observed in the region. Ecology During the summer, muskoxen live in wet areas, such as river valleys, moving to higher elevations in the winter to avoid deep snow. Muskoxen will eat grasses, arctic willows, woody plants, lichens (though, as noted above, the Dovrefjell animals are reported not to eat lichen), and mosses. When food is abundant, they prefer succulent and nutritious grasses in an area. Willows are the most commonly eaten plants in the winter. Muskoxen require a high threshold of fat reserves in order to conceive, which reflects their conservative breeding strategy. Winter ranges typically have shallow snow to reduce the energy costs of digging through snow to reach forage. The primary predators of muskoxen are arctic wolves, which may account for up to half of all mortality for the species. Other occasional predators, likely mainly predators of calves or infirm adults, can include grizzly bears, polar bears, and wolverines. Physiology Muskoxen are heterothermic mammals, meaning they have the ability to shut off thermal regulation in some parts of their body, like their lower limbs. Maintaining the lower limbs at a cooler temperature than the rest of their body helps reduce the loss of body heat from their extremities. 
Muskox display the unique characteristic of having hemoglobin that is three times less temperature sensitive than human hemoglobin. This temperature insensitivity allows the muskox's hemoglobin to have a heightened oxygen affinity in an extremely cold environment and continue to diffuse high amounts of oxygen into its cold tissues. Social behavior and reproduction Muskoxen live in herds of 12–24 in the winter and 8–20 in the summer when dominant bulls expel other males from the herd. They do not hold territories, but they do mark their trails with preorbital glands. Male and female muskoxen have separate age-based hierarchies, with mature oxen being dominant over juveniles. Dominant oxen tend to get access to the best resources and will displace subordinates from patches of grass during the winter. Muskox bulls assert their dominance in many different ways. One is a "rush and butt", in which a dominant bull rushes a subordinate from the side with its horns, and will warn the subordinate so it can have a chance to get away. Bulls will also roar, swing their heads, and paw the ground. Dominant bulls sometimes treat subordinate bulls like cows. A dominant bull will tap a subordinate with its foreleg, something they do to cows during mating. Dominant bulls will also mock copulate subordinates and sniff their genitals. A subordinate bull can challenge his status by charging a dominant bull. The mating (or "rutting") season of the muskoxen begins in late June or early July. During this time, dominant bulls will fight others out of the herds and establish harems of usually six or seven cows and their offspring. Fighting bulls will first rub their preorbital glands against their legs while bellowing loudly, and then display their horns. The bulls then back up about , lower their heads, and charge into each other, and will keep doing so until one bull gives up. Subordinate and elderly bulls will leave the herds to form bachelor groups or become solitary. However, when danger is present, the outside bulls can return to the herd for protection. Dominant bulls will prevent cows from leaving their harems. During mating, a bull will tap an estrous cow with his foreleg to calm her down and make her more receptive to his advances. The herds reassemble when summer ends. While the bulls are more aggressive during the rutting season and lead their groups, the females take charge during gestation. Pregnant females are aggressive and decide what distance the herd travels in a day and where they will bed for the night. The herds move more often when cows are lactating, to let them get enough food to nurse their offspring. Cows have an eight- to nine-month gestation period, with calving occurring from April to June. Cows do not calve every year. When winters are severe, cows will not go into estrus and thus not calve the next year. When calving, cows stay in the herd for protection. Muskox are precocial, and calves can keep up with the herd within just a few hours after birth. The calves are welcomed into the herd and nursed for the first two months. After that, a calf then begins eating vegetation and nurses only occasionally. Cows communicate with their calves through braying. The calf's bond with its mother weakens after two years. Muskoxen have a distinctive defensive behavior: when the herd is threatened, the adults will face outward to form a stationary ring or semicircle around the calves. 
The bulls are usually the front line for defense against predators, with the cows and juveniles gathering close to them. Bulls determine the defensive formation during the rut, while the cows decide the rest of the year. Components of glandular secretions The preorbital gland secretion of muskoxen has a "light, sweetish, ethereal" odor. Analysis of preorbital gland secretion extract showed the presence of cholesterol (which is nonvolatile), benzaldehyde, a series of straight-chain saturated γ-lactones ranging from C8H14O2 to C12H22O2 (with C10H18O2 being most abundant), and probably the monounsaturated γ-lactone C12H20O2. The saturated γ-lactone series has an odor similar to that of the secretion. The odor of dominant rutting males is described as "strong" and "rank". It derives from the preputial gland and is distributed over the fur of the abdomen via urine. Analysis of extract of washes of the prepuce revealed the presence of benzoic acid and p-cresol, along with a series of straight-chain saturated hydrocarbons from C22H46 to C32H66 (with C24H50 being most abundant). Danger to humans Muskoxen are not normally aggressive toward humans. Fatal attacks are extremely rare, but humans who have come close and behaved aggressively have occasionally been attacked. On 22 July 1964, a 73-year-old man was killed in a muskox attack in Norway. The animal was later killed by local authorities. On 13 December 2022, a court services officer with the Alaska State Troopers was killed by a muskox near Nome, Alaska. The officer was trying to scare away a group of muskoxen near a dog kennel at his home when one of the animals attacked him. Conservation status Historically, this species declined because of overhunting, but populations have recovered following enforcement of hunting regulations. Management in the late 1900s consisted mostly of conservative hunting quotas intended to foster recovery and recolonization after the historic declines. The current world population of muskoxen is estimated at between 80,000 and 125,000, with an estimated 47,000 living on Banks Island. In Greenland, there are no major threats. However, populations are often small in size and scattered; this makes them vulnerable to local fluctuations in climate. Most populations are within national parks, where they are protected from hunting. Muskoxen occur in four of Greenland's protected areas, with an indigenous population in Northeast Greenland National Park and introduced populations elsewhere, including the Kangerlussuaq area. In these areas, muskoxen receive full protection. Muskoxen are being domesticated for the production of qiviut.
Biology and health sciences
Artiodactyla
null
482371
https://en.wikipedia.org/wiki/Glass%20cockpit
Glass cockpit
A glass cockpit is an aircraft cockpit that features an array of electronic (digital) flight instrument displays, typically large LCD screens, rather than traditional analog dials and gauges. While a traditional cockpit relies on numerous mechanical gauges (nicknamed "steam gauges") to display information, a glass cockpit uses several multi-function displays and a primary flight display driven by flight management systems, that can be adjusted to show flight information as needed. This simplifies aircraft operation and navigation and allows pilots to focus only on the most pertinent information. They are also popular with airline companies as they usually eliminate the need for a flight engineer, saving costs. In recent years the technology has also become widely available in small aircraft. As aircraft displays have modernized, the sensors that feed them have modernized as well. Traditional gyroscopic flight instruments have been replaced by electronic attitude and heading reference systems (AHRS) and air data computers (ADCs), improving reliability and reducing cost and maintenance. GPS receivers are usually integrated into glass cockpits. Early glass cockpits, found in the McDonnell Douglas MD-80, Boeing 737 Classic, ATR 42, ATR 72 and in the Airbus A300-600 and A310, used electronic flight instrument systems (EFIS) to display attitude and navigational information only, with traditional mechanical gauges retained for airspeed, altitude, vertical speed, and engine performance. The Boeing 757 and 767-200/-300 introduced an electronic engine-indicating and crew-alerting system (EICAS) for monitoring engine performance while retaining mechanical gauges for airspeed, altitude and vertical speed. Later glass cockpits, found in the Boeing 737NG, 747-400, 767-400, 777, Airbus A320, later Airbuses, Ilyushin Il-96 and Tupolev Tu-204 have completely replaced the mechanical gauges and warning lights in previous generations of aircraft. While glass cockpit-equipped aircraft throughout the late 20th century still retained analog altimeters, attitude, and airspeed indicators as standby instruments in case the EFIS displays failed, more modern aircraft have increasingly been using digital standby instruments as well, such as the integrated standby instrument system. History Glass cockpits originated in military aircraft in the late 1960s and early 1970s; an early example is the Mark II avionics of the F-111D (first ordered in 1967, delivered from 1970 to 1973), which featured a multi-function display. Prior to the 1970s, air transport operations were not considered sufficiently demanding to require advanced equipment like electronic flight displays. Also, computer technology was not at a level where sufficiently light and powerful electronics were available. The increasing complexity of transport aircraft, the advent of digital systems and the growing air traffic congestion around airports began to change that. The Boeing 2707 was one of the earliest commercial aircraft designed with a glass cockpit. Most cockpit instruments were still analog, but cathode-ray tube (CRT) displays were to be used for the attitude indicator and horizontal situation indicator (HSI). However, the 2707 was cancelled in 1971 after insurmountable technical difficulties and ultimately the end of project funding by the US government. 
The average transport aircraft in the mid-1970s had more than one hundred cockpit instruments and controls, and the primary flight instruments were already crowded with indicators, crossbars, and symbols; the growing number of cockpit elements was competing for cockpit space and pilot attention. As a result, NASA conducted research on displays that could process the raw aircraft system and flight data into an integrated, easily understood picture of the flight situation, culminating in a series of flights demonstrating a full glass cockpit system. The success of the NASA-led glass cockpit work is reflected in the total acceptance of electronic flight displays. The safety and efficiency of flights have been increased with improved pilot understanding of the aircraft's situation relative to its environment (or "situational awareness"). By the end of the 1990s, liquid-crystal display (LCD) panels were increasingly favored among aircraft manufacturers because of their efficiency, reliability and legibility. Earlier LCD panels had suffered from poor legibility at some viewing angles and poor response times, making them unsuitable for aviation. Modern aircraft such as the Boeing 737 Next Generation, 777, 717, 747-400ER, 747-8F, 767-400ER, 747-8, and 787, Airbus A320 family (later versions), A330 (later versions), A340-500/600, A340-300 (later versions), A380 and A350 are fitted with glass cockpits consisting of LCD units. The glass cockpit has become standard equipment in airliners, business jets, and military aircraft. It was fitted into NASA's Space Shuttle orbiters Atlantis, Columbia, Discovery, and Endeavour, and the Russian Soyuz TMA spacecraft, first launched in 2002. By the end of the century, glass cockpits began appearing in general aviation aircraft as well. In 2003, Cirrus Design's SR20 and SR22 became the first light aircraft equipped with glass cockpits, which Cirrus then made standard on all of its aircraft. By 2005, even basic trainers like the Piper Cherokee and Cessna 172 were shipping with glass cockpits as options (which nearly all customers chose), as well as many modern utility aircraft such as the Diamond DA42. The Lockheed Martin F-35 Lightning II features a "panoramic cockpit display" touchscreen that replaces most of the switches and toggles found in an aircraft cockpit. The civilian Cirrus Vision SF50 features a similar touchscreen system, which Cirrus calls the "Perspective Touch" glass cockpit. Uses Commercial aviation Unlike the previous era of glass cockpits—where designers merely copied the look and feel of conventional electromechanical instruments onto cathode-ray tubes—the new displays represent a true departure. They look and behave very similarly to other computers, with windows and data that can be manipulated with point-and-click devices. They also add terrain, approach charts, weather, vertical displays, and 3D navigation images. The improved concepts enable aircraft makers to customize cockpits to a greater degree than previously. All of the manufacturers involved have chosen to do so in one way or another—such as using a trackball, thumb pad or joystick as a pilot-input device in a computer-style environment. Many of the modifications offered by the aircraft manufacturers improve situational awareness and customize the human-machine interface to increase safety. Modern glass cockpits might include synthetic vision systems (SVS) or enhanced flight vision systems (EFVS). 
Synthetic vision systems display a realistic 3D depiction of the outside world (similar to a flight simulator), based on a database of terrain and geophysical features in conjunction with the attitude and position information gathered from the aircraft navigational systems. Enhanced flight vision systems add real-time information from external sensors, such as an infrared camera. All new airliners, such as the Airbus A380 and Boeing 787, and private jets, such as the Bombardier Global Express and Learjet, use glass cockpits. General aviation Many modern general aviation aircraft are available with glass cockpits. Systems such as the Garmin G1000 are now available on many new GA aircraft, including the classic Cessna 172. Many small aircraft can also be modified post-production to replace analogue instruments. Glass cockpits are also popular as a retrofit for older private jets and turboprops such as Dassault Falcons, Raytheon Hawkers, Bombardier Challengers, Cessna Citations, Gulfstreams, King Airs, Learjets, Astras, and many others. Aviation service companies work closely with equipment manufacturers to address the needs of the owners of these aircraft. Consumer, research, hobby & recreational aviation Today, smartphones and tablets use mini-applications, or "apps", to remotely control complex devices over a WiFi radio interface. They demonstrate how the "glass cockpit" idea is being applied to consumer devices. Applications include toy-grade UAVs which use the display and touch screen of a tablet or smartphone to employ every aspect of the "glass cockpit" for instrument display, and fly-by-wire for aircraft control. Spaceflight The glass cockpit idea made news in 1980s trade magazines, like Aviation Week & Space Technology, when NASA announced that it would be replacing most of the electro-mechanical flight instruments in the space shuttles with glass cockpit components. The articles mentioned how glass cockpit components had the added benefit of being a few hundred pounds lighter than the original flight instruments and support systems used in the Space Shuttles. Atlantis was the first orbiter to be retrofitted with a glass cockpit, flying with it in 2000 on STS-101. Columbia was the second orbiter with a glass cockpit, on STS-109 in 2002, followed by Discovery in 2005 with STS-114, and Endeavour in 2007 with STS-118. NASA's Orion spacecraft will use glass cockpits derived from the Boeing 787 Dreamliner's. Safety As aircraft operation depends on glass cockpit systems, flight crews must be trained to deal with failures. The Airbus A320 family has seen fifty incidents in which several flight displays were lost. On 25 January 2008, United Airlines Flight 731 experienced a serious glass-cockpit blackout, losing half of the Electronic Centralised Aircraft Monitor (ECAM) displays as well as all radios, transponders, Traffic Collision Avoidance System (TCAS), and attitude indicators. The pilots were able to land at Newark Airport without radio contact in good weather and daylight conditions. Airbus has offered an optional fix, which the US National Transportation Safety Board (NTSB) has suggested to the US Federal Aviation Administration (FAA) should be made mandatory, but the FAA has yet to make it a requirement. A preliminary NTSB factsheet is available. Due to the possibility of a blackout, glass cockpit aircraft also have an integrated standby instrument system that includes (at a minimum) an artificial horizon, altimeter and airspeed indicator. 
It is electronically separate from the main instruments and can run for several hours on a backup battery. In 2010, the NTSB published a study of 8,000 general aviation light aircraft. The study found that, although aircraft equipped with glass cockpits had a lower overall accident rate, they also had a larger chance of being involved in a fatal accident. The NTSB Chairman commented publicly on these findings in response to the study.
Technology
Aircraft components
null
482534
https://en.wikipedia.org/wiki/Gabapentin
Gabapentin
Gabapentin, sold under the brand name Neurontin among others, is an anticonvulsant medication primarily used to treat neuropathic pain and also for partial seizures of epilepsy. It is a commonly used medication for the treatment of neuropathic pain caused by diabetic neuropathy, postherpetic neuralgia, and central pain. It is moderately effective: about 30–40% of those given gabapentin for diabetic neuropathy or postherpetic neuralgia have a meaningful benefit. Gabapentin, like other gabapentinoid drugs, acts by decreasing the activity of the α2δ-1 protein, coded by the CACNA2D1 gene, which was first described as an auxiliary subunit of voltage-gated calcium channels (see Pharmacodynamics below). By binding to α2δ-1, gabapentin reduces the release of excitatory neurotransmitters (primarily glutamate) and, as a result, reduces excess excitation of neuronal networks in the spinal cord and brain. Sleepiness and dizziness are the most common side effects. Serious side effects include respiratory depression and allergic reactions. As with all other antiepileptic drugs approved by the FDA, gabapentin is labeled for an increased risk of suicide. Lower doses are recommended in those with kidney disease. Gabapentin was first approved for use in the United Kingdom in 1993. It has been available as a generic medication in the United States since 2004. It is the first of several drugs with a similar structure and mechanism, collectively called gabapentinoids. In 2022, it was the tenth most commonly prescribed medication in the United States, with more than 40 million prescriptions. During the 1990s, Parke-Davis, a subsidiary of Pfizer, used a number of illegal techniques to encourage physicians in the United States to prescribe gabapentin for unapproved uses. The company has paid out millions of dollars to settle lawsuits regarding these activities. Medical uses Gabapentin is recommended for use in focal seizures and neuropathic pain. Gabapentin is prescribed off-label in the US and the UK, for example, for the treatment of non-neuropathic pain, anxiety disorders, sleep problems and bipolar disorder. In recent years, gabapentin has seen increased use, particularly in the elderly. There is concern regarding gabapentin's off-label use due to the lack of strong scientific evidence for its efficacy in multiple conditions, its proven side effects and its potential for misuse and physical/psychological dependency. Seizures Gabapentin is approved for the treatment of focal seizures; however, it is not effective for generalized epilepsy. Neuropathic pain Gabapentin is recommended as a first-line treatment for chronic neuropathic pain by various medical authorities. This is a general recommendation applicable to all neuropathic pain syndromes except for trigeminal neuralgia, where it may be used as a second- or third-line agent. Regarding specific diagnoses, a systematic review has found evidence for gabapentin to provide pain relief for some people with postherpetic neuralgia and diabetic neuropathy. Gabapentin is approved for the former indication in the US. In addition to these two neuropathies, the European Federation of Neurological Societies guideline notes gabapentin's effectiveness for central pain. A combination of gabapentin with an opioid or nortriptyline may work better than either drug alone. 
Gabapentin shows substantial benefit (at least 50% pain relief or a patient global impression of change (PGIC) "very much improved") for neuropathic pain (postherpetic neuralgia or peripheral diabetic neuropathy) in 30–40% of subjects treated as compared to those treated with placebo. Evidence finds little or no benefit and significant risk in those with chronic low back pain or sciatica. Gabapentin is not effective in HIV-associated sensory neuropathy and neuropathic pain due to cancer. Anxiety There is a small amount of research on the use of gabapentin for the treatment of anxiety disorders. Gabapentin is effective for the long-term treatment of social anxiety disorder and in reducing preoperative anxiety. In a controlled trial of breast cancer survivors with anxiety, and a trial for social phobia, gabapentin significantly reduced anxiety levels. For panic disorder, gabapentin has produced mixed results. Sleep Gabapentin is effective in treating sleep disorders such as insomnia and restless legs syndrome that are the result of an underlying illness, but comes with some risk of discontinuation and withdrawal symptoms after prolonged use at higher doses. Gabapentin enhances slow-wave sleep in people with primary insomnia. It also improves sleep quality by elevating sleep efficiency and decreasing spontaneous arousal. Drug dependence Gabapentin is moderately effective in reducing the symptoms of alcohol withdrawal and associated craving. The evidence in favor of gabapentin is weak in the treatment of alcoholism: it does not contribute to the achievement of abstinence, and the data on the relapse of heavy drinking and percent of days abstinent do not robustly favor gabapentin; it only decreases the percent days of heavy drinking. Gabapentin is ineffective in cocaine dependence and methamphetamine use, and it does not increase the rate of smoking cessation. While some studies indicate that gabapentin does not significantly reduce the symptoms of opiate withdrawal, there is increasing evidence that gabapentinoids are effective in controlling some of the symptoms during opiate detoxification. A clinical study in Iran, where heroin dependence is a significant social and public health problem, showed gabapentin produced positive results during an inpatient therapy program, particularly by reducing opioid-induced hyperalgesia and drug craving. There is insufficient evidence for its use in cannabis dependence. Other Gabapentin is recommended as a first-line treatment of the acquired pendular nystagmus, torsional nystagmus, and infantile nystagmus; however, it does not work in periodic alternating nystagmus. Gabapentin decreases the frequency of hot flashes in both menopausal women and people with breast cancer. However, antidepressants have similar efficacy, and treatment with estrogen more effectively prevents hot flashes. Gabapentin reduces spasticity in multiple sclerosis and is prescribed as one of the first-line options. It is an established treatment of restless legs syndrome. Gabapentin alleviates itching in kidney failure (uremic pruritus) and itching of other causes. It may be an option in essential or orthostatic tremor. Gabapentin does not appear to provide benefit for bipolar disorder, complex regional pain syndrome, post-surgical pain, or tinnitus, or prevent episodic migraine in adults. Contraindications Gabapentin should be used carefully and at lower doses in people with kidney problems due to possible accumulation and toxicity. 
It is unclear if it is safe during pregnancy or breastfeeding. Side effects Dizziness and somnolence are the most frequent side effects. Fatigue, ataxia, peripheral edema (swelling of extremities), and nystagmus are also common. A 2017 meta-analysis found that gabapentin also increased the risk of difficulties in mentation and visual disturbances as compared to a placebo. Gabapentin is associated with a weight gain of after 1.5 months of use. Case studies indicate that it may cause anorgasmia and erectile dysfunction, as well as myoclonus, which disappear after discontinuing gabapentin or replacing it with other medication. Fever, swollen glands that do not go away, eyes or skin turning yellow, unusual bruises or bleeding, unexpected muscle pain or weakness, rash, long-lasting stomach pain which may indicate an inflamed pancreas, hallucinations, anaphylaxis, respiratory depression, and increased suicidal ideation are rare but serious side effects. Suicide As with all antiepileptic drugs approved in the US, the gabapentin label contains a warning of an increased risk of suicidal thoughts and behaviors. This warning is based on a 2008 meta-analysis of all approved antiepileptic drugs, not on gabapentin alone. According to a meta-analysis of an insurance claims database, gabapentin use is associated with an approximately 40% increased risk of suicide, suicide attempt, and violent death as compared with the reference anticonvulsant drug topiramate. The risk is increased for people with bipolar disorder or epilepsy. Another study has shown an approximately doubled rate of suicide attempts and self-harm in people with bipolar disorder who are taking gabapentin versus those taking lithium. A large Swedish study suggests that gabapentinoids are associated with an increased risk of suicidal behaviour, unintentional overdoses, head/body injuries, and road traffic incidents and offences. On the other hand, a study published by the Harvard Data Science Review found that gabapentin was associated with a significantly reduced rate of suicide. Respiratory depression Serious breathing suppression, potentially fatal, may occur when gabapentin is taken together with opioids, benzodiazepines, or other depressants, or by people with underlying lung problems such as COPD. Gabapentin and opioids are commonly prescribed or abused together, and research indicates that the breathing suppression they cause is additive. For example, gabapentin use before joint replacement or laparoscopic surgery increased the risk of respiratory depression by 30–60%. A Canadian study showed that use of gabapentin and other gabapentinoids, whether for epilepsy, neuropathic pain, or other chronic pain, was associated with a 35–58% increased risk for severe exacerbation of pre-existing chronic obstructive pulmonary disease. Withdrawal and dependence Withdrawal symptoms typically occur 1–2 days after abruptly stopping gabapentin, almost always following extended use, and resemble those of benzodiazepine withdrawal, albeit in a less intense form. Agitation, confusion, and disorientation are the most frequently reported symptoms, followed by gastrointestinal complaints and sweating, and, more rarely, tremor, tachycardia, hypertension, and insomnia. In some cases, users who have taken gabapentin continuously for prolonged periods without breaks experience withdrawal seizures. All of these symptoms subside when gabapentin is reinstated or tapered off gradually at an appropriate rate. 
On its own, gabapentin does not appear to have substantial addictive potential. In human and animal experiments, it shows limited to no rewarding effects. The vast majority of people abusing gabapentin are current or former abusers of opioids or sedatives. In these persons, gabapentin can boost the opioid "high" as well as decrease commonly experienced opioid-withdrawal symptoms such as anxiety. Overdose Through excessive ingestion, accidental or otherwise, persons may experience overdose symptoms including drowsiness, sedation, blurred vision, slurred speech, somnolence, uncontrollable jerking motions, and anxiety. A very high amount taken is associated with breathing suppression, coma, and possibly death, particularly if combined with alcohol or opioids. Pharmacology Animal Models Gabapentin prevents seizures in a dose-related manner in several laboratory animal models. These models include spinal extensor seizures from low-intensity electroshock to the forebrain in mice, maximal electroshock in rats, spinal extensor seizures in DBA/2 mice with a genetic sensitivity to seizures induced by loud noise, and focal seizures in rats "kindled" by repeated prior electrical stimulation of the hippocampus. Gabapentin slightly increased spontaneous absence-like seizures in a genetically susceptible strain recorded with electroencephalography. All of these effects of gabapentin were seen at dosages at or below the threshold for producing ataxia. Gabapentin also has been tested in a wide variety of animal models that are relevant for analgesic actions. Generally, gabapentin does not prevent pain-related behaviors in models of acute nociceptive pain, but it prevents pain-related behaviors when animals are made sensitive by prior peripheral inflammation or peripheral nerve damage (inflammatory or neuropathic conditions). Pharmacodynamics Gabapentin is a ligand of the α2δ calcium channel subunit. The α2δ-1 protein is coded by the CACNA2D1 gene. α2δ was first described as an auxiliary protein connected to the main α1 subunit (the channel-forming protein) of high-voltage-activated voltage-dependent calcium channels (L-type, N-type, P/Q type, and R-type). The same α2δ protein has more recently been shown to interact directly with some NMDA-type and AMPA-type glutamate receptors at presynaptic sites and also with thrombospondin (an extracellular matrix protein secreted by astroglial cells). Gabapentin is not a direct calcium channel blocker: it exerts its actions by disrupting the regulatory function of α2δ and its interactions with other proteins. Gabapentin reduces the delivery of calcium channels from intracellular compartments to the cell membrane, reduces the activation of the channels by the α2δ subunit, decreases signaling leading to neurotransmitter release, and disrupts the interactions of α2δ not only with voltage-gated calcium channels but also with NMDA receptors, neurexins, and thrombospondin. These proteins are found as mutually interacting parts of the presynaptic active zone, where numerous protein molecules interact with each other to enable and to regulate the release of neurotransmitters from presynaptic vesicles into the synaptic space. Out of the four known isoforms of α2δ protein, gabapentin binds with similar high affinity to two: α2δ-1 and α2δ-2. All of the pharmacological properties of gabapentin tested to date are explained by its binding to just one isoform – α2δ-1. 
The endogenous α-amino acids L-leucine and L-isoleucine, which resemble gabapentin in chemical structure, bind α2δ with similar affinity to gabapentin and are present in human cerebrospinal fluid at micromolar concentrations. They may be the endogenous ligands of the α2δ subunit, and they competitively antagonize the effects of gabapentin. Accordingly, while gabapentin has nanomolar affinity for the α2δ subunit, its potency in vivo is in the low micromolar range, and competition for binding by endogenous L-amino acids is likely to be responsible for this discrepancy. Gabapentin is a potent activator of voltage-gated potassium channels KCNQ3 and KCNQ5, even at low nanomolar concentrations. However, this activation is unlikely to be the dominant mechanism of gabapentin's therapeutic effects. Gabapentin is structurally similar to the neurotransmitter glutamate and competitively inhibits branched-chain amino acid aminotransferase (BCAT), slowing down the synthesis of glutamate. In particular, it inhibits BCAT-1 at high concentrations (Ki = 1 mM), but not BCAT-2. At very high concentrations gabapentin can suppress the growth of cancer cells, presumably by affecting mitochondrial catabolism; however, the precise mechanism remains elusive. Even though gabapentin is a structural GABA analogue, and despite its name, it does not bind to GABA receptors, is not converted into GABA or another GABA receptor agonist in vivo, and does not modulate GABA transport or metabolism within the range of clinical dosing. In vitro, gabapentin has been found to inhibit the GABA aminotransferase enzyme only very weakly (Ki = 17–20 mM); this effect is so weak that it is not clinically relevant at prescribed doses. Pharmacokinetics Gabapentin is absorbed from the intestines by an active transport process mediated via an amino acid transporter, presumably LAT2. As a result, the pharmacokinetics of gabapentin are dose-dependent, with diminished bioavailability and delayed peak levels at higher doses. The oral bioavailability of gabapentin is approximately 80% at 100 mg administered three times daily (once every 8 hours), but decreases to 60% at 300 mg, 47% at 400 mg, 34% at 800 mg, 33% at 1,200 mg, and 27% at 1,600 mg, all with the same dosing schedule. Drugs that increase the transit time of gabapentin in the small intestine can increase its oral bioavailability; when gabapentin was co-administered with oral morphine, the oral bioavailability of a 600 mg dose of gabapentin increased by 50%. Gabapentin at a low dose of 100 mg has a Tmax (time to peak levels) of approximately 1.7 hours, while the Tmax increases to 3 to 4 hours at higher doses. Food does not significantly affect the Tmax of gabapentin and increases the Cmax and area-under-curve levels of gabapentin by approximately 10%. Gabapentin can cross the blood–brain barrier and enter the central nervous system. Gabapentin concentration in cerebrospinal fluid is approximately 9–14% of its blood plasma concentration. Due to its low lipophilicity, gabapentin requires active transport across the blood–brain barrier. LAT1 is highly expressed at the blood–brain barrier and transports gabapentin across it into the brain. As with intestinal absorption mediated by an amino acid transporter, the transport of gabapentin across the blood–brain barrier by LAT1 is saturable. Gabapentin does not bind to other drug transporters such as P-glycoprotein (ABCB1) or OCTN2 (SLC22A5). It is not significantly bound to plasma proteins (<1%). Gabapentin undergoes little or no metabolism. 
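The practical effect of this saturable, transporter-mediated absorption can be illustrated with a short back-of-the-envelope calculation. The sketch below simply multiplies each dose by the approximate bioavailability fraction quoted above; it is illustrative arithmetic only, not a pharmacokinetic model, and the dose–bioavailability pairs are the figures stated in this article.

```python
# Illustrative only: absorbed amount (mg) per single dose, using the approximate
# oral bioavailability figures quoted above for gabapentin's saturable absorption.
# This is simple arithmetic on the stated percentages, not a pharmacokinetic model.

bioavailability = {  # dose in mg -> approximate fraction absorbed
    100: 0.80,
    300: 0.60,
    400: 0.47,
    800: 0.34,
    1200: 0.33,
    1600: 0.27,
}

for dose_mg, fraction in bioavailability.items():
    absorbed_mg = dose_mg * fraction
    print(f"{dose_mg:>5} mg dose -> ~{absorbed_mg:.0f} mg absorbed ({fraction:.0%})")

# The output shows the less-than-proportional rise: a 16-fold increase in dose
# (100 mg -> 1,600 mg) yields only roughly a 5-fold increase in absorbed drug
# (~80 mg -> ~432 mg), consistent with saturable transporter-mediated uptake.
```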
Gabapentin is generally safe in people with liver cirrhosis. Gabapentin is eliminated unchanged in the urine. It has a relatively short elimination half-life, with a reported average value of 5 to 7 hours. Because of its short elimination half-life, gabapentin must be administered 3 to 4 times per day to maintain therapeutic levels. Gabapentin XR (brand name Gralise) is taken once a day. Chemistry Gabapentin is a 3,3-disubstituted derivative of GABA. Therefore, it is a GABA analogue, as well as a γ-amino acid. It is similar to several other compounds that collectively are called gabapentinoids. Specifically, it is a derivative of GABA with a pentyl disubstitution at the 3-position arranged so as to form a six-membered ring, hence the name gabapentin. After the formation of the ring, the amine and carboxylic groups are not in the same relative positions as they are in GABA; they are more conformationally constrained. Although it has been known for some time that gabapentin must bind to the α2δ-1 protein in order to act pharmacologically (see Pharmacodynamics), the three-dimensional structure of the α2δ-1 protein with gabapentin bound (or alternatively, with the native amino acid L-isoleucine bound) has only recently been obtained by cryo-electron microscopy. A figure of this drug-bound structure is shown in the Chemistry section of the entry on gabapentinoid drugs. This study confirms other findings showing that either compound can bind at a single extracellular site (somewhat distant from the calcium-conducting pore of the voltage-gated calcium channel α1 subunit) on the calcium channel and chemotaxis (Cache1) domain of α2δ-1. Synthesis A process for chemical synthesis and isolation of gabapentin with high yield and purity starts with conversion of 1,1-cyclohexanediacetic anhydride to 1,1-cyclohexanediacetic acid monoamide and is followed by a 'Hofmann' rearrangement in an aqueous solution of sodium hypobromite prepared in situ. History GABA is the principal inhibitory neurotransmitter in the mammalian brain. By the early 1970s, it was appreciated that there are two main classes of GABA receptors, GABAA and GABAB, and also that baclofen was an agonist of GABAB receptors. Gabapentin was designed, synthesized and tested in mice by researchers at the pharmaceutical company Goedecke AG in Freiburg, Germany (a subsidiary of Parke-Davis). It was meant to be an analogue of the neurotransmitter GABA that could more easily cross the blood–brain barrier. It was first synthesized in 1974/75 and described in 1975 by Satzinger and Hartenstein. The first published pharmacology findings were sedative properties and the prevention of seizures evoked in mice by the GABA antagonist thiosemicarbazide. Shortly after, gabapentin was shown in vitro to reduce the release of the neurotransmitter dopamine from slices of rat caudate nucleus (striatum). This study provided evidence that the action of gabapentin, unlike baclofen, did not arise from the GABAB receptor. Subsequently, more than 2,000 scientific papers have been published that contain the words "gabapentin pharmacology" or "pharmacology of gabapentin" (Google Scholar citation search). Initial clinical trials, using small numbers of subjects, targeted spasticity and migraine, but neither study had the statistical power to allow conclusions. 
In 1987, the first positive results with gabapentin were obtained in a clinical trial using three dose groups versus pre-treatment seizure frequency for 75 days, as add-on treatment in patients who still had seizures despite taking other medications. This study did not show statistically significant results, but it did show a strong dose-related trend to decreased frequency of seizures. Under the brand name Neurontin, it was first approved in the United Kingdom in May 1993, for the treatment of refractory epilepsy. Approval by the U.S. Food and Drug Administration followed in December 1993, also for use as an adjuvant (effective when added to other antiseizure drugs) medication to control partial seizures in adults; that indication was extended to children in 2000. Subsequently, gabapentin was approved in the United States for the treatment of pain from postherpetic neuralgia in 2002. A generic version of gabapentin first became available in the United States in 2004. An extended-release formulation of gabapentin for once-daily administration, under the brand name Gralise, was approved in the United States for the treatment of postherpetic neuralgia in January 2011. In recent years, gabapentin has been prescribed for an increasing range of disorders and is one of the more common medications used, particularly in elderly people. Society and culture Legal status United Kingdom Effective April 2019, the United Kingdom reclassified the drug as a class C controlled substance. United States Gabapentin is not a controlled substance under the federal Controlled Substances Act. Effective 1 July 2017, Kentucky classified gabapentin as a schedule V controlled substance statewide. Gabapentin is scheduled V drug in other states such as West Virginia, Tennessee, Alabama, Utah, and Virginia. Off-label promotion Although some small, non-controlled studies in the 1990s—mostly sponsored by gabapentin's manufacturer—suggested that treatment for bipolar disorder with gabapentin may be promising, the preponderance of evidence suggests that it is not effective. Franklin v. Parke-Davis case After the corporate acquisition of the original patent holder, the pharmaceutical company Pfizer admitted that there had been violations of FDA guidelines regarding the promotion of unproven off-label uses for gabapentin in the Franklin v. Parke-Davis case. While off-label prescriptions are common for many drugs, marketing of off-label uses of a drug is not. In 2004, Warner-Lambert (which subsequently was acquired by Pfizer) agreed to plead guilty for activities of its Parke-Davis subsidiary, and to pay $430 million in fines to settle civil and criminal charges regarding the marketing of Neurontin for off-label purposes. The 2004 settlement was one of the largest in U.S. history up to that point, and the first off-label promotion case brought successfully under the False Claims Act. Kaiser Foundation Hospitals and Kaiser Foundation Health Plan sued Pfizer Inc., alleging that the pharmaceutical company had misled Kaiser by recommending Neurontin as an off-label treatment for certain conditions (including bipolar disorder, migraines, and neuropathic pain). In 2010, a federal jury in Massachusetts ruled in Kaiser's favor, finding that Pfizer violated the federal Racketeer Influenced and Corrupt Organizations (RICO) Act and was liable for in damages, which was automatically trebled to just under $142.1 million. Aetna, Inc. and a group of employer health plans prevailed in their similar Neurontin-related claims against Pfizer. 
Pfizer appealed, but the U.S. Court of Appeals for the First Circuit upheld the verdict, and in 2013, the US Supreme Court declined to hear the case. Gabasync Gabasync, a treatment consisting of a combination of gabapentin and two other medications (flumazenil and hydroxyzine) as well as therapy, is an ineffective treatment promoted for methamphetamine addiction, though it had also been claimed to be effective for dependence on alcohol or cocaine. It was marketed as PROMETA. While the individual drugs had been approved by the FDA, their off-label use for addiction treatment has not. Gabasync was marketed by Hythiam, Inc. which is owned by Terren Peizer, a former junk bond salesman who has since been indicted for securities fraud relative to another company. Hythiam charges up to $15,000 per patient to license its use (of which half goes to the prescribing physician, and half to Hythiam). In November 2011, the results of a double-blind, placebo-controlled study (financed by Hythiam and carried out at UCLA) were published in the peer-reviewed journal Addiction. It concluded that Gabasync is ineffective: "The PROMETA protocol, consisting of flumazenil, gabapentin and hydroxyzine, appears to be no more effective than placebo in reducing methamphetamine use, retaining patients in treatment or reducing methamphetamine craving." Barrons, in a November 2005 article entitled "Curb Your Cravings For This Stock", wrote "If the venture works out for patients and the investing public, it'll be a rare success for Peizer, who's promoted a series of disappointing small-cap medical or technology stocks ... since his days at Drexel". Journalist Scott Pelley said to Peizer in 2007: "Depending and who you talk to, you're either a revolutionary or a snake oil salesman." 60 Minutes, NBC News, and The Dallas Morning News criticized Peizer after the company bypassed clinical studies and government approval when bringing to market Prometa; the addiction drug proved to be completely ineffective. Journalist Adam Feuerstein opined: "most of what Peizer says is dubious-sounding hype". Usage trends The period from 2008 to 2018 saw a significant increase in the consumption of gabapentinoids. A study published in Nature Communications in 2023 highlights this trend, demonstrating a notable escalation in sales of gabapentinoids. The study, which analyzed healthcare data across 65 countries/ regions, found that the consumption rate of gabapentinoids had doubled over the decade, driven by their use in a wide range of indications. Brand names Gabapentin was originally marketed under the brand name Neurontin. Since it became generic, it has been marketed worldwide using over 300 different brand names. An extended-release formulation of gabapentin for once-daily administration was introduced in 2011, for postherpetic neuralgia under the brand name Gralise. In the US, Neurontin is marketed by Viatris after Upjohn was spun off from Pfizer. Related drugs Parke-Davis developed a drug called pregabalin, which is related in structure to gabapentin, as a successor to gabapentin. Another similar drug atagabalin has been unsuccessfully tried by Pfizer as a treatment for insomnia. A prodrug form (gabapentin enacarbil) was approved by the U.S. Food and Drug Administration (FDA). Recreational use When taken in excess, gabapentin can induce euphoria, a sense of calm, a cannabis-like high, improved sociability, and reduced alcohol or cocaine cravings. 
Also known on the streets as "Gabbies", gabapentin was reported in 2017 to be increasingly abused and misused for these euphoric effects. About 1 percent of the responders to an Internet poll and 22 percent of those attending addiction facilities had a history of abuse of gabapentin. Gabapentin misuse, toxicity, and use in suicide attempts among adults in the US increased from 2013 to 2017. After Kentucky implemented stricter legislation regarding opioid prescriptions in 2012, there was an increase in gabapentin-only and multi-drug use from 2012 to 2015. The majority of these cases were from overdose in suspected suicide attempts. These rates were also accompanied by increases in abuse and recreational use. Withdrawal symptoms, often resembling those of benzodiazepine withdrawal, play a role in the physical dependence some users experience. Its misuse predominantly coincides with the usage of other CNS depressant drugs, namely opioids, benzodiazepines, and alcohol. Veterinary use In cats, gabapentin can be used as an analgesic in multi-modal pain management, anxiety medication to reduce stress during travel or vet visits, and anticonvulsant. Veterinarians may prescribe gabapentin as an anticonvulsant and pain reliever in dogs. It has beneficial effects for treating epilepsy, different kinds of pain (chronic, neuropathic, and post-operative pain), and anxiety, lip-licking behaviour, storm phobia, fear-based aggression. It is also used to treat chronic pain-associated nerve inflammation in horses and dogs. Side effects include tiredness and loss of coordination, but these effects generally go away within 24 hours of starting the medication.
Biology and health sciences
Specific drugs
Health
482626
https://en.wikipedia.org/wiki/Western%20Interior%20Seaway
Western Interior Seaway
The Western Interior Seaway (also called the Cretaceous Seaway, the Niobraran Sea, the North American Inland Sea, or the Western Interior Sea) was a large inland sea that split the continent of North America into two landmasses for 34 million years. The ancient sea, which existed from the early Late Cretaceous (100 Ma) to the earliest Paleocene (66 Ma), connected the Gulf of Mexico to the Arctic Ocean. The two land masses it created were Laramidia to the west and Appalachia to the east. At its largest extent, it was deep, wide and over long. Origin and geology By the late Cretaceous, Eurasia and the Americas had separated along the south Atlantic, and subduction on the west coast of the Americas had commenced, resulting in the Laramide orogeny, the early phase of growth of the modern Rocky Mountains. The Western Interior Seaway may be seen as a downwarping of the continental crust ahead of the growing Laramide/Rockies mountain chain. The earliest phase of the seaway began in the mid-Cretaceous when an arm of the Arctic Ocean transgressed south over western North America; this formed the Mowry Sea, so named for the Mowry Shale, an organic-rich rock formation. In the south, the Gulf of Mexico was originally an extension of the Tethys Ocean. In time, the southern embayment merged with the Mowry Sea in the late Cretaceous, forming a completed seaway, creating isolated environments for land animals and plants. Relative sea levels fell multiple times, as a margin of land temporarily rose above the water along the ancestral Transcontinental Arch, each time rejoining the separated, divergent land populations, allowing a temporary mixing of newer species before again separating the populations. At its largest, the Western Interior Seaway stretched from the Rockies east to the Appalachian Mountains, some wide. At its deepest, it may have been only deep, shallow in terms of seas. Two great continental watersheds drained into it from east and west, diluting its waters and bringing resources in eroded silt that formed shifting delta systems along its low-lying coasts. There was little sedimentation on the eastern shores of the seaway; the western boundary, however, consisted of a thick clastic wedge eroded eastward from the Sevier orogenic belt. The western shore was thus highly variable, depending on variations in sea level and sediment supply. Widespread carbonate deposition suggests that the seaway was warm and tropical, with abundant calcareous planktonic algae. Remnants of these deposits are found in northwest Kansas. A prominent example is Monument Rocks, an exposed chalk formation towering over the surrounding range land. The Western Interior Seaway is believed to have behaved similarly to a giant estuary in terms of water mass transport. Riverine inputs exited the seaway as coastal jets, while correspondingly drawing in water from the Tethys in the south and Boreal waters from the north. During the late Cretaceous, the Western Interior Seaway went through multiple periods of anoxia, when the bottom water was devoid of oxygen and the water column was stratified. At the end of the Cretaceous, continued Laramide uplift hoisted the sandbanks (sandstone) and muddy brackish lagoons (shale), thick sequences of silt and sandstone still seen today as the Laramie Formation, while low-lying basins between them gradually subsided. The Western Interior Seaway divided across the Dakotas and retreated south towards the Gulf of Mexico. 
This shrunken and final regressive phase is sometimes called the Pierre Seaway. During the early Paleocene, parts of the Western Interior Seaway still occupied areas of the Mississippi Embayment, submerging the site of present-day Memphis. Later transgression, however, was associated with the Cenozoic Tejas sequence, rather than with the previous event responsible for the seaway. Fauna The Western Interior Seaway was a shallow sea, filled with abundant marine life. Interior seaway denizens included predatory marine reptiles such as plesiosaurs, and mosasaurs. Other marine life included sharks such as Squalicorax, Cretoxyrhina, and the giant durophagous Ptychodus mortoni (believed to be  long); and advanced bony fish including Pachyrhizodus, Enchodus, and the massive long Xiphactinus, larger than any modern bony fish. Other sea life included invertebrates such as mollusks, ammonites, squid-like belemnites, and plankton including coccolithophores that secreted the chalky platelets that give the Cretaceous its name, foraminiferans and radiolarians. The seaway was home to early birds, including the flightless Hesperornis that had stout legs for swimming through water and tiny wings used for marine steering rather than flight; and the tern-like Ichthyornis, an early avian with a toothy beak. Ichthyornis shared the sky with large pterosaurs such as Nyctosaurus and Pteranodon. Pteranodon fossils are very common; it was probably a major participant in the surface ecosystem, though it was found in only the southern reaches of the seaway. Inoceramids (oyster-like bivalve molluscs) were well-adapted to life in the oxygen-poor bottom mud of the seaway. These left abundant fossils in the Kiowa, Greenhorn, Niobrara, Mancos, and Pierre formations. There is great variety in the shells and the many distinct species have been dated and can be used to identify specific beds in those rock formations of the seaway. Many species can easily fit in the palm of the hand, while some like Inoceramus (Haploscapha) grandis could be well over a meter in diameter. Entire schools of fish sometimes sought shelter within the shell of the giant Platyceramus. The shells of the genus are known for being composed of prismatic calcitic crystals that grew perpendicular to the surface, and fossils often retain a pearly luster.
Physical sciences
Paleogeography
Earth science
482629
https://en.wikipedia.org/wiki/Scavenger
Scavenger
Scavengers are animals that consume dead organisms that have died from causes other than predation or have been killed by other predators. While scavenging generally refers to carnivores feeding on carrion, it is also a herbivorous feeding behavior. Scavengers play an important role in the ecosystem by consuming dead animal and plant material. Decomposers and detritivores complete this process, by consuming the remains left by scavengers. Scavengers aid in overcoming fluctuations of food resources in the environment. The process and rate of scavenging is affected by both biotic and abiotic factors, such as carcass size, habitat, temperature, and seasons. Etymology Scavenger is an alteration of scavager, from Middle English skawager meaning "customs collector", from skawage meaning "customs", from Old North French escauwage meaning "inspection", from schauwer meaning "to inspect", of Germanic origin; akin to Old English scēawian and German schauen meaning "to look at", and modern English "show" (with semantic drift). Types of scavengers (animals) Obligate scavenging (subsisting entirely or mainly on dead animals) is rare among vertebrates, due to the difficulty of finding enough carrion without expending too much energy. Well-known invertebrate scavengers of animal material include burying beetles and blowflies, which are obligate scavengers, and yellowjackets. Fly larvae are also common scavengers for organic materials at the bottom of freshwater bodies. For example, Tokunagayusurika akamusi is a species of midge fly whose larvae live as obligate scavengers at the bottom of lakes and whose adults almost never feed and only live up to a few weeks. Most scavenging animals are facultative scavengers that gain most of their food through other methods, especially predation. Many large carnivores that hunt regularly, such as hyenas and jackals, but also animals rarely thought of as scavengers, such as African lions, leopards, and wolves, will scavenge if given the chance. They may also use their size and ferocity to intimidate the original hunters (the cheetah is a notable victim, rather than a perpetrator). Almost all scavengers above insect size are predators and will hunt if not enough carrion is available, as few ecosystems provide enough dead animals year-round to keep their scavengers fed on that alone. Scavenging wild dogs and crows frequently exploit roadkill. Scavengers of dead plant material include termites that build nests in grasslands and then collect dead plant material for consumption within the nest. The interaction between scavenging animals and humans is seen today most commonly in suburban settings with animals such as opossums, polecats and raccoons. In some African towns and villages, scavenging from hyenas is also common. In prehistoric times, the species Tyrannosaurus rex may have been an apex predator, preying upon hadrosaurs, ceratopsians, and possibly juvenile sauropods, although some experts have suggested the dinosaur was primarily a scavenger. The debate about whether Tyrannosaurus was an apex predator or scavenger was among the longest-running feuds in paleontology; however, most scientists now agree that Tyrannosaurus was an opportunistic carnivore, acting mostly as a predator but also scavenging when the opportunity arose. Recent research also shows that while an adult T. rex would energetically gain little through scavenging, smaller theropods of approximately might have gained levels similar to those of hyenas, though not enough for them to rely on scavenging. 
Other research suggests that carcasses of giant sauropods may have made scavenging much more profitable to carnivores than it is now. For example, a single 40-tonne Apatosaurus carcass would have been worth roughly 6 years of calories for an average allosaur. As a result of this resource oversupply, it is possible that some theropods evolved to get most of their calories by scavenging giant sauropod carcasses, and may not have needed to consistently hunt in order to survive. The same study suggested that theropods in relatively sauropod-free environments, such as tyrannosaurs, were not exposed to the same type of carrion oversupply, and were therefore forced to hunt in order to survive. Animals which consume feces, such as dung beetles, are referred to as coprovores. Animals that collect small particles of dead organic material of both animal and plant origin are referred to as detritivores. Ecological function Scavengers play a fundamental role in the environment through the removal of decaying organisms, serving as a natural sanitation service. While microscopic and invertebrate decomposers break down dead organisms into simple organic matter which is used by nearby autotrophs, scavengers help conserve energy and nutrients obtained from carrion within the upper trophic levels, and are able to disperse the energy and nutrients farther away from the site of the carrion than decomposers. Scavenging unites animals which normally would not come into contact, and results in the formation of highly structured and complex communities which engage in nonrandom interactions. Scavenging communities function in the redistribution of energy obtained from carcasses and in reducing diseases associated with decomposition. Oftentimes, scavenger communities differ in consistency due to carcass size and carcass types, as well as by seasonal effects, as a consequence of differing invertebrate and microbial activity. Competition for carrion results in the inclusion or exclusion of certain scavengers from access to carrion, shaping the scavenger community. When carrion decomposes at a slower rate during cooler seasons, competition between scavengers decreases, while the number of scavenger species present increases. Alterations in scavenging communities may result in drastic changes to the scavenging community in general, reduce ecosystem services, and have detrimental effects on animals and humans. The reintroduction of gray wolves (Canis lupus) into Yellowstone National Park in the United States caused drastic changes to the prevalent scavenging community, resulting in the provision of carrion to many mammalian and avian species. Likewise, the reduction of vulture species in India led to an increase in opportunistic species such as feral dogs and rats. The presence of both species at carcasses resulted in an increase of diseases such as rabies and bubonic plague in wildlife and livestock, as feral dogs and rats are transmitters of such diseases. Furthermore, the decline of vulture populations in India has been linked to increased rates of anthrax in humans due to the handling and ingestion of infected livestock carcasses. An increase in disease transmission has been observed in mammalian scavengers in Kenya due to the decrease in vulture populations in the area, as the decrease in vulture populations resulted in an increase of the number of mammalian scavengers at a given carcass along with the time spent at a carcass. 
Disease transmission Scavenging may provide both direct and indirect routes for transmitting disease between animals. Scavengers of infected carcasses may become hosts for certain pathogens and consequently vectors of disease themselves. An example of this phenomenon is the increased transmission of tuberculosis observed when scavengers engage in eating infected carcasses. Likewise, the ingestion of bat carcasses infected with rabies by striped skunks (Mephitis mephitis) resulted in increased infection of these organisms with the virus. Various bird species are major vectors of disease transmission, with outbreaks being influenced by such carrier birds and their environment. An avian cholera outbreak from 2006 to 2007 off the coast of Newfoundland, Canada, resulted in the mortality of many marine bird species. The transmission, perpetuation and spread of the outbreak were mainly restricted to gull species that scavenge for food in the area. Similarly, an increase in transmission of avian influenza virus to chickens by domestic ducks from Indonesian farms that were permitted to scavenge the surrounding areas was observed in 2007. The scavenging of ducks in rice paddy fields in particular resulted in increased contact with other bird species feeding on leftover rice, which may have contributed to increased infection and transmission of the avian influenza virus. The domestic ducks may not have demonstrated symptoms of infection themselves, though they were observed to excrete high concentrations of the avian influenza virus. Threats Many species that scavenge face persecution globally. Vultures, in particular, have faced intense persecution and threats from humans. Before its ban by regional governments in 2006, the veterinary drug diclofenac caused at least a 95% decline of Gyps vultures in Asia. Habitat loss and food shortage have contributed to the decline of vulture species in West Africa due to the growing human population and over-hunting of vulture food sources, as well as changes in livestock husbandry. Poisoning certain predators to increase the number of game animals is still a common hunting practice in Europe and contributes to the poisoning of vultures when they consume the carcasses of poisoned predators. Benefits to humans Highly efficient scavengers, also known as dominant or apex scavengers, can have benefits to humans. Increases in dominant scavenger populations, such as vultures, can reduce populations of smaller opportunistic scavengers, such as rats. These smaller scavengers are often pests and disease vectors. In humans In the 1980s, Lewis Binford suggested that early humans primarily obtained meat via scavenging, not through hunting. In 2010, Dennis Bramble and Daniel Lieberman proposed that early carnivorous human ancestors subsequently developed long-distance running behaviors which improved the ability to scavenge and hunt: they could reach scavenging sites more quickly and also pursue a single animal until it could be safely killed at close range due to exhaustion and hyperthermia. In Tibetan Buddhism, the practice of excarnation—that is, the exposure of dead human bodies to carrion birds and/or other scavenging animals—is the distinctive characteristic of sky burial, which involves the dismemberment of human cadavers whose remains are fed to vultures, and is traditionally the main funerary rite (alongside cremation) used to dispose of the human body. 
A similar funerary practice that features excarnation can be found in Zoroastrianism; in order to prevent the pollution of the sacred elements (fire, earth, and water) from contact with decomposing bodies, human cadavers are exposed on the Towers of Silence to be eaten by vultures and wild dogs. Studies in behavioral ecology and ecological epidemiology have shown that cannibalistic necrophagy, although rare, has been observed as a survival behavior in several social species, including anatomically modern humans; however, episodes of human cannibalism occur rarely in most human societies. Many instances have occurred in human history, especially in times of war and famine, in which necrophagy and human cannibalism emerged as survival behaviors, although anthropologists report the use of ritual cannibalism among funerary practices and as the preferred means of disposal of the dead in some tribal societies.
Biology and health sciences
Ethology
null
482869
https://en.wikipedia.org/wiki/Sulfite
Sulfite
Sulfites or sulphites are compounds that contain the sulfite ion (systematic name: sulfate(IV) ion). The sulfite ion is the conjugate base of bisulfite. Although its acid (sulfurous acid) is elusive, its salts are widely used. Sulfites are substances that naturally occur in some foods and the human body. They are also used as regulated food additives. When in food or drink, sulfites are often lumped together with sulfur dioxide. Structure The structure of the sulfite anion can be described with three equivalent resonance structures. In each resonance structure, the sulfur atom is double-bonded to one oxygen atom with a formal charge of zero (neutral), and sulfur is singly bonded to the other two oxygen atoms, which each carry a formal charge of −1, together accounting for the −2 charge on the anion. There is also a non-bonded lone pair on the sulfur, so the structure predicted by VSEPR theory is trigonal pyramidal, as in ammonia (NH3). In the hybrid resonance structure, the S−O bonds are equivalently of bond order one and one-third. Evidence from 17O NMR spectroscopic data suggests that protonation of the sulfite ion gives a mixture of two isomers, with the proton attached either to the sulfur atom or to one of the oxygen atoms. Commercial uses Sulfites are used as a food preservative or enhancer. They may come in various forms, such as: sulfur dioxide, which is not a sulfite but a closely related oxide; potassium bisulfite or potassium metabisulfite; and sodium bisulfite, sodium metabisulfite or sodium sulfite. Wine Sulfites occur naturally in all wines to some extent. Sulfites are commonly introduced to arrest fermentation at a desired time, and may also be added to wine as preservatives to prevent spoilage and oxidation at several stages of winemaking. Sulfur dioxide (SO2) protects wine not only from oxidation, but also from bacteria. Organic wines are not necessarily sulfite-free, but generally have lower amounts, and regulations stipulate lower maximum sulfite contents for these wines. In general, white wines contain more sulfites than red wines and sweeter wines contain more sulfites than drier ones. In the United States, wines bottled after mid-1987 must have a label stating that they contain sulfites if they contain more than 10 parts per million (ppm). In the European Union an equivalent regulation came into force in November 2005. This includes sulfur dioxide, and the limit is expressed in milligrams per kilogram or per litre of sulfur dioxide equivalent. In 2012, a new regulation for organic wines came into force. In the United Kingdom, similar laws apply. Bottles of wine that contain over 10 mg/L (ppm) of "sulfites" (or sulfur dioxide) are required to bear "contains sulphites" on the label. This does not differ if sulfites are naturally occurring or added in the winemaking process. Other foods Sulfites are often used as preservatives in dried fruits, preserved radish, and dried potato products. Most beers no longer contain sulfites, although some alcoholic ciders contain them. Although shrimp are sometimes treated with sulfites on fishing vessels, the chemical may not appear on the label. In 1986, the Food and Drug Administration in the United States banned the addition of sulfites to all fresh fruit and vegetables that are eaten raw. E numbers Several E numbers are assigned to sulfites used as food additives. Health effects Allergic reactions to sulfites appear to be very rare in the general population, but more common in hyperallergic individuals. Sulfites are counted among the top nine food allergens, but a reaction to sulfite is not a true allergy. 
Some people have positive skin allergy tests to sulfites, indicating true (IgE-mediated) allergy. Chronic skin conditions in the hands, perineum, and face have been reported in individuals who regularly use cosmetics or medications containing sulfites. Occupational exposure to sulfites has been reported to cause persistent skin symptoms. Sulfites may cause breathing difficulty within minutes of eating a food containing them. Asthmatics and possibly people with salicylate sensitivity (or aspirin sensitivity) are at an elevated risk for reaction to sulfites. Anaphylaxis and life-threatening reactions are rare. Other potential symptoms include sneezing, swelling of the throat, hives, and migraine. A 2017 study has shown negative impacts of sulfites on bacteria found in the human microbiome. Use and labeling regulations In 1986, the U.S. Food and Drug Administration banned the use of sulfites as preservatives on foods intended to be eaten fresh (such as salad ingredients). This has contributed to the increased use of erythorbic acid and its salts as preservatives. Sulfites also cannot be added to foods high in vitamin B1, such as meats, because sulfites can destroy vitamin B1 in foods. Generally, U.S. labeling regulations do not require products to indicate the presence of sulfites in foods unless they are added specifically as a preservative; still, many companies voluntarily label sulfite-containing foods. Sulfites used in food processing (but not as a preservative) are required to be listed if they are not incidental additives (21 CFR 101.100(a)(3)), and if there are more than 10 ppm in the finished product (21 CFR 101.100(a)(4)). Sulfites that are allowed to be added to food in the US are sulfur dioxide, sodium sulfite, sodium bisulfite, potassium bisulfite, sodium metabisulfite, and potassium metabisulfite. Products likely to contain sulfites at less than 10 ppm (fruits and alcoholic beverages) do not require ingredients labels, and the presence of sulfites is usually undisclosed. In Australia and New Zealand, sulfites must be declared in the statement of ingredients when present in packaged foods in concentrations of 10 mg/kg (ppm) or more as an ingredient; or as an ingredient of a compound ingredient; or as a food additive or component of a food additive; or as a processing aid or component of a processing aid. Sulfites that can be added to foods in Canada are potassium bisulfite, potassium metabisulfite, sodium bisulfite, sodium dithionite, sodium metabisulfite, sodium sulfite, sulfur dioxide and sulfurous acid. These can also be declared using the common names sulfites, sulfates, or sulfiting agents. In the European Union, EU law requires food labels to indicate "contains sulfites" (when exceeding 10 milligrams per kilogram or per litre) without specifying the amount. Metabolic diseases High sulfite content in the blood and urine of babies can be caused by molybdenum cofactor deficiency disease, which leads to neurological damage and early death unless treated. Treatment, requiring daily injections, became available in 2009.
Physical sciences
Sulfuric oxyanions
Chemistry
482952
https://en.wikipedia.org/wiki/Frequency%20mixer
Frequency mixer
In electronics, a mixer, or frequency mixer, is an electrical circuit that creates new frequencies from two signals applied to it. In its most common application, two signals are applied to a mixer, and it produces new signals at the sum and difference of the original frequencies. Other frequency components may also be produced in a practical frequency mixer. Mixers are widely used to shift signals from one frequency range to another, a process known as heterodyning, for convenience in transmission or further signal processing. For example, a key component of a superheterodyne receiver is a mixer used to move received signals to a common intermediate frequency. Frequency mixers are also used to modulate a carrier signal in radio transmitters. Types The essential characteristic of a mixer is that it produces a component in its output which is the product of the two input signals. Both active and passive circuits can realize mixers. Passive mixers use one or more diodes and rely on their nonlinear current–voltage relationship to provide the multiplying element. In a passive mixer, the desired output signal is always of lower power than the input signals. Active mixers use an amplifying device (such as a transistor or vacuum tube) that may increase the strength of the product signal. Active mixers improve isolation between the ports, but may have higher noise and more power consumption. An active mixer can be less tolerant of overload. Mixers may be built of discrete components, may be part of integrated circuits, or can be delivered as hybrid modules. Mixers may also be classified by their topology: An unbalanced mixer, in addition to producing a product signal, allows both input signals to pass through and appear as components in the output. A single balanced mixer is arranged with one of its inputs applied to a balanced (differential) circuit so that either the local oscillator (LO) or signal input (RF) is suppressed at the output, but not both. A double balanced mixer has both its inputs applied to differential circuits, so that neither of the input signals and only the product signal appears at the output. Double balanced mixers are more complex and require higher drive levels than unbalanced and single balanced designs. Selection of a mixer type is a trade-off for a particular application. Mixer circuits are characterized by their properties such as conversion gain (or loss), noise figure and nonlinearity. Nonlinear electronic components that are used as mixers include diodes and transistors biased near cutoff. Linear, time-varying devices, such as analog multipliers, provide superior performance, as it is only in true multipliers that the output amplitude is proportional to the input amplitude, as required for linear conversion. Ferromagnetic-core inductors driven into saturation have also been used. In nonlinear optics, crystals with nonlinear characteristics are used to mix two frequencies of laser light to create optical heterodynes. Diode A diode can be used to create a simple unbalanced mixer. The current i through an ideal semiconductor diode is primarily an exponential function of the voltage v across it: i = Is(e^(qv/(nkT)) − 1), where Is is the saturation current, q is the charge of an electron, n is the nonideality factor, k is the Boltzmann constant, and T is the absolute temperature. The exponential can be expanded as the power series e^x = 1 + x + x^2/2! + x^3/3! + ..., where the ellipsis represents all higher powers of the argument. 
Because the higher-power terms are divided by increasingly large factorials, they can be assumed to be negligible for small signals, so an approximation using just the first three terms is e^x ≈ 1 + x + x^2/2. Suppose that the sum of the two input signals, v1 + v2, is applied to a diode, and that an output voltage is generated that is proportional to the current through the diode (perhaps by providing the voltage that is present across a resistor in series with the diode). Then, disregarding the constants in the diode equation, the output voltage will be proportional to 1 + (v1 + v2) + (1/2)(v1 + v2)^2. In addition to the original two signals v1 and v2, this output voltage contains the term (v1 + v2)^2, which when rewritten as v1^2 + 2 v1 v2 + v2^2 is revealed to contain the multiplication of the original two signals, v1 v2. If two sinusoids of different frequencies are fed as input into the diode, such that v1 = A1 sin(ω1 t) and v2 = A2 sin(ω2 t), then the output becomes proportional to 1 + A1 sin(ω1 t) + A2 sin(ω2 t) + (1/2)(A1 sin(ω1 t) + A2 sin(ω2 t))^2. Expanding the square term yields, in addition to components at twice each input frequency, the cross product A1 A2 sin(ω1 t) sin(ω2 t). According to the prosthaphaeresis product-to-sum identity sin(a) sin(b) = (1/2)[cos(a − b) − cos(a + b)], this product can be expressed as the sum of two sinusoids at the sum and difference frequencies of ω1 and ω2, namely (A1 A2/2)[cos((ω1 − ω2)t) − cos((ω1 + ω2)t)]. These new frequencies are in addition to the original frequencies ω1 and ω2. A narrowband filter may be used to remove undesired frequencies from the output signal. Switching Another form of mixer operates by switching, which is equivalent to multiplication of an input signal by a square wave. In a double-balanced mixer, the (smaller) input signal is alternately inverted or non-inverted according to the phase of the local oscillator (LO). That is, the input signal is effectively multiplied by a square wave that alternates between +1 and -1 at the LO rate. In a single-balanced switching mixer, the input signal is alternately passed or blocked. The input signal is thus effectively multiplied by a square wave that alternates between 0 and +1. This results in frequency components of the input signal being present in the output together with the product, since the multiplying signal can be viewed as a square wave with a DC offset (i.e. a zero frequency component). The aim of a switching mixer is to achieve linear operation by means of hard switching, driven by the local oscillator. In the frequency domain, the switching mixer operation leads to the usual sum and difference frequencies, but also to further terms, e.g. at ±3fLO, ±5fLO, etc. The advantage of a switching mixer is that it can achieve (with the same effort) a lower noise figure (NF) and larger conversion gain. This is because the switching diodes or transistors act either like a small resistor (switch closed) or a large resistor (switch open), and in both cases only minimal noise is added. From the circuit perspective, many multiplying mixers can be used as switching mixers, just by increasing the LO amplitude. So RF engineers often simply talk about mixers when they mean switching mixers. Applications The mixer circuit can be used not only to shift the frequency of an input signal as in a receiver, but also as a product detector, modulator, phase detector or frequency multiplier. For example, a communications receiver might contain two mixer stages for conversion of the input signal to an intermediate frequency and another mixer employed as a detector for demodulation of the signal.
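The square-law mixing derived above is straightforward to check numerically. The following Python sketch is illustrative only: the sample rate, the 7 kHz and 10 kHz test tones, and the signal amplitudes are arbitrary assumptions rather than values from this article. It drives the three-term approximation of the diode law with the sum of two sinusoids and lists the strongest spectral components, which appear at the difference (3 kHz) and sum (17 kHz) frequencies alongside the original tones and their second harmonics.

```python
# Minimal numerical sketch of square-law mixing (assumed example values,
# not figures from the article): pass two sine waves through the truncated
# diode characteristic 1 + x + x^2/2 and inspect the output spectrum.
import numpy as np

fs = 100_000                      # sample rate in Hz (assumed)
t = np.arange(0, 0.1, 1 / fs)     # 0.1 s of signal
f_rf, f_lo = 7_000.0, 10_000.0    # hypothetical RF and LO frequencies in Hz

v = 0.1 * np.sin(2 * np.pi * f_rf * t) + 0.1 * np.sin(2 * np.pi * f_lo * t)
i = 1 + v + 0.5 * v**2            # first three terms of the exponential

spectrum = np.abs(np.fft.rfft(i))
freqs = np.fft.rfftfreq(len(i), 1 / fs)

# The six strongest non-DC components: the difference (3 kHz), the two
# original tones (7 and 10 kHz), their second harmonics (14 and 20 kHz),
# and the sum (17 kHz).
peaks = freqs[np.argsort(spectrum[1:])[-6:] + 1]
print(sorted(int(round(f)) for f in peaks))
```

Raising the input amplitudes in this sketch makes the neglected higher-order terms of the exponential, and hence additional spurious products, more visible.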
Technology
Functional circuits
null
483010
https://en.wikipedia.org/wiki/Potassium%20permanganate
Potassium permanganate
Potassium permanganate is an inorganic compound with the chemical formula KMnO4. It is a purplish-black crystalline salt, which dissolves in water as K+ and MnO4− ions to give an intensely pink to purple solution. Potassium permanganate is widely used in the chemical industry and laboratories as a strong oxidizing agent, and also as a medication for dermatitis, for cleaning wounds, and general disinfection. It is on the World Health Organization's List of Essential Medicines. In 2000, worldwide production was estimated at 30,000 tons. Properties Potassium permanganate is the potassium salt of the tetrahedral transition metal oxo complex permanganate, in which four oxo ligands are bound to a manganese(VII) center. Structure KMnO4 forms orthorhombic crystals with lattice constants a = 910.5 pm, b = 572.0 pm, c = 742.5 pm. The overall motif is similar to that for barium sulfate, with which it forms solid solutions. In the solid (as in solution), each MnO4− centre is tetrahedral. The Mn–O distances are 1.62 Å. Color The purplish-black color of solid potassium permanganate, and the intensely pink to purple color of its solutions, is caused by its permanganate anion, which gets its color from a strong charge-transfer absorption band caused by excitation of electrons from oxo ligand orbitals to empty orbitals of the manganese(VII) center. Medical uses Mechanism of action Potassium permanganate functions as a strong oxidising agent. Through this mechanism it results in disinfection, astringent effects, and decreased smell. Clinical use Potassium permanganate is used for a number of skin conditions. These include fungal infections of the foot, impetigo, pemphigus, superficial wounds, dermatitis, and topical ulcers. Radioactive contamination of the skin can be cleaned with potassium permanganate and vigorous scrubbing. For topical ulcers it is used together with procaine benzylpenicillin. Typically it is used in skin conditions that produce a lot of liquid. It can be applied as a soaked dressing or a bath. It can be used in children and adults. Petroleum jelly may be used on the nails before soaking to prevent their discoloration. For treating eczema, it is recommended to use it for only a few days at a time due to the possibility of it irritating the skin. The U.S. Food and Drug Administration does not recommend its use in the crystal or tablet form. It should only be used in a diluted liquid form. Historical use Potassium permanganate was first made in the 1600s and came into common medical use at least as early as the 1800s. During World War I, Canadian soldiers were given potassium permanganate (to be applied mixed with an ointment) in an effort to prevent sexually transmitted infections. Some have attempted to bring about an abortion by putting it in the vagina, though this is not effective. Other historical uses have included an effort to wash out the stomach in those with strychnine or picrotoxin poisoning. Side effects Side effects from topical use may include irritation of the skin and discoloration of clothing. A harsh burn on a child from an undissolved tablet has been reported. Higher-concentration solutions can result in chemical burns. Therefore, the British National Formulary recommends 100 mg be dissolved in a liter of water before use to form a 1:10,000 (0.01%) solution. Wrapping the dressings soaked with potassium permanganate is not recommended. Potassium permanganate is toxic if taken by mouth. Side effects may include nausea, vomiting, and shortness of breath. 
If a sufficiently large amount (about 10 grams) is eaten, death may occur. Concentrated solutions, when drunk, have resulted in acute respiratory distress syndrome or swelling of the airway. Recommended measures for those who have ingested potassium permanganate include gastroscopy. Activated charcoal or medications to cause vomiting are not recommended. While medications like ranitidine and acetylcysteine may be used in toxicity, evidence for this use is poor. Pharmaceuticals In the United States, the FDA requires tablets of the medication to be sold by prescription. Potassium permanganate, however, does not have FDA-approved uses, and therefore non-medical-grade potassium permanganate is sometimes used for medical purposes. It is available under a number of brand names including Permasol, Koi Med Tricho-Ex, and Kalii permanganas RFF. It is occasionally called "Condy's crystals". Veterinary medicine Potassium permanganate may be used to prevent the spread of glanders among horses. Industrial and other uses Almost all applications of potassium permanganate exploit its oxidizing properties. As a strong oxidant that does not generate toxic byproducts, KMnO4 has many niche uses. Water treatment Potassium permanganate is used extensively in the water treatment industry. It is used as a regeneration chemical to remove iron and hydrogen sulfide (rotten egg smell) from well water via a "manganese greensand" filter. "Pot-Perm" is also obtainable at pool supply stores and is used additionally to treat wastewater. Historically it was used to disinfect drinking water and can turn the water pink. Modern hiking and survivalist guides advise against using potassium permanganate in the field because it is difficult to dose correctly. It currently finds application in the control of nuisance organisms such as zebra mussels in fresh water collection and treatment systems. Synthesis of organic compounds A major application of KMnO4 is as a reagent for the synthesis of organic compounds. Significant amounts are required for the synthesis of ascorbic acid, chloramphenicol, saccharin, isonicotinic acid, and pyrazinoic acid. KMnO4 is used in qualitative organic analysis to test for the presence of unsaturation. It is sometimes referred to as Baeyer's reagent after the German organic chemist Adolf von Baeyer. The reagent is an alkaline solution of potassium permanganate. Reaction with double or triple bonds (C=C or C≡C) causes the color to fade from purplish-pink to brown. Aldehydes and formic acid (and formates) also give a positive test. The test is antiquated. KMnO4 solution is a common thin layer chromatography (TLC) stain for the detection of oxidizable functional groups, such as alcohols, aldehydes, alkenes, and ketones. Such compounds result in a white to orange spot on TLC plates. Analytical use Potassium permanganate can be used to quantitatively determine the total oxidizable organic material in an aqueous sample. The value determined is known as the permanganate value. In analytical chemistry, a standardized aqueous solution of KMnO4 is sometimes used as an oxidizing titrant for redox titrations (permanganometry). As potassium permanganate is titrated, the solution becomes a light shade of purple, which darkens as excess of the titrant is added to the solution. In a related way, it is used as a reagent to determine the Kappa number of wood pulp. For the standardization of KMnO4 solutions, reduction by oxalic acid is often used. 
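To make the permanganometry mentioned above concrete, here is a short Python sketch of a typical standardization calculation against sodium oxalate. The 2:5 permanganate-to-oxalate stoichiometry in acidic solution is standard chemistry, but the sample mass, titre volume, and the helper function itself are hypothetical example values, not data from this article.

```python
# Standardizing a KMnO4 solution against sodium oxalate (Na2C2O4), a common
# primary standard. In acid: 2 MnO4- + 5 C2O4^2- + 16 H+ -> 2 Mn2+ + 10 CO2 + 8 H2O,
# so 2 mol of permanganate are consumed per 5 mol of oxalate.
# The numbers below are made-up example figures.

M_NA2C2O4 = 134.00  # molar mass of sodium oxalate in g/mol (approximate)

def kmno4_molarity(oxalate_mass_g: float, titre_volume_ml: float) -> float:
    """Return the KMnO4 concentration in mol/L from a single titration."""
    mol_oxalate = oxalate_mass_g / M_NA2C2O4
    mol_kmno4 = mol_oxalate * 2 / 5          # stoichiometric ratio 2:5
    return mol_kmno4 / (titre_volume_ml / 1000.0)

# Example: 0.1340 g of Na2C2O4 (1.000 mmol) required 20.00 mL of titrant.
print(f"{kmno4_molarity(0.1340, 20.00):.4f} mol/L")   # -> 0.0200 mol/L
```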
In agricultural chemistry, it is used for estimation of active carbon in soil. Aqueous, acidic solutions of KMnO4 are used to collect gaseous mercury in flue gas during stationary source emissions testing. In histology, potassium permanganate was used as a bleaching agent. Fruit preservation Ethylene absorbents extend storage time of bananas even at high temperatures. This effect can be exploited by packing bananas in polyethylene together with potassium permanganate. By removing ethylene by oxidation, the permanganate delays the ripening, increasing the fruit's shelf life up to 4 weeks without the need for refrigeration. The chemical reaction, in which ethylene (C2H4) is oxidised by potassium permanganate (KMnO4) to carbon dioxide (CO2), manganese oxide (MnO2) and potassium hydroxide (KOH), in the presence of water, is presented as follows: 3 C2H4 + 12 KMnO4 + 2 H2O → 6 CO2 + 2 H2O + 12 MnO2 + 12 KOH Survival kits Potassium permanganate is sometimes included in survival kits: as a hypergolic fire starter (when mixed with glycerol antifreeze from a car radiator); as a water sterilizer; and for creating distress signals on snow. Fire service Potassium permanganate is added to "plastic sphere dispensers" to create backfires, burnouts, and controlled burns. Polymer spheres resembling ping-pong balls containing small amounts of permanganate are injected with ethylene glycol and projected towards the area where ignition is desired, where they spontaneously ignite seconds later. Both handheld helicopter- unmanned aircraft systems (UAS) or boat-mounted plastic sphere dispensers are used. Other uses Potassium permanganate is one of the principal chemicals used in the film and television industries to "age" props and set dressings. Its ready conversion to brown MnO2 creates "hundred-year-old" or "ancient" looks on hessian cloth (burlap), ropes, timber and glass. Potassium permanganate can be used to oxidize cocaine paste to purify it and increase its stability. This led to the Drug Enforcement Administration launching Operation Purple in 2000, with the goal of monitoring the world supply of potassium permanganate; however, potassium permanganate derivatives and substitutes were soon used thereafter to avoid the operation. Potassium permanganate is used as an oxidizing agent in the synthesis of cocaine and methcathinone. Potassium permanganate is one of a number of possible treatments for Ichthyophthirius multifiliis (commonly known as "ich"), a parasite that infects and usually kills freshwater aquarium fish. History In 1659, Johann Rudolf Glauber fused a mixture of the mineral pyrolusite (manganese dioxide, MnO2) and potassium carbonate to obtain a material that, when dissolved in water, gave a green solution (potassium manganate) which slowly shifted to violet and then finally red. The reaction that produced the color changes that Glauber observed in his solution of potassium permanganate and potassium manganate (K2MnO4) is now known as the "chemical chameleon". This report represents the first description of the production of potassium permanganate. Just under 200 years later, London chemist Henry Bollmann Condy had an interest in disinfectants; he found that fusing pyrolusite with sodium hydroxide (NaOH) and dissolving it in water produced a solution with disinfectant properties. He patented this solution, and marketed it as 'Condy's Fluid'. Although effective, the solution was not very stable. This was overcome by using potassium hydroxide (KOH) rather than NaOH. 
This was more stable, and had the advantage of easy conversion to the equally effective potassium permanganate crystals. This crystalline material was known as 'Condy's crystals' or 'Condy's powder'. Potassium permanganate was comparatively easy to manufacture, so Condy was subsequently forced to spend considerable time in litigation to stop competitors from marketing similar products. Early photographers used it as a component of flash powder. It is now replaced with other oxidizers, due to the instability of permanganate mixtures. Preparation Potassium permanganate is produced industrially from manganese dioxide, which also occurs as the mineral pyrolusite. In 2000, worldwide production was estimated at 30,000 tonnes. The MnO2 is fused with potassium hydroxide and heated in air or with another source of oxygen, like potassium nitrate or potassium chlorate. This process gives potassium manganate: 2 MnO2 + 4 KOH + O2 -> 2 K2MnO4 + 2 H2O With sodium hydroxide, the end product is not sodium manganate but an Mn(V) compound, which is one reason why the potassium permanganate is more commonly used than sodium permanganate. Furthermore, the potassium salt crystallizes better. The potassium manganate is then converted into permanganate by electrolytic oxidation in alkaline media: 2 K2MnO4 + 2 H2O -> 2 KMnO4 + 2 KOH + H2 Other methods Although of no commercial importance, potassium manganate can be oxidized by chlorine or by disproportionation under acidic conditions. The chlorine oxidation reaction is 2 K2MnO4 + Cl2 -> 2 KMnO4 + 2 KCl and the acid-induced disproportionation reaction may be written as 3 K2MnO4 + 4 HCl -> 2 KMnO4 + MnO2 + 2 H2O + 4 KCl A weak acid such as carbonic acid is sufficient for this reaction: 3 K2MnO4 + 2 CO2 -> 2 KMnO4 + 2 K2CO3 + MnO2 Permanganate salts may also be generated by treating a solution of Mn2+ ions with strong oxidants such as lead dioxide (PbO2), sodium bismuthate (NaBiO3), or peroxydisulfate. Tests for the presence of manganese exploit the vivid violet color of permanganate produced by these reagents. Reactions Organic chemistry Dilute solutions of KMnO4 convert alkenes into diols. This behaviour is also used as a qualitative test for the presence of double or triple bonds in a molecule, since the reaction decolorizes the initially purple permanganate solution and generates a brown precipitate (MnO2). In this context, it is sometimes called Baeyer's reagent. However, bromine serves better in measuring unsaturation (double or triple bonds) quantitatively, since KMnO4, being a very strong oxidizing agent, can react with a variety of groups. Under acidic conditions, the alkene double bond is cleaved to give the appropriate carboxylic acid: CH3(CH2)17CH=CH2 + 2 KMnO4 + 3 H2SO4 -> CH3(CH2)17COOH + CO2 + 4 H2O + K2SO4 + 2 MnSO4 Potassium permanganate oxidizes aldehydes to carboxylic acids, illustrated by the conversion of n-heptanal to heptanoic acid: 5 C6H13CHO + 2 KMnO4 + 3 H2SO4 -> 5 C6H13COOH + 3 H2O + K2SO4 + 2 MnSO4 Even an alkyl group (with a benzylic hydrogen) on an aromatic ring is oxidized, e.g. toluene to benzoic acid. 5 C6H5CH3 + 6 KMnO4 + 9 H2SO4 -> 5 C6H5COOH + 14 H2O + 3 K2SO4 + 6 MnSO4 Glycols and polyols are highly reactive toward KMnO4. For example, addition of potassium permanganate to an aqueous solution of sugar and sodium hydroxide produces the chemical chameleon reaction, which involves dramatic color changes associated with the various oxidation states of manganese. 
A related vigorous reaction is exploited as a fire starter in survival kits. For example, a mixture of potassium permanganate and glycerol or pulverized glucose ignites readily. Its sterilizing properties are another reason for inclusion of KMnO4 in a survival kit. Ion exchange Treating a mixture of aqueous potassium permanganate with a quaternary ammonium salt results in ion exchange, precipitating the quat salt of permanganate. Solutions of these salts are sometimes soluble in organic solvents: KMnO4 + R4NCl -> R4NMnO4 + KCl Similarly, addition of a crown ether also gives a lipophilic salt. Reaction with acids and bases Permanganate reacts with concentrated hydrochloric acid to give chlorine and manganese(II): 2 KMnO4 + 16 HCl -> 2 MnCl2 + 5 Cl2 + 2 KCl + 8 H2O In neutral solution, permanganate slowly reduces to manganese dioxide (MnO2). This is the material that stains one's skin when handling KMnO4. KMnO4 reduces in alkaline solution to give green K2MnO4: 4 KMnO4 + 4 KOH -> 4 K2MnO4 + O2 + 2 H2O This reaction illustrates the relatively rare role of hydroxide as a reducing agent. Addition of concentrated sulfuric acid to potassium permanganate gives Mn2O7. Although no reaction may be apparent, the vapor over the mixture will ignite paper impregnated with alcohol. Potassium permanganate and sulfuric acid react to produce some ozone, which has a high oxidizing power and rapidly oxidizes the alcohol, causing it to combust. As the reaction also produces explosive Mn2O7, this should only be attempted with great caution. Thermal decomposition Solid potassium permanganate decomposes when heated: 2 KMnO4 -> K2MnO4 + MnO2 + O2 It is a redox reaction. Safety and handling Potassium permanganate poses risks as an oxidizer. Contact with skin can cause skin irritation and in some cases severe allergic reaction. It can also result in discoloration and clothing stains.
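As a small worked example of the thermal decomposition equation given above (2 KMnO4 -> K2MnO4 + MnO2 + O2), the following Python sketch estimates the mass of oxygen gas released by a given mass of potassium permanganate; the rounded molar masses and the 10 g sample are illustrative assumptions rather than figures from this article.

```python
# Oxygen yield from the thermal decomposition 2 KMnO4 -> K2MnO4 + MnO2 + O2.
# Molar masses are rounded textbook values; the 10 g input is an example.

M_KMNO4 = 39.10 + 54.94 + 4 * 16.00   # ~158.04 g/mol
M_O2 = 2 * 16.00                      # 32.00 g/mol

def oxygen_yield_g(kmno4_mass_g: float) -> float:
    """Mass of O2 released when the given mass of KMnO4 decomposes fully."""
    mol_kmno4 = kmno4_mass_g / M_KMNO4
    mol_o2 = mol_kmno4 / 2            # one mol of O2 per two mol of KMnO4
    return mol_o2 * M_O2

print(f"{oxygen_yield_g(10.0):.2f} g of O2 from 10.0 g of KMnO4")  # ~1.01 g
```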
Physical sciences
Metallic oxyanions
Chemistry
483060
https://en.wikipedia.org/wiki/Icebreaker
Icebreaker
An icebreaker is a special-purpose ship or boat designed to move and navigate through ice-covered waters, and provide safe waterways for other boats and ships. Although the term usually refers to ice-breaking ships, it may also refer to smaller vessels, such as the icebreaking boats that were once used on the canals of the United Kingdom. For a ship to be considered an icebreaker, it requires three traits most normal ships lack: a strengthened hull, an ice-clearing shape, and the power to push through sea ice. Icebreakers clear paths by pushing straight into frozen-over water or pack ice. The bending strength of sea ice is low enough that the ice breaks usually without noticeable change in the vessel's trim. In cases of very thick ice, an icebreaker can drive its bow onto the ice to break it under the weight of the ship. A buildup of broken ice in front of a ship can slow it down much more than the breaking of the ice itself, so icebreakers have a specially designed hull to direct the broken ice around or under the vessel. The external components of the ship's propulsion system (propellers, propeller shafts, etc.) are at greater risk of damage than the vessel's hull, so the ability of an icebreaker to propel itself onto the ice, break it, and clear the debris from its path successfully is essential for its safety. History Earliest icebreakers Prior to ocean-going ships, ice breaking technology was developed on inland canals and rivers using laborers with axes and hooks. The first recorded primitive icebreaker ship was a barge used by the Belgian town of Bruges in 1383 to help clear the town moat. The efforts of the ice-breaking barge were successful enough to warrant the town purchasing four such ships. Ice breaking barges continued to see use during the colder winters of the Little Ice Age with growing use in the Low Country where significant amounts of trade and transport of people and goods took place. In the 15th century the use of ice breakers in Flanders (Oudenaarde, Kortrijk, Ieper, Veurne, Diksmuide and Hulst) was already well established. The use of the ice breaking barges expanded in the 17th century where every town of some importance in the Low Country used some form of icebreaker to keep their waterways clear. Before the 17th century the specifications of icebreakers are unknown. The specifications for ice breaking vessels show that they were dragged by teams of horses and the heavy weight of the ship pushed down on the ice breaking it. They were used in conjunction with teams of men with axes and saws and the technology behind them didn't change much until the industrial revolution. Sailing ships in the polar waters Ice-strengthened ships were used in the earliest days of polar exploration. These were originally wooden and based on existing designs, but reinforced, particularly around the waterline with double planking to the hull and strengthening cross members inside the ship. Bands of iron were wrapped around the outside. Sometimes metal sheeting was placed at the bows, at the stern, and along the keel. Such strengthening was designed to help the ship push through ice and also to protect the ship in case it was "nipped" by the ice. Nipping occurs when ice floes around a ship are pushed against the ship, trapping it as if in a vise and causing damage. This vise-like action is caused by the force of winds and tides on ice formations. The first boats to be used in the polar waters were those of the Eskimos. 
Their kayaks are small human-powered boats with a covered deck, and one or more cockpits, each seating one paddler who strokes a single or double-bladed paddle. Such boats have no icebreaking capabilities, but they are light and well fit to carry over the ice. In the 9th and 10th centuries, the Viking expansion reached the North Atlantic, and eventually Greenland and Svalbard in the Arctic. Vikings, however, operated their ships in the waters that were ice-free for most of the year, in the conditions of the Medieval Warm Period. In the 11th century, in North Russia the coasts of the White Sea, named so for being ice-covered for over half of a year, started being settled. The mixed ethnic group of the Karelians and the Russians in the North-Russia that lived on the shores of the Arctic Ocean became known as Pomors ("seaside settlers"). Gradually they developed a special type of small one- or two-mast wooden sailing ships, used for voyages in the ice conditions of the Arctic seas and later on Siberian rivers. These earliest icebreakers were called kochi. The koch's hull was protected by a belt of ice-floe resistant flush skin-planking along the variable water-line, and had a false keel for on-ice portage. If a koch became squeezed by the ice-fields, its rounded bodylines below the water-line would allow for the ship to be pushed up out of the water and onto the ice with no damage. In the 19th century, similar protective measures were adopted to modern steam-powered icebreakers. Some notable sailing ships in the end of the Age of Sail also featured the egg-shaped form like that of Pomor boats, for example the Fram, used by Fridtjof Nansen and other great Norwegian Polar explorers. Fram was the wooden ship to have sailed farthest north (85°57'N) and farthest south (78°41'S), and one of the strongest wooden ships ever built. Steam-powered icebreakers An early ship designed to operate in icy conditions was a wooden paddle steamer, City Ice Boat No. 1, that was built for the city of Philadelphia by Vandusen & Birelyn in 1837. The ship was powered by two steam engines and her wooden paddles were reinforced with iron coverings. With a rounded shape and strong metal hull, the Russian of 1864 was an important predecessor of modern icebreakers with propellers. The ship was built on the orders of merchant and shipbuilder Mikhail Britnev. She had the bow altered to achieve an ice-clearing capability (20° raise from keel line). This allowed Pilot to push herself on the top of the ice and consequently break it. Britnev fashioned the bow of his ship after the shape of old Pomor boats, which had been navigating icy waters of the White Sea and Barents Sea for centuries. Pilot was used between 1864 and 1890 for navigation in the Gulf of Finland between Kronstadt and Oranienbaum thus extending the summer navigation season by several weeks. Inspired by the success of Pilot, Mikhail Britnev built a second similar vessel Boy ("Breakage" in Russian) in 1875 and a third Booy ("Buoy" in Russian) in 1889. The cold winter of 1870–1871 caused the Elbe River and the port of Hamburg to freeze over, causing a prolonged halt to navigation and huge commercial losses. Carl Ferdinand Steinhaus reused the altered bow Pilots design from Britnev to make his own icebreaker, Eisbrecher I. The first true modern sea-going icebreaker was built at the turn of the 20th century. Icebreaker , was built in 1899 at the Armstrong Whitworth naval yard in England under contract from the Imperial Russian Navy. 
The ship borrowed the main principles from Pilot and applied them to the creation of the first polar icebreaker, which was able to run over and crush pack ice. The ship displaced 5,000 tons, and her steam-reciprocating engines delivered . The ship was decommissioned in 1963 and scrapped in 1964, making her one of the longest serving icebreakers in the world. In Canada, the government needed to provide a way to prevent flooding due to ice jam on the St. Lawrence River. Icebreakers were built in order to maintain the river free of ice jam, east of Montréal. In about the same time, Canada had to fill its obligations in the Canadian Arctic. Large steam icebreakers, like the (1930) and (1952), were built for this dual use (St. Lawrence flood prevention and Arctic replenishment). At the beginning of the 20th century, several other countries began to operate purpose-built icebreakers. Most were coastal icebreakers, but Canada, Russia, and later, the Soviet Union, also built several oceangoing icebreakers up to 11,000 tons in displacement. Diesel-powered icebreakers Before the first diesel-electric icebreakers were built in the 1930s, icebreakers were either coal- or oil-fired steam ships. Reciprocating steam engines were preferred in icebreakers due to their reliability, robustness, good torque characteristics, and ability to reverse the direction of rotation quickly. During the steam era, the most powerful pre-war steam-powered icebreakers had a propulsion power of about . The world's first diesel-electric icebreaker was the 4,330-ton Swedish icebreaker in 1933. At divided between two propellers in the stern and one propeller in the bow, she remained the most powerful Swedish icebreaker until the commissioning of in 1957. Ymer was followed by the Finnish , the first diesel-electric icebreaker in Finland, in 1939. Both vessels were decommissioned in the 1970s and replaced by much larger icebreakers in both countries, the 1976-built in Finland and the 1977-built in Sweden. In 1941, the United States started building the . Research in Scandinavia and the Soviet Union led to a design that had a very strongly built short and wide hull, with a cut away forefoot and a rounded bottom. Powerful diesel-electric machinery drove two stern and one auxiliary bow propeller. These features would become the standard for postwar icebreakers until the 1980s. Since the mid-1970s, the most powerful diesel-electric icebreakers have been the formerly Soviet and later Russian icebreakers Ermak, Admiral Makarov and Krasin which have nine twelve-cylinder diesel generators producing electricity for three propulsion motors with a combined output of . In the late 2020s, they will be surpassed by the new Canadian polar icebreakers and , which will have a combined propulsion power of . Canada In Canada, diesel-electric icebreakers started to be built in 1952, first with HMCS Labrador (was transferred later to the Canadian Coast Guard), using the USCG Wind-class design but without the bow propeller. Then in 1960, the next step in the Canadian development of large icebreakers came when was completed at Lauzon, Quebec. A considerably bigger and more powerful ship than Labrador, John A.Macdonald was an ocean-going icebreaker able to meet the most rigorous polar conditions. Her diesel-electric machinery of was arranged in three units transmitting power equally to each of three shafts. Canada's largest and most powerful icebreaker, the , was delivered in 1969. 
Her original three steam turbine, nine generator, and three electric motor system produces . A multi-year mid-life refit project (1987–1993) saw the ship get a new bow, and a new propulsion system. The new power plant consists of five diesels, three generators, and three electric motors, giving about the same propulsion power. On 22 August 1994 Louis S. St-Laurent and became the first North American surface vessels to reach the North Pole. The vessel was originally scheduled to be decommissioned in 2000; however, a refit extended the decommissioning date to 2017. It is now planned to be kept in service through the 2020s pending the introduction of two new polar icebreakers, and , for the Coast Guard. Nuclear-powered icebreakers Russia currently operates all existing and functioning nuclear-powered icebreakers. The first one, NS , was launched in 1957 and entered operation in 1959, before being officially decommissioned in 1989. It was both the world's first nuclear-powered surface ship and the first nuclear-powered civilian vessel. The second Soviet nuclear icebreaker was NS , the lead ship of the . In service since 1975, she was the first surface ship to reach the North Pole, on August 17, 1977. Several nuclear-powered icebreakers were also built outside the Soviet Union. Two shallow-draft Taymyr-class nuclear icebreakers were built in Finland for the Soviet Union in the late 1980s. In May 2007, sea trials were completed for the nuclear-powered Russian icebreaker NS . The vessel was put into service by Murmansk Shipping Company, which manages all eight Russian state-owned nuclear icebreakers. The keel was originally laid in 1989 by Baltic Works of Leningrad, and the ship was launched in 1993 as NS Ural. This icebreaker is intended to be the sixth and last of the Arktika class. Function Today, most icebreakers are needed to keep trade routes open where there are either seasonal or permanent ice conditions. While the merchant vessels calling ports in these regions are strengthened for navigation in ice, they are usually not powerful enough to manage the ice by themselves. For this reason, in the Baltic Sea, the Great Lakes and the Saint Lawrence Seaway, and along the Northern Sea Route, the main function of icebreakers is to escort convoys of one or more ships safely through ice-filled waters. When a ship becomes immobilized by ice, the icebreaker has to free it by breaking the ice surrounding the ship and, if necessary, open a safe passage through the ice field. In difficult ice conditions, the icebreaker can also tow the weakest ships. Some icebreakers are also used to support scientific research in the Arctic and Antarctic. In addition to icebreaking capability, the ships need to have reasonably good open-water characteristics for transit to and from the polar regions, facilities and accommodation for the scientific personnel, and cargo capacity for supplying research stations on the shore. Countries such as Argentina and South Africa, which do not require icebreakers in domestic waters, have research icebreakers for carrying out studies in the polar regions. As offshore drilling moves to the Arctic seas, icebreaking vessels are needed to supply cargo and equipment to the drilling sites and protect the drillships and oil platforms from ice by performing ice management, which includes for example breaking drifting ice into smaller floes and steering icebergs away from the protected object. 
In the past, such operations were carried out primarily in North America, but today Arctic offshore drilling and oil production is also going on in various parts of the Russian Arctic. The United States Coast Guard uses icebreakers to help conduct search and rescue missions in the icy, polar oceans. United States icebreakers serve to defend economic interests and maintain the nation's presence in the Arctic and Antarctic regions. As the icecaps in the Arctic continue to melt, there are more passageways being discovered. These possible navigation routes cause an increase of interests in the polar hemispheres from nations worldwide. The United States polar icebreakers must continue to support scientific research in the expanding Arctic and Antarctic oceans. Every year, a heavy icebreaker must perform Operation Deep Freeze, clearing a safe path for resupply ships to the National Science Foundation’s facility McMurdo in Antarctica. The most recent multi-month excursion was led by the Polar Star which escorted a container and fuel ship through treacherous conditions before maintaining the channel free of ice. Characteristics Ice resistance and hull form Icebreakers are often described as ships that drive their sloping bows onto the ice and break it under the weight of the ship. In reality, this only happens in very thick ice where the icebreaker will proceed at walking pace or may even have to repeatedly back down several ship lengths and ram the ice pack at full power. More commonly the ice, which has a relatively low flexural strength, is easily broken and submerged under the hull without a noticeable change in the icebreaker's trim while the vessel moves forward at a relatively high and constant speed. When an icebreaker is designed, one of the main goals is to minimize the forces resulting from crushing and breaking the ice, and submerging the broken floes under the vessel. The average value of the longitudinal components of these instantaneous forces is called the ship's ice resistance. Naval architects who design icebreakers use the so-called h-v-curve to determine the icebreaking capability of the vessel. It shows the speed (v) that the ship is able to achieve as a function of ice thickness (h). This is done by calculating the velocity at which the thrust from the propellers equals the combined hydrodynamic and ice resistance of the vessel. An alternative means to determine the icebreaking capability of a vessel in different ice conditions such as pressure ridges is to perform model tests in an ice tank. Regardless of the method, the actual performance of new icebreakers is verified in full scale ice trials once the ship has been built. In order to minimize the icebreaking forces, the hull lines of an icebreaker are usually designed so that the flare at the waterline is as small as possible. As a result, icebreaking ships are characterized by a sloping or rounded stem as well as sloping sides and a short parallel midship to improve maneuverability in ice. However, the spoon-shaped bow and round hull have poor hydrodynamic efficiency and seakeeping characteristics, and make the icebreaker susceptible to slamming, or the impacting of the bottom structure of the ship onto the sea surface. For this reason, the hull of an icebreaker is often a compromise between minimum ice resistance, maneuverability in ice, low hydrodynamic resistance, and adequate open water characteristics. Some icebreakers have a hull that is wider in the bow than in the stern. 
These so-called "reamers" increase the width of the ice channel and thus reduce frictional resistance in the aftship as well as improve the ship's maneuverability in ice. In addition to low-friction paint, some icebreakers utilize an explosion-welded abrasion-resistant stainless steel ice belt that further reduces friction and protects the ship's hull from corrosion. Auxiliary systems such as powerful water deluges and air bubbling systems are used to reduce friction by forming a lubricating layer between the hull and the ice. Pumping water between tanks on both sides of the vessel results in continuous rolling that reduces friction and makes progress through the ice easier. Experimental bow designs such as the flat Thyssen-Waas bow and a cylindrical bow have been tried over the years to further reduce the ice resistance and create an ice-free channel.
Structural design
Icebreakers and other ships operating in ice-filled waters require additional structural strengthening against various loads resulting from the contact between the hull of the vessel and the surrounding ice. As ice pressures vary between different regions of the hull, the most reinforced areas in the hull of an icegoing vessel are the bow, which experiences the highest ice loads, and the area around the waterline, with additional strengthening both above and below the waterline to form a continuous ice belt around the ship. Short and stubby icebreakers are generally built using transverse framing, in which the shell plating is stiffened with frames placed about apart, as opposed to the longitudinal framing used in longer ships. Near the waterline, the vertically oriented frames distribute the locally concentrated ice loads on the shell plating to longitudinal girders called stringers, which in turn are supported by web frames and bulkheads that carry the more spread-out hull loads. While the shell plating, which is in direct contact with the ice, can be up to thick in older polar icebreakers, the use of high-strength steel with yield strength up to in modern icebreakers results in the same structural strength with smaller material thicknesses and lower steel weight. Regardless of the strength, the steel used in the hull structures of an icebreaker must be capable of resisting brittle fracture in low ambient temperatures and high loading conditions, both of which are typical for operations in ice-filled waters. If built according to the rules set by a classification society such as the American Bureau of Shipping, Det Norske Veritas or Lloyd's Register, icebreakers may be assigned an ice class based on the level of ice strengthening in the ship's hull. It is usually determined by the maximum ice thickness in which the ship is expected to operate and by other requirements such as possible limitations on ramming. While the ice class is generally an indication of the level of ice strengthening, not of the actual icebreaking capability of an icebreaker, some classification societies such as the Russian Maritime Register of Shipping have operational capability requirements for certain ice classes. Since the 2000s, the International Association of Classification Societies (IACS) has proposed adopting a unified system known as the Polar Class (PC) to replace classification-society-specific ice class notations.
Power and propulsion
Since the Second World War, most icebreakers have been built with diesel-electric propulsion in which diesel engines coupled to generators produce electricity for propulsion motors that turn the fixed pitch propellers.
The first diesel-electric icebreakers were built with direct current (DC) generators and propulsion motors, but over the years the technology advanced first to alternating current (AC) generators and finally to frequency-controlled AC-AC systems. In modern diesel-electric icebreakers, the propulsion system is built according to the power plant principle in which the main generators supply electricity for all onboard consumers and no auxiliary engines are needed. Although the diesel-electric powertrain is the preferred choice for icebreakers due to the good low-speed torque characteristics of the electric propulsion motors, icebreakers have also been built with diesel engines mechanically coupled to reduction gearboxes and controllable pitch propellers. The mechanical powertrain has several advantages over diesel-electric propulsion systems, such as lower weight and better fuel efficiency. However, diesel engines are sensitive to sudden changes in propeller revolutions, and to counter this mechanical powertrains are usually fitted with large flywheels or hydrodynamic couplings to absorb the torque variations resulting from propeller-ice interaction. The 1969-built Canadian polar icebreaker CCGS Louis S. St-Laurent was one of the few icebreakers fitted with steam boilers and turbogenerators that produced power for three electric propulsion motors. It was later refitted with five diesel engines, which provide better fuel economy than steam turbines. Later Canadian icebreakers were built with diesel-electric powertrain. Two Polar-class icebreakers operated by the United States Coast Guard, have a combined diesel-electric and mechanical propulsion system that consists of six diesel engines and three gas turbines. While the diesel engines are coupled to generators that produce power for three propulsion motors, the gas turbines are directly coupled to the propeller shafts driving controllable pitch propellers. The diesel-electric power plant can produce up to while the gas turbines have a continuous combined rating of . The number, type and location of the propellers depends on the power, draft and intended purpose of the vessel. Smaller icebreakers and icebreaking special purpose ships may be able to do with just one propeller while large polar icebreakers typically need up to three large propellers to absorb all power and deliver enough thrust. Some shallow draught river icebreakers have been built with four propellers in the stern. Nozzles may be used to increase the thrust at lower speeds, but they may become clogged by ice. Until the 1980s, icebreakers operating regularly in ridged ice fields in the Baltic Sea were fitted with first one and later two bow propellers to create a powerful flush along the hull of the vessel. This considerably increased the icebreaking capability of the vessels by reducing the friction between the hull and the ice, and allowed the icebreakers to penetrate thick ice ridges without ramming. However, the bow propellers are not suitable for polar icebreakers operating in the presence of harder multi-year ice and thus have not been used in the Arctic. Azimuth thrusters remove the need of traditional propellers and rudders by having the propellers in steerable gondolas that can rotate 360 degrees around a vertical axis. These thrusters improve propulsion efficiency, icebreaking capability and maneuverability of the vessel. The use of azimuth thrusters also allows a ship to move astern in ice without losing manoeuvrability. 
This has led to the development of double acting ships, vessels with the stern shaped like an icebreaker's bow and the bow designed for open water performance. In this way, the ship remains economical to operate in open water without compromising its ability to operate in difficult ice conditions. Azimuth thrusters have also made it possible to develop new experimental icebreakers that operate sideways to open a wide channel through ice.
Nuclear-powered
Steam-powered icebreakers were resurrected in the late 1950s, when the Soviet Union commissioned the first nuclear-powered icebreaker, Lenin, in 1959. It had a nuclear-turbo-electric powertrain in which the nuclear reactor was used to produce steam for turbogenerators, which in turn produced electricity for propulsion motors. Starting from 1975, the Russians commissioned six Arktika-class nuclear icebreakers. The Soviets also built a nuclear-powered icebreaking cargo ship, Sevmorput, which had a single nuclear reactor and a steam turbine directly coupled to the propeller shaft. Russia, which remains the sole operator of nuclear-powered icebreakers, is currently building icebreakers to replace the aging Arktika class. The first vessel of this type entered service in 2020.
Resonance method
A hovercraft can break ice by the resonance method, in which the ice and water are made to oscillate up and down until the ice suffers sufficient mechanical fatigue to cause a fracture.
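The h-v curve mentioned under "Ice resistance and hull form" above can be illustrated with a short numerical sketch. The model below is purely hypothetical: the thrust and resistance laws and every coefficient are invented placeholders, chosen only to show how the attainable speed is found as the point where available thrust balances the combined hydrodynamic and ice resistance; they do not describe any real vessel.

```python
import numpy as np

# Hypothetical, illustrative h-v model: the attainable speed v (m/s) in level
# ice of thickness h (m) is the largest speed at which available thrust still
# equals or exceeds hydrodynamic plus ice resistance.  All coefficients are
# made-up placeholders, not data for any real icebreaker.

def net_thrust(v, bollard_pull=1.8e6):
    """Available thrust in newtons, assumed to fall off slowly with speed."""
    return bollard_pull * (1.0 - 0.03 * v)

def resistance(v, h, c_water=4.0e3, c_ice=0.8e6):
    """Hydrodynamic resistance ~ v^2 plus an ice term that grows with h."""
    return c_water * v**2 + c_ice * h**1.5 * (1.0 + 0.2 * v)

def attainable_speed(h, v_max=12.0):
    """Scan downward from v_max and return the first speed the ship can hold."""
    for v in np.linspace(v_max, 0.0, 1201):
        if net_thrust(v) >= resistance(v, h):
            return v
    return 0.0  # thrust insufficient even at zero speed: continuous progress stops

for h in (0.5, 1.0, 1.5, 2.0):
    print(f"h = {h:.1f} m  ->  v = {attainable_speed(h):.1f} m/s")
```

Sweeping the ice thickness h and recording the resulting speed traces out the h-v curve that is compared against model tests in an ice tank and, ultimately, full-scale ice trials.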
Branch point
In the mathematical field of complex analysis, a branch point of a multivalued function is a point such that if the function is n-valued (has n values) at that point, all of its neighborhoods contain a point that has more than n values. Multi-valued functions are rigorously studied using Riemann surfaces, and the formal definition of branch points employs this concept. Branch points fall into three broad categories: algebraic branch points, transcendental branch points, and logarithmic branch points. Algebraic branch points most commonly arise from functions in which there is an ambiguity in the extraction of a root, such as solving the equation z = w^2 for w as a function of z. Here the branch point is the origin, because the analytic continuation of any solution around a closed loop containing the origin will result in a different function: there is non-trivial monodromy. Despite the algebraic branch point, the function w = z^{1/2} is well-defined as a multiple-valued function and, in an appropriate sense, is continuous at the origin. This is in contrast to transcendental and logarithmic branch points, that is, points at which a multiple-valued function has nontrivial monodromy and an essential singularity. In geometric function theory, unqualified use of the term branch point typically means the former more restrictive kind: the algebraic branch points. In other areas of complex analysis, the unqualified term may also refer to the more general branch points of transcendental type.
Algebraic branch points
Let Ω be a connected open set in the complex plane C and ƒ : Ω → C a holomorphic function. If ƒ is not constant, then the set of the critical points of ƒ, that is, the zeros of the derivative ƒ′(z), has no limit point in Ω. So each critical point z0 of ƒ lies at the center of a disc B(z0, r) containing no other critical point of ƒ in its closure. Let γ be the boundary of B(z0, r), taken with its positive orientation. The winding number of ƒ(γ) with respect to the point ƒ(z0) is a positive integer called the ramification index of z0. If the ramification index is greater than 1, then z0 is called a ramification point of ƒ, and the corresponding critical value ƒ(z0) is called an (algebraic) branch point. Equivalently, z0 is a ramification point if there exists a holomorphic function φ defined in a neighborhood of z0 such that ƒ(z) − ƒ(z0) = φ(z)(z − z0)^k for some integer k > 1. Typically, one is not interested in ƒ itself, but in its inverse function. However, the inverse of a holomorphic function in the neighborhood of a ramification point does not properly exist, and so one is forced to define it in a multiple-valued sense as a global analytic function. It is common to abuse language and refer to a branch point w0 = ƒ(z0) of ƒ as a branch point of the global analytic function ƒ⁻¹. More general definitions of branch points are possible for other kinds of multiple-valued global analytic functions, such as those that are defined implicitly. A unifying framework for dealing with such examples is supplied in the language of Riemann surfaces below. In particular, in this more general picture, poles of order greater than 1 can also be considered ramification points. In terms of the inverse global analytic function ƒ⁻¹, branch points are those points around which there is nontrivial monodromy. For example, the function ƒ(z) = z^2 has a ramification point at z0 = 0. The inverse function is the square root ƒ⁻¹(w) = w^{1/2}, which has a branch point at w = 0. Indeed, going around the closed loop w = e^{iθ}, one starts at θ = 0 and w^{1/2} = 1. But after going around the loop to θ = 2π, one has w^{1/2} = e^{iπ} = −1. Thus there is monodromy around this loop enclosing the origin.
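The monodromy in this example is easy to check numerically. The sketch below is illustrative only: it follows a continuously chosen square root of z along the unit circle and confirms that one full loop around the branch point at the origin turns the starting value 1 into −1.

```python
import numpy as np

# Track a continuous branch of the square root along the closed loop z = exp(i*theta).
# Starting from sqrt(1) = 1, one full circuit around the branch point at 0 ends
# at -1: the non-trivial monodromy of the two-valued square root.

theta = np.linspace(0.0, 2.0 * np.pi, 2001)
loop = np.exp(1j * theta)

w = [1.0 + 0.0j]                                  # chosen branch value at theta = 0
for z in loop[1:]:
    roots = (np.sqrt(z), -np.sqrt(z))             # the two square roots of z
    # analytic continuation: keep the root closest to the previous value
    w.append(min(roots, key=lambda r: abs(r - w[-1])))

print("value at start of loop:", w[0])                # (1+0j)
print("value at end of loop:  ", np.round(w[-1], 6))  # approximately -1
```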
Transcendental and logarithmic branch points
Suppose that g is a global analytic function defined on a punctured disc around z0. Then g has a transcendental branch point if z0 is an essential singularity of g such that analytic continuation of a function element once around some simple closed curve surrounding the point z0 produces a different function element. An example of a transcendental branch point is the origin for the multi-valued function for some integer k > 1. Here the monodromy group for a circuit around the origin is finite. Analytic continuation around k full circuits brings the function back to the original. If the monodromy group is infinite, that is, it is impossible to return to the original function element by analytic continuation along a curve with nonzero winding number about z0, then the point z0 is called a logarithmic branch point. This is so called because the typical example of this phenomenon is the branch point of the complex logarithm at the origin. Going once counterclockwise around a simple closed curve encircling the origin, the complex logarithm is incremented by 2πi. Encircling a loop with winding number w, the logarithm is incremented by 2πi w and the monodromy group is the infinite cyclic group of integers. Logarithmic branch points are special cases of transcendental branch points. There is no corresponding notion of ramification for transcendental and logarithmic branch points since the associated covering Riemann surface cannot be analytically continued to a cover of the branch point itself. Such covers are therefore always unramified.
Examples
0 is a branch point of the square root function. Suppose w = z^{1/2}, and z starts at 4 and moves along a circle of radius 4 in the complex plane centered at 0. The dependent variable w changes while depending on z in a continuous manner. When z has made one full circle, going from 4 back to 4 again, w will have made one half-circle, going from the positive square root of 4, i.e., from 2, to the negative square root of 4, i.e., −2.
0 is also a branch point of the natural logarithm. Since e^0 is the same as e^{2πi}, both 0 and 2πi are among the multiple values of ln(1). As z moves along a circle of radius 1 centered at 0, w = ln(z) goes from 0 to 2πi.
In trigonometry, since tan(π/4) and tan(5π/4) are both equal to 1, the two numbers π/4 and 5π/4 are among the multiple values of arctan(1).
The imaginary units i and −i are branch points of the arctangent function arctan(z) = (1/2i)log[(i − z)/(i + z)]. This may be seen by observing that the derivative (d/dz) arctan(z) = 1/(1 + z^2) has simple poles at those two points, since the denominator is zero at those points.
If the derivative ƒ′ of a function ƒ has a simple pole at a point a, then ƒ has a logarithmic branch point at a. The converse is not true, since the function ƒ(z) = z^α for irrational α has a logarithmic branch point, and its derivative is singular without being a pole.
Branch cuts
Roughly speaking, branch points are the points where the various sheets of a multiple valued function come together. The branches of the function are the various sheets of the function. For example, the function w = z^{1/2} has two branches: one where the square root comes in with a plus sign, and the other with a minus sign. A branch cut is a curve in the complex plane such that it is possible to define a single analytic branch of a multi-valued function on the plane minus that curve. Branch cuts are usually, but not always, taken between pairs of branch points.
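The statement above that the logarithm gains 2πi for every counterclockwise circuit of the origin can be verified numerically before turning to branch cuts in detail. The sketch below is an illustration only: it evaluates the line integral of dz/z along a circle traversed w times, which is exactly the change of log(z) along that path.

```python
import numpy as np

# The change of log(z) along a path equals the line integral of dz/z along it.
# Around a circle with winding number w about the origin this integral is
# 2*pi*i*w, i.e. the logarithm lands on a different sheet after each circuit.

def log_increment(winding_number, samples_per_turn=20000):
    theta = np.linspace(0.0, 2.0 * np.pi * winding_number,
                        samples_per_turn * winding_number + 1)
    z = np.exp(1j * theta)               # unit circle, traversed w times
    dz = np.diff(z)
    mid = 0.5 * (z[:-1] + z[1:])         # midpoint rule for the line integral
    return np.sum(dz / mid)

for w in (1, 2, 3):
    inc = log_increment(w)
    print(f"w = {w}: increment = {inc.real:+.6f} {inc.imag:+.6f}i "
          f"(expected 0 + {2 * np.pi * w:.6f}i)")
```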
Branch cuts allow one to work with a collection of single-valued functions, "glued" together along the branch cut instead of a multivalued function. For example, to make the function single-valued, one makes a branch cut along the interval [0, 1] on the real axis, connecting the two branch points of the function. The same idea can be applied to the function ; but in that case one has to perceive that the point at infinity is the appropriate 'other' branch point to connect to from 0, for example along the whole negative real axis. The branch cut device may appear arbitrary (and it is); but it is very useful, for example in the theory of special functions. An invariant explanation of the branch phenomenon is developed in Riemann surface theory (of which it is historically the origin), and more generally in the ramification and monodromy theory of algebraic functions and differential equations.
Complex logarithm
The typical example of a branch cut is the complex logarithm. If a complex number is represented in polar form z = re^{iθ}, then the logarithm of z is log z = ln r + iθ. However, there is an obvious ambiguity in defining the angle θ: adding to θ any integer multiple of 2π will yield another possible angle. A branch of the logarithm is a continuous function L(z) giving a logarithm of z for all z in a connected open set in the complex plane. In particular, a branch of the logarithm exists in the complement of any ray from the origin to infinity: a branch cut. A common choice of branch cut is the negative real axis, although the choice is largely a matter of convenience. The logarithm has a jump discontinuity of 2πi when crossing the branch cut. The logarithm can be made continuous by gluing together countably many copies, called sheets, of the complex plane along the branch cut. On each sheet, the value of the log differs from its principal value by a multiple of 2πi. These surfaces are glued to each other along the branch cut in the unique way that makes the logarithm continuous. Each time the variable goes around the origin, the logarithm moves to a different branch.
Continuum of poles
One reason that branch cuts are common features of complex analysis is that a branch cut can be thought of as a sum of infinitely many poles arranged along a line in the complex plane with infinitesimal residues. For example, 1/(z − a) is a function with a simple pole at z = a. Integrating over the location a of the pole, u(z) = ∫_{−1}^{1} da/(z − a), defines a function u(z) with a cut from −1 to 1. The branch cut can be moved around, since the integration line can be shifted without altering the value of the integral so long as the line does not pass across the point z.
Riemann surfaces
The concept of a branch point is defined for a holomorphic function ƒ:X → Y from a compact connected Riemann surface X to a compact Riemann surface Y (usually the Riemann sphere). Unless it is constant, the function ƒ will be a covering map onto its image at all but a finite number of points. The points of X where ƒ fails to be a cover are the ramification points of ƒ, and the image of a ramification point under ƒ is called a branch point. For any point P ∈ X and Q = ƒ(P) ∈ Y, there are holomorphic local coordinates z for X near P and w for Y near Q in terms of which the function ƒ(z) is given by w = z^k for some integer k. This integer is called the ramification index of P. Usually the ramification index is one. But if the ramification index is not equal to one, then P is by definition a ramification point, and Q is a branch point.
If Y is just the Riemann sphere, and Q is in the finite part of Y, then there is no need to select special coordinates. The ramification index can be calculated explicitly from Cauchy's integral formula. Let γ be a simple rectifiable loop in X around P. The ramification index of ƒ at P is eP = (1/(2πi)) ∮_γ ƒ′(z)/(ƒ(z) − Q) dz. This integral is the number of times ƒ(γ) winds around the point Q. As above, P is a ramification point and Q is a branch point if eP > 1.
Algebraic geometry
In the context of algebraic geometry, the notion of branch points can be generalized to mappings between arbitrary algebraic curves. Let ƒ:X → Y be a morphism of algebraic curves. By pulling back rational functions on Y to rational functions on X, K(X) is a field extension of K(Y). The degree of ƒ is defined to be the degree of this field extension [K(X):K(Y)], and ƒ is said to be finite if the degree is finite. Assume that ƒ is finite. For a point P ∈ X, the ramification index eP is defined as follows. Let Q = ƒ(P) and let t be a local uniformizing parameter at Q; that is, t is a regular function defined in a neighborhood of Q with t(Q) = 0 whose differential is nonzero. Pulling back t by ƒ defines a regular function t ∘ ƒ on X. Then eP = vP(t ∘ ƒ), where vP is the valuation in the local ring of regular functions at P. That is, eP is the order to which t ∘ ƒ vanishes at P. If eP > 1, then ƒ is said to be ramified at P. In that case, Q is called a branch point.
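The winding-number integral for the ramification index lends itself to a direct numerical check. The sketch below assumes the simplest possible setting, ƒ(z) = z^k on the complex plane with P = 0 and Q = ƒ(P) = 0, rather than a map between general Riemann surfaces; the contour integral should then return k.

```python
import numpy as np

# Numerical check of  eP = (1/(2*pi*i)) * integral over gamma of f'(z)/(f(z) - Q) dz
# for f(z) = z**k with P = 0 and Q = f(P) = 0.  The integral counts how many
# times the image curve f(gamma) winds around Q.

def ramification_index(f, fprime, P, Q, radius=0.5, n=4000):
    theta = np.linspace(0.0, 2.0 * np.pi, n + 1)
    z = P + radius * np.exp(1j * theta)                   # small loop gamma around P
    mid = P + radius * np.exp(1j * 0.5 * (theta[:-1] + theta[1:]))
    dz = np.diff(z)
    integral = np.sum(fprime(mid) / (f(mid) - Q) * dz)    # midpoint-rule line integral
    return integral / (2j * np.pi)

for k in (1, 2, 3, 5):
    e = ramification_index(lambda z, k=k: z ** k,
                           lambda z, k=k: k * z ** (k - 1),
                           P=0.0, Q=0.0)
    print(f"f(z) = z^{k}:  eP = {e.real:.4f}")
```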
Lygodium
Lygodium (climbing fern) is a genus of about 40 species of ferns, native to tropical regions across the world, with a few temperate species in eastern Asia and eastern North America. It is the sole genus in the family Lygodiaceae in the Pteridophyte Phylogeny Group classification of 2016 (PPG I). Alternatively, the genus may be placed as the only genus in the subfamily Lygodioideae of a more broadly defined family Schizaeaceae, the family placement used in Plants of the World Online . Per recent molecular evidence, Lygodiaceae is thought to have diverged relatively early from the other members of the Schizaeales due to the relatively high level of synonymous sequence divergence between the families within the Schizaeales. Description Lygodium are unusual in that the rachis, or midrib, of the frond is thin, flexible, and long, the frond unrolling with indeterminate growth and the rachis twining around supports, so that each frond forms a distinct vine. The fronds may be from long, depending on the species. They are also easily identifiable by their possession of apical buds that lay dormant until damage to the rachis occurs, allowing them a high degree of endurance. Range Lygodium is a wide ranging genus with native populations existing in Asia, Australasia, Africa, and North and South America. The genus is largely pan-tropical, with the center of diversity being Pacific islands, such as Borneo, the Philippine islands, and New Guinea. There do exist several species tolerant of temperate climates such as Lygodium palmatum, which is endemic to the Appalachian region of eastern North America, and Lygodium japonicum, which is native to Japan, but highly invasive in the Southeastern United States. For more on this, refer to the "As invasive species" section below. The lack of extant Lygodium species in Europe is commonly attributed to the Pleistocene glaciation wiping them out. Similar extirpations did not occur in other high middle and high latitude areas, such as the United States and Japan that do have Lygodium populations at present. This discrepancy is thought to be due to the East-West orientation of the European Alps preventing southward migration of Lygodium members, among other extirpated species, while the relatively North-South orientations of the Appalachian mountains and Japanese Alps allowed such southward migration. Uses Lygodium species, known as nito, are used as a source of fibers in the Philippines. The fibers are used as material for weaving, most notably of traditional salakot headgear. As invasive species Some Lygodium species are now considered very problematic invasive weeds in the southeastern United States. Populations of Lygodium have increased more than 12-fold over the past decade, as noted by Florida's Institute of Food and Agricultural Sciences. Japanese climbing fern (Lygodium japonicum) was added to the Florida Noxious Weed List in 1999. It is also a major problem in pine plantations, causing contamination and harvesting problems for the pine straw industry. Old World climbing fern (Lygodium microphyllum) infests cypress swamps and other hydric sites, forming a monoculture. This massive infestation displaces all native flora and fauna, completely changing the ecosystem of the area. Plants in this genus have basal chromosome counts of n=28, 29, 30. Phylogeny
Group 5 element
Group 5 is a group of elements in the periodic table. Group 5 contains vanadium (V), niobium (Nb), tantalum (Ta) and dubnium (Db). This group lies in the d-block of the periodic table. This group is sometimes called the vanadium group or vanadium family after its lightest member; however, the group itself has not acquired a trivial name because it belongs to the broader grouping of the transition metals. As is typical for early transition metals, niobium and tantalum have only the group oxidation state of +5 as a major one, and are quite electropositive (they give up electrons easily) and have a less rich coordination chemistry (the chemistry of metallic ions bound with molecules). Due to the effects of the lanthanide contraction (the decrease in ionic radii across the lanthanides), they are very similar in properties. Vanadium is somewhat distinct due to its smaller size: it has well-defined +2, +3 and +4 states as well (although +5 is more stable). The lighter three Group 5 elements occur naturally and share similar properties; all three are hard refractory metals under standard conditions. The fourth element, dubnium, has been synthesized in laboratories, but it has not been found occurring in nature, with the half-life of the most stable isotope, dubnium-268, being only 16 hours, and other isotopes even more radioactive.
History
Group 5 is the new IUPAC name for this group; the old-style name was group VB in the old US system (CAS) or group VA in the European system (old IUPAC). Group 5 must not be confused with the group that had the old-style crossed names of either VA (US system, CAS) or VB (European system, old IUPAC); that group is now called the pnictogens or group 15.
Vanadium
Vanadium was discovered in 1801 by the Spanish mineralogist Andrés Manuel del Río. Del Río extracted the element from a sample of Mexican "brown lead" ore, later named vanadinite. He found that its salts exhibit a wide variety of colors, and as a result he named the element panchromium (Greek: παγχρώμιο "all colors"). Later, Del Río renamed the element erythronium (Greek: ερυθρός "red") because most of the salts turned red upon heating. In 1805, French chemist Hippolyte Victor Collet-Descotils, backed by del Río's friend Baron Alexander von Humboldt, incorrectly declared that del Río's new element was an impure sample of chromium. Del Río accepted Collet-Descotils' statement and retracted his claim. In 1831, Swedish chemist Nils Gabriel Sefström rediscovered the element in a new oxide he found while working with iron ores. Later that year, Friedrich Wöhler confirmed del Río's earlier work. Sefström chose a name beginning with V, which had not yet been assigned to any element. He called the element vanadium after Old Norse Vanadís (another name for the Norse Vanir goddess Freyja, whose attributes include beauty and fertility), because of the many beautifully colored chemical compounds it produces. In 1831, the geologist George William Featherstonhaugh suggested that vanadium should be renamed rionium after del Río, but this suggestion was not followed.
Niobium and tantalum
Niobium was identified by English chemist Charles Hatchett in 1801. He found a new element in a mineral sample that had been sent to England from Connecticut, United States in 1734 by John Winthrop F.R.S.
(grandson of John Winthrop the Younger) and named the mineral columbite and the new element columbium after Columbia, the poetic name for the United States. However, after the 15th Conference of the Union of Chemistry in Amsterdam in 1949, the name niobium was chosen for element 41. The columbium discovered by Hatchett was probably a mixture of the new element with tantalum, which was first discovered in 1802 by Anders Gustav Ekeberg. Subsequently, there was considerable confusion over the difference between columbium (niobium) and the closely related tantalum. In 1809, English chemist William Hyde Wollaston compared the oxides derived from both columbium—columbite, with a density 5.918 g/cm, and tantalum—tantalite, with a density over 8 g/cm, and concluded that the two oxides, despite the significant difference in density, were identical; thus he kept the name tantalum. This conclusion was disputed in 1846 by German chemist Heinrich Rose, who argued that there were two different elements in the tantalite sample, and named them after children of Tantalus: niobium (from Niobe) and pelopium (from Pelops). This confusion arose from the minimal observed differences between tantalum and niobium. The claimed new elements pelopium, ilmenium, and dianium were in fact identical to niobium or mixtures of niobium and tantalum. Pure tantalum was not produced until 1903. Dubnium The last element of the group, dubnium, does not occur naturally and so must be synthesized in a laboratory. The first reported detection was by a team at the Joint Institute for Nuclear Research (JINR), which in 1968 had produced the new element by bombarding an americium-243 target with a beam of neon-22 ions, and reported 9.4 MeV (with a half-life of 0.1–3 seconds) and 9.7 MeV (t1/2 > 0.05 s) alpha activities followed by alpha activities similar to those of either 256103 or 257103. Based on prior theoretical predictions, the two activity lines were assigned to 261105 and 260105, respectively. After observing the alpha decays of element 105, the researchers aimed to observe the spontaneous fission (SF) of the element and study the resulting fission fragments. They published a paper in February 1970, reporting multiple examples of two such activities, with half-lives of 14 ms and . They assigned the former activity to 242mfAm and ascribed the latter activity to an isotope of element 105. They suggested that it was unlikely that this activity could come from a transfer reaction instead of element 105, because the yield ratio for this reaction was significantly lower than that of the 242mfAm-producing transfer reaction, in accordance with theoretical predictions. To establish that this activity was not from a (22Ne,xn) reaction, the researchers bombarded a 243Am target with 18O ions; reactions producing 256103 and 257103 showed very little SF activity (matching the established data), and the reaction producing heavier 258103 and 259103 produced no SF activity at all, in line with theoretical data. The researchers concluded that the activities observed came from SF of element 105. JINR then attempted an experiment to create element 105, published in a report in May 1970. They claimed that they had synthesized more nuclei of element 105 and that the experiment confirmed their previous work. According to the paper, the isotope produced by JINR was probably 261105, or possibly 260105. 
This report included an initial chemical examination: the thermal gradient version of the gas-chromatography method was applied to demonstrate that the chloride of what had formed from the SF activity nearly matched that of niobium pentachloride, rather than hafnium tetrachloride. The team identified a 2.2-second SF activity in a volatile chloride portraying eka-tantalum properties, and inferred that the source of the SF activity must have been element 105. In June 1970, JINR made improvements on their first experiment, using a purer target and reducing the intensity of transfer reactions by installing a collimator before the catcher. This time, they were able to find 9.1 MeV alpha activities with daughter isotopes identifiable as either 256103 or 257103, implying that the original isotope was either 260105 or 261105. A controversy erupted over who had discovered the element, with each group suggesting its own name: the Dubna group named the element nielsbohrium after Niels Bohr, while the Berkeley group named it hahnium after Otto Hahn. Eventually a joint working party of IUPAC and IUPAP, the Transfermium Working Group, decided that credit for the discovery should be shared. After various compromises were attempted, in which element 105 was called kurchatovium, joliotium and hahnium, in 1997 IUPAC officially named the element dubnium after Dubna, and nielsbohrium was eventually simplified to bohrium and used for element 107.
Chemical properties
Like other groups, the members of this family show patterns in their electron configurations, especially in the outermost shells. (The expected 4d^3 5s^2 configuration for niobium is a very low-lying excited state at about 0.14 eV.) Most of the chemistry has been observed only for the first three members of the group (the chemistry of dubnium is not well established, but what is known appears to match expectations for a heavier congener of tantalum). All the elements of the group are reactive metals with high melting points (1910 °C, 2477 °C, 3017 °C). The reactivity is not always obvious due to the rapid formation of a stable oxide layer, which prevents further reactions, similarly to trends in group 3 or group 4. The metals form different oxides: vanadium forms vanadium(II) oxide, vanadium(III) oxide, vanadium(IV) oxide and vanadium(V) oxide, niobium forms niobium(II) oxide, niobium(IV) oxide and niobium(V) oxide, but of the tantalum oxides only tantalum(V) oxide is characterized. Metal(V) oxides are generally nonreactive and act like acids rather than bases, but the lower oxides are less stable. They do, however, have some unusual properties for oxides, such as high electric conductivity. All three elements form various inorganic compounds, generally in the oxidation state of +5. Lower oxidation states are also known, but in all elements other than vanadium they are less stable, decreasing in stability as atomic mass increases.
Compounds
Oxides
Vanadium forms oxides in the +2, +3, +4 and +5 oxidation states, forming vanadium(II) oxide (VO), vanadium(III) oxide (V2O3), vanadium(IV) oxide (VO2) and vanadium(V) oxide (V2O5). Vanadium(V) oxide or vanadium pentoxide is the most common, being the precursor to most alloys and compounds of vanadium, and is also a widely used industrial catalyst. Niobium forms oxides in the oxidation states +5 (), +4 (), and the rarer oxidation state, +2 (NbO). Most common is the pentoxide, also being the precursor to almost all niobium compounds and alloys.
Tantalum pentoxide (Ta2O5) is the most important compound from the perspective of applications. Oxides of tantalum in lower oxidation states are numerous, including many defect structures, and are lightly studied or poorly characterized. Oxyanions In aqueous solution, vanadium(V) forms an extensive family of oxyanions as established by 51V NMR spectroscopy. The interrelationships in this family are described by the predominance diagram, which shows at least 11 species, depending on pH and concentration. The tetrahedral orthovanadate ion, , is the principal species present at pH 12–14. Similar in size and charge to phosphorus(V), vanadium(V) also parallels its chemistry and crystallography. Orthovanadate V is used in protein crystallography to study the biochemistry of phosphate. Beside that, this anion also has been shown to interact with activity of some specific enzymes. The tetrathiovanadate [VS4]3− is analogous to the orthovanadate ion. At lower pH values, the monomer [HVO4]2− and dimer [V2O7]4− are formed, with the monomer predominant at vanadium concentration of less than c. 10−2M (pV > 2, where pV is equal to the minus value of the logarithm of the total vanadium concentration/M). The formation of the divanadate ion is analogous to the formation of the dichromate ion. As the pH is reduced, further protonation and condensation to polyvanadates occur: at pH 4–6 [H2VO4]− is predominant at pV greater than ca. 4, while at higher concentrations trimers and tetramers are formed. Between pH 2–4 decavanadate predominates, though its formation from orthovanadate is optimized at pH 4–7, represented by this reaction: In decavanadate, each V(V) center is surrounded by six oxide ligands. Vanadic acid, H3VO4 exists only at very low concentrations because protonation of the tetrahedral species [H2VO4]− results in the preferential formation of the octahedral [VO2(H2O)4]+ species. In strongly acidic solutions, pH < 2, [VO2(H2O)4]+ is the predominant species, while the oxide V2O5 precipitates from solution at high concentrations. The oxide is formally the acid anhydride of vanadic acid. The structures of many vanadate compounds have been determined by X-ray crystallography. Vanadium(V) forms various peroxo complexes, most notably in the active site of the vanadium-containing bromoperoxidase enzymes. The species VO(O)2(H2O)4+ is stable in acidic solutions. In alkaline solutions, species with 2, 3 and 4 peroxide groups are known; the last forms violet salts with the formula M3V(O2)4 nH2O (M= Li, Na, etc.), in which the vanadium has an 8-coordinate dodecahedral structure. Niobates are generated by dissolving the pentoxide in basic hydroxide solutions or by melting it in alkali metal oxides. Examples are lithium niobate () and lanthanum niobate (). In the lithium niobate is a trigonally distorted perovskite-like structure, whereas the lanthanum niobate contains lone ions. Tantalates, compounds containing [TaO4]3− or [TaO3]− are numerous. Lithium tantalate (LiTaO3) adopts a perovskite structure. Lanthanum tantalate (LaTaO4) contains isolated tetrahedra. Halides and their derivatives Twelve binary halides, compounds with the formula VXn (n=2...5), are known. VI4, VCl5, VBr5, and VI5 do not exist or are extremely unstable; the only known pure V halide compound is . In combination with other reagents, VCl4 is used as a catalyst for polymerization of dienes. Like all binary halides, those of vanadium are Lewis acidic, especially those of V(IV) and V(V). 
Many of the halides form octahedral complexes with the formula VXnL6−n (X= halide; L= other ligand). Many vanadium oxyhalides (formula VOmXn) are known. The oxytrichloride and oxytrifluoride (VOCl3 and VOF3) are the most widely studied. Akin to POCl3, they are volatile, adopt tetrahedral structures in the gas phase, and are Lewis acidic. Niobium forms halides in the oxidation states of +5 and +4 as well as diverse substoichiometric compounds. The pentahalides () feature octahedral Nb centres. Niobium pentafluoride () is a white solid with a melting point of 79.0 °C and niobium pentachloride () is yellow (see image at left) with a melting point of 203.4 °C. Both are hydrolyzed to give oxides and oxyhalides, such as . The pentachloride is a versatile reagent used to generate the organometallic compounds, such as niobocene dichloride (). The tetrahalides () are dark-coloured polymers with Nb-Nb bonds; for example, the black hygroscopic niobium tetrafluoride () and dark violet niobium tetrachloride (). Anionic halide compounds of niobium are well known, owing in part to the Lewis acidity of the pentahalides. The most important is [NbF7]2−, an intermediate in the separation of Nb and Ta from the ores. This heptafluoride tends to form the oxopentafluoride more readily than does the tantalum compound. Other halide complexes include octahedral []: + 2 Cl → 2 [] As with other metals with low atomic numbers, a variety of reduced halide cluster ions is known, the prime example being []. Tantalum halides span the oxidation states of +5, +4, and +3. Tantalum pentafluoride (TaF5) is a white solid with a melting point of 97.0 °C. The anion [TaF7]2- is used for its separation from niobium. The chloride , which exists as a dimer, is the main reagent in synthesis of new Ta compounds. It hydrolyzes readily to an oxychloride. The lower halides and , feature Ta-Ta bonds. Physical properties The trends in group 5 follow those of the other early d-block groups and reflect the addition of a filled f-shell into the core in passing from the fifth to the sixth period. All the stable members of the group are silvery-blue refractory metals, though impurities of carbon, nitrogen, and oxygen make them brittle. They all crystallize in the body-centered cubic structure at room temperature, and dubnium is expected to do the same. The table below is a summary of the key physical properties of the group 5 elements. The question-marked value is predicted. Vanadium Vanadium is an average-hard, ductile, steel-blue metal. It is electrically conductive and thermally insulating. Some sources describe vanadium as "soft", perhaps because it is ductile, malleable, and not brittle. Vanadium is harder than most metals and steels (see Hardnesses of the elements (data page) and iron). It has good resistance to corrosion and it is stable against alkalis and sulfuric and hydrochloric acids. It is oxidized in air at about 933 K (660 °C, 1220 °F), although an oxide passivation layer forms even at room temperature. Niobium Niobium is a lustrous, grey, ductile, paramagnetic metal in group 5 of the periodic table (see table), with an electron configuration in the outermost shells atypical for group 5. Similarly atypical configurations occur in the neighborhood of ruthenium (44) and rhodium (45). 
Although it is thought to have a body-centered cubic crystal structure from absolute zero to its melting point, high-resolution measurements of the thermal expansion along the three crystallographic axes reveal anisotropies which are inconsistent with a cubic structure. Niobium becomes a superconductor at cryogenic temperatures. At atmospheric pressure, it has the highest critical temperature of the elemental superconductors at 9.2 K. Niobium has the greatest magnetic penetration depth of any element. In addition, it is one of the three elemental Type II superconductors, along with vanadium and technetium. The superconductive properties are strongly dependent on the purity of the niobium metal. When very pure, it is comparatively soft and ductile, but impurities make it harder. The metal has a low capture cross-section for thermal neutrons; thus it is used in the nuclear industries where neutron-transparent structures are desired.
Tantalum
Tantalum is dark (blue-gray), dense, ductile, very hard, easily fabricated, and highly conductive of heat and electricity. The metal is renowned for its resistance to corrosion by acids; in fact, at temperatures below 150 °C tantalum is almost completely immune to attack by the normally aggressive aqua regia. It can be dissolved with hydrofluoric acid or acidic solutions containing the fluoride ion and sulfur trioxide, as well as with a solution of potassium hydroxide. Tantalum's high melting point of 3017 °C (boiling point 5458 °C) is exceeded among the elements only by tungsten, rhenium, osmium, and carbon. Tantalum exists in two crystalline phases, alpha and beta. The alpha phase is relatively ductile and soft; it has a body-centered cubic structure (space group Im3m, lattice constant a = 0.33058 nm), Knoop hardness 200–400 HN and electrical resistivity 15–60 μΩ⋅cm. The beta phase is hard and brittle; its crystal symmetry is tetragonal (space group P42/mnm, a = 1.0194 nm, c = 0.5313 nm), Knoop hardness is 1000–1300 HN and electrical resistivity is relatively high at 170–210 μΩ⋅cm. The beta phase is metastable and converts to the alpha phase upon heating to 750–775 °C. Bulk tantalum is almost entirely alpha phase, and the beta phase usually exists as thin films obtained by magnetron sputtering, chemical vapor deposition or electrochemical deposition from a eutectic molten salt solution.
Dubnium
A direct relativistic effect is that as the atomic numbers of elements increase, the innermost electrons begin to revolve faster around the nucleus as a result of an increase of electromagnetic attraction between an electron and the nucleus. Similar effects have been found for the outermost s orbitals (and p1/2 ones, though in dubnium they are not occupied): for example, the 7s orbital contracts by 25% in size and is stabilized by 2.6 eV. A more indirect effect is that the contracted s and p1/2 orbitals shield the charge of the nucleus more effectively, leaving less for the outer d and f electrons, which therefore move in larger orbitals. Dubnium is greatly affected by this: unlike the previous group 5 members, its 7s electrons are slightly more difficult to extract than its 6d electrons. Another effect is the spin–orbit interaction, particularly spin–orbit splitting, which splits the 6d subshell (the azimuthal quantum number ℓ of a d shell is 2) into two subshells, with four of the ten orbitals having their total angular momentum j lowered to 3/2 and six raised to 5/2. All ten energy levels are raised; four of them are lower than the other six.
(The three 6d electrons normally occupy the lowest energy levels, 6d3/2.) A single ionized atom of dubnium (Db+) should lose a 6d electron compared to a neutral atom; the doubly (Db2+) or triply (Db3+) ionized atoms of dubnium should eliminate 7s electrons, unlike its lighter homologs. Despite the changes, dubnium is still expected to have five valence electrons; 7p energy levels have not been shown to influence dubnium and its properties. As the 6d orbitals of dubnium are more destabilized than the 5d ones of tantalum, and Db3+ is expected to have two 6d, rather than 7s, electrons remaining, the resulting +3 oxidation state is expected to be unstable and even rarer than that of tantalum. The ionization potential of dubnium in its maximum +5 oxidation state should be slightly lower than that of tantalum and the ionic radius of dubnium should increase compared to tantalum; this has a significant effect on dubnium's chemistry. Atoms of dubnium in the solid state should arrange themselves in a body-centered cubic configuration, like the previous group 5 elements. The predicted density of dubnium is 21.6 g/cm3. Occurrence There are 160 parts per million of vanadium in the Earth's crust, making it the 19th most abundant element there. Soil contains on average 100 parts per million of vanadium, and seawater contains 1.5 parts per billion of vanadium. A typical human contains 285 parts per billion of vanadium. Over 60 vanadium ores are known, including vanadinite, patronite, and carnotite. There are 20 parts per million of niobium in the Earth's crust, making it the 33rd most abundant element there. Soil contains on average 24 parts per million of niobium, and seawater contains 900 parts per quadrillion of niobium. A typical human contains 21 parts per billion of niobium. Niobium is in the minerals columbite and pyrochlore. There are 2 parts per million of tantalum in the Earth's crust, making it the 51st most abundant element there. Soil contains on average 1 to 2 parts per billion of tantalum, and seawater contains 2 parts per trillion of tantalum. A typical human contains 2.9 parts per billion of tantalum. Tantalum is found in the minerals tantalite and pyrochlore. Dubnium does not occur naturally in the Earth's crust, as it has no stable isotopes. Production Vanadium Vanadium metal is obtained by a multistep process that begins with roasting crushed ore with NaCl or Na2CO3 at about 850 °C to give sodium metavanadate (NaVO3). An aqueous extract of this solid is acidified to produce "red cake", a polyvanadate salt, which is reduced with calcium metal. As an alternative for small-scale production, vanadium pentoxide is reduced with hydrogen or magnesium. Many other methods are also used, in all of which vanadium is produced as a byproduct of other processes. Purification of vanadium is possible by the crystal bar process developed by Anton Eduard van Arkel and Jan Hendrik de Boer in 1925. It involves the formation of the metal iodide, in this example vanadium(III) iodide, and the subsequent decomposition to yield pure metal: 2 V + 3 I2 2 VI3 Most vanadium is used as a component of a steel alloy called ferrovanadium. Ferrovanadium is produced directly by reducing a mixture of vanadium oxide, iron oxides and iron in an electric furnace. The vanadium ends up in pig iron produced from vanadium-bearing magnetite. Depending on the ore used, the slag contains up to 25% of vanadium. 
Approximately 70000 tonnes of vanadium ore are produced yearly, with 25000 t of vanadium ore being produced in Russia, 24000 in South Africa, 19000 in China, and 1000 in Kazakhstan. 7000 t of vanadium metal are produced each year. It is impossible to obtain vanadium by heating its ore with carbon. Instead, vanadium is produced by heating vanadium oxide with calcium in a pressure vessel. Very high-purity vanadium is produced from a reaction of vanadium trichloride with magnesium.
Niobium and tantalum
After the separation from the other minerals, the mixed oxides of tantalum and niobium are obtained. To produce niobium, the first step in the processing is the reaction of the oxides with hydrofluoric acid: The first industrial-scale separation, developed by Swiss chemist de Marignac, exploits the differing solubilities of the complex niobium and tantalum fluorides, dipotassium oxypentafluoroniobate monohydrate () and dipotassium heptafluorotantalate (), in water. Newer processes use the liquid extraction of the fluorides from aqueous solution by organic solvents like cyclohexanone. The complex niobium and tantalum fluorides are extracted separately from the organic solvent with water and either precipitated by the addition of potassium fluoride to produce a potassium fluoride complex, or precipitated with ammonia as the pentoxide: Followed by: Several methods are used for the reduction to metallic niobium. The electrolysis of a molten mixture of [] and sodium chloride is one; the other is the reduction of the fluoride with sodium. With this method, relatively high-purity niobium can be obtained. In large-scale production, is reduced with hydrogen or carbon. In the aluminothermic reaction, a mixture of iron oxide and niobium oxide is reacted with aluminium: Small amounts of oxidizers like sodium nitrate are added to enhance the reaction. The result is aluminium oxide and ferroniobium, an alloy of iron and niobium used in steel production. Ferroniobium contains between 60 and 70% niobium. Without iron oxide, the aluminothermic process is used to produce niobium. Further purification is necessary to reach the grade for superconductive alloys. Electron beam melting under vacuum is the method used by the two major distributors of niobium. CBMM from Brazil controlled 85 percent of the world's niobium production. The United States Geological Survey estimates that production increased from 38,700 tonnes in 2005 to 44,500 tonnes in 2006. Worldwide resources are estimated to be 4.4 million tonnes. During the ten-year period between 1995 and 2005, production more than doubled, starting from 17,800 tonnes in 1995. Between 2009 and 2011, production was stable at 63,000 tonnes per year, with a slight decrease in 2012 to only 50,000 tonnes per year. 70,000 tonnes of tantalum ore are produced yearly. Brazil produces 90% of tantalum ore, with Canada, Australia, China, and Rwanda also producing the element. The demand for tantalum is around 1,200 tonnes per year.
Dubnium and beyond
Dubnium is produced synthetically by bombarding actinides with lighter elements. To date, no experiments in a supercollider have been conducted to synthesize the next member of the group, either unpentseptium (Ups) or unpentennium (Upe). As unpentseptium and unpentennium are both late period 8 elements, it is unlikely that these elements will be synthesized in the near future; current attempts have only been made on elements up to atomic number 127.
Applications
Vanadium's main application is in alloys, such as vanadium steel.
Vanadium alloys are used in springs, tools, jet engines, armor plating, and nuclear reactors. Vanadium oxide gives ceramics a golden color, and other vanadium compounds are used as catalysts to produce polymers. Small amounts of niobium are added to stainless steel to improve its quality. Niobium alloys are also used in rocket nozzles because of niobium's high corrosion resistance. Tantalum has four main types of applications. Tantalum is added into objects exposed to high temperatures, in electronic devices, in surgical implants, and for handling corrosive substances. Dubnium has no applications due to the difficulty of its synthesis and the very short half-lives of even its longest-lived isotopes. Biological occurrences Out of the group 5 elements, only vanadium has been identified as playing a role in the biological chemistry of living systems, but even it plays a very limited role in biology, and is more important in ocean environments than on land. Vanadium, essential to ascidians and tunicates as vanabins, has been known in the blood cells of Ascidiacea (sea squirts) since 1911, in concentrations of vanadium in their blood more than 100 times higher than the concentration of vanadium in the seawater around them. Several species of macrofungi accumulate vanadium (up to 500 mg/kg in dry weight). Vanadium-dependent bromoperoxidase generates organobromine compounds in a number of species of marine algae. Rats and chickens are also known to require vanadium in very small amounts and deficiencies result in reduced growth and impaired reproduction. Vanadium is a relatively controversial dietary supplement, primarily for increasing insulin sensitivity and body-building. Vanadyl sulfate may improve glucose control in people with type 2 diabetes. In addition, decavanadate and oxovanadates are species that potentially have many biological activities and that have been successfully used as tools in the comprehension of several biochemical processes. Toxicity and precautions Pure vanadium is not known to be toxic. However, vanadium pentoxide causes severe irritation of the eyes, nose, and throat. Tetravalent VOSO4 has been reported to be at least 5 times more toxic than trivalent V2O3. The Occupational Safety and Health Administration has set an exposure limit of 0.05 mg/m3 for vanadium pentoxide dust and 0.1 mg/m3 for vanadium pentoxide fumes in workplace air for an 8-hour workday, 40-hour work week. The National Institute for Occupational Safety and Health has recommended that 35 mg/m3 of vanadium be considered immediately dangerous to life and health, that is, likely to cause permanent health problems or death. Vanadium compounds are poorly absorbed through the gastrointestinal system. Inhalation of vanadium and vanadium compounds results primarily in adverse effects on the respiratory system. Quantitative data are, however, insufficient to derive a subchronic or chronic inhalation reference dose. Other effects have been reported after oral or inhalation exposures on blood parameters, liver, neurological development, and other organs in rats. There is little evidence that vanadium or vanadium compounds are reproductive toxins or teratogens. Vanadium pentoxide was reported to be carcinogenic in male rats and in male and female mice by inhalation in an NTP study, although the interpretation of the results has recently been disputed. The carcinogenicity of vanadium has not been determined by the United States Environmental Protection Agency. 
Vanadium traces in diesel fuels are the main fuel component responsible for high-temperature corrosion. During combustion, vanadium oxidizes and reacts with sodium and sulfur, yielding vanadate compounds with melting points as low as 530 °C, which attack the passivation layer on steel and render it susceptible to corrosion. The solid vanadium compounds also abrade engine components. Niobium has no known biological role. While niobium dust is an eye and skin irritant and a potential fire hazard, elemental niobium on a larger scale is physiologically inert (and thus hypoallergenic) and harmless. It is often used in jewelry and has been tested for use in some medical implants. Niobium and its compounds are thought to be slightly toxic. Short- and long-term exposure to niobates and niobium chloride, two water-soluble chemicals, has been tested in rats. Rats treated with a single injection of niobium pentachloride or niobates show a median lethal dose (LD50) between 10 and 100 mg/kg. For oral administration the toxicity is lower; a study with rats yielded an LD50 of 940 mg/kg after seven days. Compounds containing tantalum are rarely encountered in the laboratory; the metal and its compounds rarely cause injury, and when they do, the injuries are normally rashes. The metal is highly biocompatible and is used for body implants and coatings; therefore, attention may be focused on other elements or the physical nature of the chemical compound. People can be exposed to tantalum in the workplace by breathing it in, skin contact, or eye contact. The Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for tantalum exposure in the workplace as 5 mg/m3 over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit of 5 mg/m3 over an 8-hour workday and a short-term limit of 10 mg/m3. At levels of 2500 mg/m3, tantalum is immediately dangerous to life and health.
Physical sciences
Group 5
Chemistry
483855
https://en.wikipedia.org/wiki/Cloud%20chamber
Cloud chamber
A cloud chamber, also known as a Wilson chamber, is a particle detector used for visualizing the passage of ionizing radiation. A cloud chamber consists of a sealed environment containing a supersaturated vapor of water or alcohol. An energetic charged particle (for example, an alpha or beta particle) interacts with the gaseous mixture by knocking electrons off gas molecules via electrostatic forces during collisions, resulting in a trail of ionized gas particles. The resulting ions act as condensation centers around which a mist-like trail of small droplets forms if the gas mixture is at the point of condensation. These droplets are visible as a "cloud" track that persists for several seconds while the droplets fall through the vapor. These tracks have characteristic shapes. For example, an alpha particle track is thick and straight, while a beta particle track is wispy and shows more evidence of deflections by collisions. Cloud chambers were invented in the early 1900s by the Scottish physicist Charles Thomson Rees Wilson. They played a prominent role in experimental particle physics from the 1920s to the 1950s, until the advent of the bubble chamber. In particular, the discoveries of the positron in 1932 (see Fig. 1) and the muon in 1936, both by Carl Anderson (awarded a Nobel Prize in Physics in 1936), used cloud chambers. The discovery of the kaon by George Rochester and Clifford Charles Butler in 1947 was made using a cloud chamber as the detector. In each of these cases, cosmic rays were the source of ionizing radiation. Yet they were also used with artificial sources of particles, for example in radiography applications as part of the Manhattan Project. Invention Charles Thomson Rees Wilson (1869–1959), a Scottish physicist, is credited with inventing the cloud chamber. Inspired by sightings of the Brocken spectre while working on the summit of Ben Nevis in 1894, he began to develop expansion chambers for studying cloud formation and optical phenomena in moist air. Very rapidly he discovered that ions could act as centers for water droplet formation in such chambers. He pursued the application of this discovery and perfected the first cloud chamber in 1911. In Wilson's original chamber, the air inside the sealed device was saturated with water vapor, then a diaphragm was used to expand the air inside the chamber (adiabatic expansion), cooling the air and starting to condense water vapor. Hence the name expansion cloud chamber is used. When an ionizing particle passes through the chamber, water vapor condenses on the resulting ions and the trail of the particle is visible in the vapor cloud. A cine film was used to record the images. Further developments were made by Patrick Blackett who utilised a stiff spring to expand and compress the chamber very rapidly, making the chamber sensitive to particles several times a second. This kind of chamber is also called a pulsed chamber because the conditions for operation are not continuously maintained. Wilson received half the Nobel Prize in Physics in 1927 for his work on the cloud chamber (the same year as Arthur Compton received half the prize for the Compton Effect). The diffusion cloud chamber was developed in 1936 by Alexander Langsdorf. This chamber differs from the expansion cloud chamber in that it is continuously sensitized to radiation, and in that the bottom must be cooled to a rather low temperature, generally colder than . Instead of water vapor, alcohol is used because of its lower freezing point. 
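The amount of cooling available from such an expansion follows from the adiabatic relation for an ideal gas, T·V^(γ−1) = constant. The short Python sketch below uses assumed, illustrative values (room-temperature air and a modest expansion ratio), not figures from Wilson's apparatus:

# Rough illustration of adiabatic expansion cooling in an expansion cloud chamber.
# The starting temperature and expansion ratio are assumed illustrative values.
gamma = 1.4            # heat-capacity ratio for (mostly) air
T_initial = 293.0      # K, roughly room temperature
expansion_ratio = 1.3  # V_final / V_initial (assumed)

# Reversible adiabatic expansion of an ideal gas: T * V**(gamma - 1) is constant.
T_final = T_initial * expansion_ratio ** -(gamma - 1)
print(f"{T_final:.0f} K, i.e. about {T_final - 273.15:.0f} °C")

Under these assumed values the gas cools by roughly 30 K, which is more than enough to drive already-saturated vapor into supersaturation and make the chamber briefly sensitive after each expansion.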
Cloud chambers cooled by dry ice or Peltier effect thermoelectric cooling are common demonstration and hobbyist devices; the alcohol used in them is commonly isopropyl alcohol or methylated spirit. Structure and operation Diffusion-type cloud chambers will be discussed here. A simple cloud chamber consists of the sealed environment, a warm top plate and a cold bottom plate (See Fig. 3). It requires a source of liquid alcohol at the warm side of the chamber where the liquid evaporates, forming a vapor that cools as it falls through the gas and condenses on the cold bottom plate. Some sort of ionizing radiation is needed. Isopropanol, methanol, or other alcohol vapor saturates the chamber. The alcohol falls as it cools down and the cold condenser provides a steep temperature gradient. The result is a supersaturated environment. As energetic charged particles pass through the gas they leave ionization trails. The alcohol vapor condenses around gaseous ion trails left behind by the ionizing particles. This occurs because alcohol and water molecules are polar, resulting in a net attractive force toward a nearby free charge (See Fig. 4). The result is a misty cloud-like formation, seen by the presence of droplets falling down to the condenser. When the tracks are emitted from a source, their point of origin can easily be determined. Fig. 5 shows an example of an alpha particle from a Pb-210 pin-type source undergoing Rutherford scattering. Just above the cold condenser plate there is a volume of the chamber which is sensitive to ionization tracks. The ion trail left by the radioactive particles provides an optimal trigger for condensation and cloud formation. This sensitive volume is increased in height by employing a steep temperature gradient, and stable conditions. A strong electric field is often used to draw cloud tracks down to the sensitive region of the chamber and increase the sensitivity of the chamber. The electric field can also serve to prevent large amounts of background "rain", caused by condensation forming above the sensitive volume of the chamber, from obscuring tracks in the sensitive region with constant precipitation. A black background makes it easier to observe cloud tracks, and typically a tangential light source is needed to illuminate the white droplets against the black background. Often the tracks are not apparent until a shallow pool of alcohol is formed at the condenser plate. If a magnetic field is applied across the cloud chamber, positively and negatively charged particles will curve in opposite directions, according to the Lorentz force law; strong-enough fields are difficult to achieve, however, with small hobbyist setups. This method was also used to prove the existence of the positron in 1932, in accordance with Paul Dirac's theoretical proof, published in 1928. Other particle detectors The bubble chamber was invented by Donald A. Glaser of the United States in 1952, and for this, he was awarded the Nobel Prize in Physics in 1960. The bubble chamber similarly reveals the tracks of subatomic particles, but inverts the principle of the cloud chamber to detect them as trails of bubbles in a superheated liquid, usually liquid hydrogen, rather than as trails of drops in a supersaturated vapor. Bubble chambers can be made physically larger than cloud chambers, and since they are filled with much-denser liquid material, they can reveal the tracks of much more energetic particles. 
These factors rapidly made the bubble chamber the predominant particle detector for a number of decades, so that cloud chambers were effectively superseded in fundamental research by the start of the 1960s. A spark chamber is an electrical device that uses a grid of uninsulated electric wires in a gas-filled chamber, with high voltages applied between the wires. Energetic charged particles cause ionization of the gas along the path of the particle in the same way as in the Wilson cloud chamber, but in this case the ambient electric fields are high enough to precipitate full-scale gas breakdown in the form of sparks at the position of the initial ionization. The presence and location of these sparks are then registered electrically, and the information is stored for later analysis, such as by a digital computer. Similar condensation effects can be observed as Wilson clouds, also called condensation clouds, at large explosions in humid air and other Prandtl–Glauert singularity effects. Gallery
Physical sciences
Particle physics: General
null
483861
https://en.wikipedia.org/wiki/Group%207%20element
Group 7 element
Group 7, numbered by IUPAC nomenclature, is a group of elements in the periodic table. It contains manganese (Mn), technetium (Tc), rhenium (Re) and bohrium (Bh). This group lies in the d-block of the periodic table, and its members are hence transition metals. This group is sometimes called the manganese group or manganese family after its lightest member; however, the group itself has not acquired a trivial name because it belongs to the broader grouping of the transition metals. The group 7 elements tend to have a major group oxidation state (+7), although this trend is markedly less coherent than in the previous groups. Like other groups, the members of this family show patterns in their electron configurations, especially the outermost shells, resulting in trends in chemical behavior. In nature, manganese is a fairly common element, whereas rhenium is rare, technetium only occurs in trace quantities, and bohrium is entirely synthetic. Physical properties The trends in group 7 follow, although less noticeably, those of the other early d-block groups and reflect the addition of a filled f-shell into the core in passing from the fifth to the sixth period. All group 7 elements crystallize in the hexagonal close packed (hcp) structure except manganese, which crystallizes in the body centered cubic (bcc) structure. Bohrium is also expected to crystallize in the hcp structure. The table below is a summary of the key physical properties of the group 7 elements. The question-marked value is predicted. Chemical properties Like other groups, the members of this family show patterns in their electron configurations, especially the outermost shells: All the members of the group readily exhibit their group oxidation state of +7, and the state becomes more stable as the group is descended. Technetium also shows a stable +4 state whilst rhenium exhibits stable +4 and +3 states. Bohrium may therefore also show these lower states as well. The higher +7 oxidation state is more likely to exist in oxyanions, such as perbohrate, BhO4−, analogous to the lighter permanganate, pertechnetate, and perrhenate. Nevertheless, bohrium(VII) is likely to be unstable in aqueous solution, and would probably be easily reduced to the more stable bohrium(IV). Compounds Oxides Manganese Manganese forms a variety of oxides: MnO, Mn3O4, Mn2O3, MnO2, MnO3 and Mn2O7. Manganese(II) oxide is an inorganic compound that forms green crystals. Like many monoxides, MnO adopts the rock salt structure, where cations and anions are both octahedrally coordinated. Also like many oxides, manganese(II) oxide is often nonstoichiometric: its composition can vary from MnO to MnO1.045. Manganese(II,III) oxide is formed when any manganese oxide is heated in air above 1000 °C. Considerable research has centred on producing nanocrystalline Mn3O4 and various syntheses that involve oxidation of MnII or reduction of MnVI. Manganese(III) oxide is unlike many other transition metal oxides in that it does not adopt the corundum (Al2O3) structure. Two forms are generally recognized, α-Mn2O3 and γ-Mn2O3, although a high pressure form with the CaIrO3 structure has been reported too. Manganese(IV) oxide is a blackish or brown solid that occurs naturally as the mineral pyrolusite, which is the main ore of manganese and a component of manganese nodules. The principal use for MnO2 is for dry-cell batteries, such as the alkaline battery and the zinc–carbon battery. 
Manganese(VII) oxide is dark green in its crystalline form. The liquid is green by reflected light and red by transmitted light. It is soluble in carbon tetrachloride, and decomposes when in contact with water. Technetium Technetium's main oxides are technetium(IV) oxide and technetium(VII) oxide. Technetium(IV) oxide was first produced in 1949 by electrolyzing a solution of ammonium pertechnetate under ammonium hydroxide. It has often been used to separate technetium from molybdenum and rhenium. More efficient ways are the reduction of ammonium pertechnetate by zinc metal and hydrochloric acid, stannous chloride, hydrazine, hydroxylamine, or ascorbic acid, the hydrolysis of potassium hexachlorotechnetate, or the decomposition of ammonium pertechnetate at 700 °C under an inert atmosphere. It reacts with oxygen to produce technetium(VII) oxide at 450 °C. Technetium(VII) oxide can be prepared directly by the oxidation of technetium at 450–500 °C. It is a rare example of a molecular binary metal oxide. Other examples are ruthenium(VIII) oxide and osmium(VIII) oxide. It adopts a centrosymmetric corner-shared bi-tetrahedral structure in which the terminal and bridging Tc−O bonds are 167 pm and 184 pm, respectively, and the Tc−O−Tc angle is 180°. Rhenium Rhenium's main oxides are rhenium(IV) oxide and rhenium(VII) oxide. Rhenium(IV) oxide is a gray to black crystalline solid that can be formed by comproportionation. At high temperatures it undergoes disproportionation. It is a laboratory reagent that can be used as a catalyst. It adopts the rutile structure. It forms perrhenates with alkaline hydrogen peroxide and oxidizing acids. In molten sodium hydroxide it forms sodium rhenate: 2NaOH + ReO2 → Na2ReO3 + H2O Rhenium(VII) oxide can be formed when rhenium or its oxides or sulfides are oxidized at 500–700 °C in air. It dissolves in water to give perrhenic acid. Heating Re2O7 gives rhenium(IV) oxide, signalled by the appearance of a dark blue coloration. In its solid form, Re2O7 consists of alternating octahedral and tetrahedral Re centres. It is the raw material for all rhenium compounds, being the volatile fraction obtained upon roasting the host ore. Rhenium, in addition to the +4 and +7 oxidation states, also forms a trioxide. It can be formed by reducing rhenium(VII) oxide with carbon monoxide at 200 °C or with elemental rhenium at 400 °C. It can also be reduced with dioxane. It is a red solid with a metallic lustre that resembles copper in appearance, and is the only stable trioxide of the group 7 elements. Halides Manganese Manganese can form compounds in the +2, +3 and +4 oxidation states. The manganese(II) compounds are often light pink solids. Like some other metal difluorides, MnF2 crystallizes in the rutile structure, which features octahedral Mn centers, and it is used in the manufacture of special kinds of glass and lasers. Scacchite is the natural, anhydrous form of manganese(II) chloride. The only other currently known mineral systematized as manganese chloride is kempite, a representative of the atacamite group, a group of hydroxide-chlorides. It can be used in place of palladium in the Stille reaction, which couples two carbon atoms using an organotin compound. It can be used as a pink pigment or as a source of the manganese ion or iodide ion. It is often used in the lighting industry. Technetium The following binary (containing only two elements) technetium halides are known: TcF6, TcF5, TcCl4, TcBr4, TcBr3, α-TcCl3, β-TcCl3, TcI3, α-TcCl2, and β-TcCl2. 
The oxidation states range from Tc(VI) to Tc(II). Technetium halides exhibit different structure types, such as molecular octahedral complexes, extended chains, layered sheets, and metal clusters arranged in a three-dimensional network. These compounds are produced by combining the metal and halogen or by less direct reactions. TcCl4 is obtained by chlorination of Tc metal or Tc2O7. Upon heating, TcCl4 gives the corresponding Tc(III) and Tc(II) chlorides. TcCl4 → α-TcCl3 + 1/2 Cl2 TcCl3 → β-TcCl2 + 1/2 Cl2 The structure of TcCl4 is composed of infinite zigzag chains of edge-sharing TcCl6 octahedra. It is isomorphous to the transition metal tetrachlorides of zirconium, hafnium, and platinum. Two polymorphs of technetium trichloride exist, α- and β-TcCl3. The α polymorph is also denoted as Tc3Cl9. It adopts a confacial bioctahedral structure. It is prepared by treating the chloro-acetate Tc2(O2CCH3)4Cl2 with HCl. Like Re3Cl9, the structure of the α-polymorph consists of triangles with short M-M distances. β-TcCl3 features octahedral Tc centers, which are organized in pairs, as seen also for molybdenum trichloride. TcBr3 does not adopt the structure of either trichloride phase. Instead it has the structure of molybdenum tribromide, consisting of chains of confacial octahedra with alternating short and long Tc—Tc contacts. TcI3 has the same structure as the high temperature phase of TiI3, featuring chains of confacial octahedra with equal Tc—Tc contacts. Several anionic technetium halides are known. The binary tetrahalides can be converted to the hexahalides [TcX6]2− (X = F, Cl, Br, I), which adopt octahedral molecular geometry. More reduced halides form anionic clusters with Tc–Tc bonds. The situation is similar for the related elements Mo, W, and Re. These clusters have the nuclearities Tc4, Tc6, Tc8, and Tc13. The more stable Tc6 and Tc8 clusters have prism shapes where vertical pairs of Tc atoms are connected by triple bonds and the planar atoms by single bonds. Every technetium atom makes six bonds, and the remaining valence electrons can be saturated by one axial and two bridging ligand halogen atoms such as chlorine or bromine. Rhenium The most common rhenium chlorides are ReCl6, ReCl5, ReCl4, and ReCl3. The structures of these compounds often feature extensive Re-Re bonding, which is characteristic of this metal in oxidation states lower than VII. Salts of [Re2Cl8]2− feature a quadruple metal-metal bond. Although the highest rhenium chloride features Re(VI), fluorine gives the d0 Re(VII) derivative rhenium heptafluoride. Bromides and iodides of rhenium are also well known. Like tungsten and molybdenum, with which it shares chemical similarities, rhenium forms a variety of oxyhalides. The oxychlorides are most common, and include ReOCl4 and ReOCl3. Organometallic compounds Manganese Organomanganese compounds were first reported in 1937 by Gilman and Bailee, who described the reaction of phenyllithium and manganese(II) iodide to form phenylmanganese iodide (PhMnI) and diphenylmanganese (Ph2Mn). Following this precedent, other organomanganese halides can be obtained by alkylation of manganese(II) chloride, manganese(II) bromide, and manganese(II) iodide. Manganese iodide is attractive because the anhydrous compound can be prepared in situ from manganese and iodine in ether. Typical alkylating agents are organolithium or organomagnesium compounds. The chemistry of organometallic compounds of Mn(II) is unusual among the transition metals due to the high ionic character of the Mn(II)-C bond. 
The reactivity of organomanganese compounds can be compared to that of organomagnesium and organozinc compounds. The electronegativity of Mn (1.55) is comparable to that of Mg (1.31) and Zn (1.65), making the carbon atom (EN = 2.55) nucleophilic. The reduction potential of Mn is also intermediate between Mg and Zn. Technetium Technetium forms a variety of coordination complexes with organic ligands. Many have been well-investigated because of their relevance to nuclear medicine. Technetium forms a variety of compounds with Tc–C bonds, i.e. organotechnetium complexes. Prominent members of this class are complexes with CO, arene, and cyclopentadienyl ligands. The binary carbonyl Tc2(CO)10 is a white volatile solid. In this molecule, two technetium atoms are bound to each other; each atom is surrounded by octahedra of five carbonyl ligands. The bond length between technetium atoms, 303 pm, is significantly larger than the distance between two atoms in metallic technetium (272 pm). Similar carbonyls are formed by technetium's congeners, manganese and rhenium. Interest in organotechnetium compounds has also been motivated by applications in nuclear medicine. Unusually among metal carbonyls, Tc forms aquo-carbonyl complexes, a prominent example being [Tc(CO)3(H2O)3]+. Rhenium Dirhenium decacarbonyl is the most common entry to organorhenium chemistry. Its reduction with sodium amalgam gives Na[Re(CO)5] with rhenium in the formal oxidation state −1. Dirhenium decacarbonyl can be oxidised with bromine to bromopentacarbonylrhenium(I): Re2(CO)10 + Br2 → 2 Re(CO)5Br Reduction of this pentacarbonyl with zinc and acetic acid gives pentacarbonylhydridorhenium: Re(CO)5Br + Zn + HOAc → Re(CO)5H + ZnBr(OAc) Methylrhenium trioxide ("MTO"), CH3ReO3, is a volatile, colourless solid that has been used as a catalyst in some laboratory experiments. It can be prepared by many routes; a typical method is the reaction of Re2O7 and tetramethyltin: Re2O7 + (CH3)4Sn → CH3ReO3 + (CH3)3SnOReO3 Analogous alkyl and aryl derivatives are known. MTO catalyses oxidations with hydrogen peroxide. Terminal alkynes yield the corresponding acid or ester, internal alkynes yield diketones, and alkenes give epoxides. MTO also catalyses the conversion of aldehydes and diazoalkanes into alkenes. Polyoxometalates The polyoxotechnetate (polyoxometalate of technetium) contains both Tc(V) and Tc(VII) in a 4:16 ratio and is obtained as the hydronium salt [H7O3]4[Tc20O68]·4H2O by concentrating an HTcO4 solution. The first empirically isolated polyoxorhenate was the white [Re4O15]2− and contained Re(VII) in both octahedral and tetrahedral coordination. History Manganese Manganese dioxide, which is abundant in nature, has long been used as a pigment. The cave paintings in Gargas that are 30,000 to 24,000 years old are made from the mineral form of MnO2 pigments. Manganese compounds were used by Egyptian and Roman glassmakers, either to add to, or remove, color from glass. Use as "glassmakers soap" continued through the Middle Ages until modern times and is evident in 14th-century glass from Venice. Technetium and rhenium Rhenium (meaning: "Rhine") was the last-discovered of the elements that have a stable isotope (other new elements discovered in nature since then, such as francium, are radioactive). The existence of a yet-undiscovered element at this position in the periodic table had been first predicted by Dmitri Mendeleev. Other calculated information was obtained by Henry Moseley in 1914. 
In 1908, Japanese chemist Masataka Ogawa announced that he had discovered the 43rd element and named it nipponium (Np) after Japan (Nippon in Japanese). In fact, what he had was rhenium (element 75), not technetium. The symbol Np was later used for the element neptunium, and the name "nihonium", also named after Japan, along with symbol Nh, was later used for element 113. Element 113 was also discovered by a team of Japanese scientists and was named in respectful homage to Ogawa's work. Rhenium was rediscovered by Walter Noddack, Ida Noddack, and Otto Berg in Germany. In 1925 they reported that they had detected the element in platinum ore and in the mineral columbite. They also found rhenium in gadolinite and molybdenite. In 1928 they were able to extract 1 g of the element by processing 660 kg of molybdenite. It was estimated in 1968 that 75% of the rhenium metal in the United States was used for research and the development of refractory metal alloys. It took several years from that point before the superalloys became widely used. The discovery of element 43 was finally confirmed in a 1937 experiment at the University of Palermo in Sicily by Carlo Perrier and Emilio Segrè. In mid-1936, Segrè visited the United States, first Columbia University in New York and then the Lawrence Berkeley National Laboratory in California. He persuaded cyclotron inventor Ernest Lawrence to let him take back some discarded cyclotron parts that had become radioactive. Lawrence mailed him a molybdenum foil that had been part of the deflector in the cyclotron. Bohrium Two groups claimed discovery of the element bohrium. Evidence of bohrium was first reported in 1976 by a Soviet research team led by Yuri Oganessian, in experiments in which targets of bismuth-209 and lead-208 were bombarded with accelerated nuclei of chromium-54 and manganese-55 respectively. Two activities, one with a half-life of one to two milliseconds, and the other with an approximately five-second half-life, were seen. Since the ratio of the intensities of these two activities was constant throughout the experiment, it was proposed that the first was from the isotope bohrium-261 and that the second was from its daughter dubnium-257. Later, the dubnium isotope was corrected to dubnium-258, which indeed has a five-second half-life (dubnium-257 has a one-second half-life); however, the half-life observed for its parent is much shorter than the half-lives later observed in the definitive discovery of bohrium at Darmstadt in 1981. The IUPAC/IUPAP Transfermium Working Group (TWG) concluded that while dubnium-258 was probably seen in this experiment, the evidence for the production of its parent bohrium-262 was not convincing enough. In 1981, a German research team led by Peter Armbruster and Gottfried Münzenberg at the GSI Helmholtz Centre for Heavy Ion Research (GSI Helmholtzzentrum für Schwerionenforschung) in Darmstadt bombarded a target of bismuth-209 with accelerated nuclei of chromium-54 to produce five atoms of the isotope bohrium-262: 209Bi + 54Cr → 262Bh + n This discovery was further substantiated by their detailed measurements of the alpha decay chain of the produced bohrium atoms to previously known isotopes of fermium and californium. The IUPAC/IUPAP Transfermium Working Group (TWG) recognised the GSI collaboration as official discoverers in their 1992 report. Occurrence and production Manganese Manganese comprises about 1000 ppm (0.1%) of the Earth's crust and is the 12th most abundant element. Soil contains 7–9000 ppm of manganese with an average of 440 ppm. 
The atmosphere contains 0.01 μg/m3. Manganese occurs principally as pyrolusite (MnO2), braunite (Mn2+Mn3+6SiO12), psilomelane, and to a lesser extent as rhodochrosite (MnCO3). The most important manganese ore is pyrolusite (MnO2). Other economically important manganese ores usually show a close spatial relation to the iron ores, such as sphalerite. Land-based resources are large but irregularly distributed. About 80% of the known world manganese resources are in South Africa; other important manganese deposits are in Ukraine, Australia, India, China, Gabon and Brazil. According to a 1978 estimate, the ocean floor has 500 billion tons of manganese nodules. Attempts to find economically viable methods of harvesting manganese nodules were abandoned in the 1970s. In South Africa, most identified deposits are located near Hotazel in the Northern Cape Province, with a 2011 estimate of 15 billion tons. In 2011 South Africa produced 3.4 million tons, topping all other nations. Manganese is mainly mined in South Africa, Australia, China, Gabon, Brazil, India, Kazakhstan, Ghana, Ukraine and Malaysia. For the production of ferromanganese, the manganese ore is mixed with iron ore and carbon, and then reduced either in a blast furnace or in an electric arc furnace. The resulting ferromanganese has a manganese content of 30 to 80%. Pure manganese used for the production of iron-free alloys is produced by leaching manganese ore with sulfuric acid and a subsequent electrowinning process. A more progressive extraction process involves directly reducing (a low grade) manganese ore in a heap leach. This is done by percolating natural gas through the bottom of the heap; the natural gas provides the heat (which needs to be at least 850 °C) and the reducing agent (carbon monoxide). This reduces all of the manganese ore to manganese oxide (MnO), which is a leachable form. The ore then travels through a grinding circuit to reduce the particle size of the ore to between 150 and 250 μm, increasing the surface area to aid leaching. The ore is then added to a leach tank of sulfuric acid and ferrous iron (Fe2+) in a 1.6:1 ratio. The iron reacts with the manganese dioxide (MnO2) to form iron(III) oxide-hydroxide (FeO(OH)) and elemental manganese (Mn). This process yields approximately 92% recovery of the manganese. For further purification, the manganese can then be sent to an electrowinning facility. In 1972 the CIA's Project Azorian, through billionaire Howard Hughes, commissioned the ship Hughes Glomar Explorer with the cover story of harvesting manganese nodules from the sea floor. That triggered a rush of activity to collect manganese nodules, which was not actually practical. The real mission of Hughes Glomar Explorer was to raise a sunken Soviet submarine, the K-129, with the goal of retrieving Soviet code books. An abundant resource of manganese exists in the form of Mn nodules found on the ocean floor. These nodules, which are composed of 29% manganese, are located along the ocean floor, and the potential impact of mining them is being researched. Physical, chemical, and biological environmental impacts can occur due to this nodule mining disturbing the seafloor and causing sediment plumes to form. This suspension includes metals and inorganic nutrients, which can lead to contamination of the near-bottom waters from dissolved toxic compounds. Mn nodules are also the grazing grounds, living space, and protection for endo- and epifaunal systems. When these nodules are removed, these systems are directly affected. 
Overall, this can cause species to leave the area or completely die off. Prior to the commencement of the mining itself, research is being conducted by United Nations affiliated bodies and state-sponsored companies in an attempt to fully understand environmental impacts in the hope of mitigating them. Technetium Technetium was first created by bombarding molybdenum atoms with deuterons that had been accelerated by a device called a cyclotron. Technetium occurs naturally in the Earth's crust in minute concentrations of about 0.003 parts per trillion. Technetium is so rare because the half-lives of 97Tc and 98Tc are only 4.2 million years. More than a thousand such periods have passed since the formation of the Earth, so the probability of survival of even one atom of primordial technetium is effectively zero. However, small amounts exist as spontaneous fission products in uranium ores. A kilogram of uranium contains an estimated 1 nanogram (10−9 g) of technetium, equivalent to roughly ten trillion atoms. Some red giant stars with the spectral types S, M, and N contain a spectral absorption line indicating the presence of technetium. These red giants are known informally as technetium stars. Rhenium Rhenium is one of the rarest elements in Earth's crust with an average concentration of 1 ppb; other sources quote a figure of 0.5 ppb, making it the 77th most abundant element in Earth's crust. Rhenium is probably not found free in nature (its possible natural occurrence is uncertain), but occurs in amounts up to 0.2% in the mineral molybdenite (which is primarily molybdenum disulfide), the major commercial source, although single molybdenite samples with up to 1.88% have been found. Chile has the world's largest rhenium reserves, part of the copper ore deposits, and was the leading producer as of 2005. It was only recently that the first rhenium mineral was found and described (in 1994), a rhenium sulfide mineral (ReS2) condensing from a fumarole on Kudriavy volcano, Iturup island, in the Kuril Islands. Kudriavy discharges up to 20–60 kg rhenium per year mostly in the form of rhenium disulfide. Named rheniite, this rare mineral commands high prices among collectors. Most of the rhenium extracted comes from porphyry molybdenum deposits. These ores typically contain 0.001% to 0.2% rhenium. Roasting the ore volatilizes rhenium oxides. Rhenium(VII) oxide and perrhenic acid readily dissolve in water; they are leached from flue dusts and gases and extracted by precipitating with potassium or ammonium chloride as the perrhenate salts, and purified by recrystallization. Total world production is between 40 and 50 tons/year; the main producers are in Chile, the United States, Peru, and Poland. Recycling of used Pt-Re catalyst and special alloys allows the recovery of another 10 tons per year. Prices for the metal rose rapidly in early 2008, from $1000–$2000 per kg in 2003–2006 to over $10,000 per kg in February 2008. The metal form is prepared by reducing ammonium perrhenate with hydrogen at high temperatures: 2 NH4ReO4 + 7 H2 → 2 Re + 8 H2O + 2 NH3 There are also technologies for the associated extraction of rhenium from productive solutions of underground leaching of uranium ores. Bohrium Bohrium is a synthetic element that does not occur in nature. Very few atoms have been synthesized, and because of this scarcity and its radioactivity, only limited research has been conducted. Bohrium is only produced in particle accelerators and has never been isolated in pure form. 
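The occurrence figures given above for technetium can be checked with back-of-the-envelope arithmetic. In the Python sketch below, the only inputs not taken from the text are the approximate age of the Earth (about 4.5 billion years), Avogadro's number, and the molar mass of technetium-99:

import math

half_life = 4.2e6        # years, the longest technetium half-lives (97Tc, 98Tc)
age_of_earth = 4.5e9     # years (assumed standard value)
n_half_lives = age_of_earth / half_life
log10_remaining = n_half_lives * math.log10(0.5)
# Roughly 1070 half-lives leave about 10**-322 of any primordial technetium.
print(f"{n_half_lives:.0f} half-lives, fraction remaining ~ 10^{log10_remaining:.0f}")

mass_tc = 1e-9           # g of technetium per kilogram of uranium (from the text)
molar_mass = 99.0        # g/mol for fission-produced technetium-99
atoms = mass_tc / molar_mass * 6.022e23
print(f"Atoms in 1 ng of technetium-99: {atoms:.1e}")  # about 6e12, i.e. trillions

Both results are consistent with the statements above: essentially no primordial technetium can have survived, while a single nanogram of the element still corresponds to trillions of atoms.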
Applications The facial isomers of both rhenium and manganese 2,2'-bipyridyl tricarbonyl halide complexes have been extensively researched as catalysts for electrochemical carbon dioxide reduction due to their high selectivity and stability. They are commonly abbreviated as M(R-bpy)(CO)3X where M = Mn, Re; R-bpy = 4,4'-disubstituted 2,2'-bipyridine; and X = Cl, Br. Manganese The rarity of rhenium has shifted research toward the manganese version of these catalysts as a more sustainable alternative. The first reports of catalytic activity of Mn(R-bpy)(CO)3Br towards CO2 reduction came from Chardon-Noblat and coworkers in 2011. Compared to Re analogs, Mn(R-bpy)(CO)3Br shows catalytic activity at lower overpotentials. The catalytic mechanism for Mn(R-bpy)(CO)3X is complex and depends on the steric profile of the bipyridine ligand. When R is not bulky, the catalyst dimerizes to form [Mn(R-bpy)(CO)3]2 before forming the active species. When R is bulky, however, the complex forms the active species without dimerizing, reducing the overpotential of CO2 reduction by 200-300 mV. Unlike Re(R-bpy)(CO)3X, Mn(R-bpy)(CO)3X only reduces CO2 in the presence of an acid. Technetium Technetium-99m ("m" indicates that this is a metastable nuclear isomer) is used in radioactive isotope medical tests. For example, technetium-99m is a radioactive tracer that medical imaging equipment tracks in the human body. It is well suited to the role because it emits readily detectable 140 keV gamma rays, and its half-life is 6.01 hours (meaning that about 94% of it decays to technetium-99 in 24 hours). The chemistry of technetium allows it to be bound to a variety of biochemical compounds, each of which determines how it is metabolized and deposited in the body, and this single isotope can be used for a multitude of diagnostic tests. More than 50 common radiopharmaceuticals are based on technetium-99m for imaging and functional studies of the brain, heart muscle, thyroid, lungs, liver, gall bladder, kidneys, skeleton, blood, and tumors. Technetium-99m is also used in radioimaging. The longer-lived isotope technetium-95m, with a half-life of 61 days, is used as a radioactive tracer to study the movement of technetium in the environment and in plant and animal systems. Technetium-99 decays almost entirely by beta decay, emitting beta particles with consistent low energies and no accompanying gamma rays. Moreover, its long half-life means that this emission decreases very slowly with time. It can also be extracted to a high chemical and isotopic purity from radioactive waste. For these reasons, it is a National Institute of Standards and Technology (NIST) standard beta emitter, and is used for equipment calibration. Technetium-99 has also been proposed for optoelectronic devices and nanoscale nuclear batteries. Like rhenium and palladium, technetium can serve as a catalyst. In processes such as the dehydrogenation of isopropyl alcohol, it is a far more effective catalyst than either rhenium or palladium. However, its radioactivity is a major problem in safe catalytic applications. When steel is immersed in water, adding a small concentration (55 ppm) of potassium pertechnetate(VII) to the water protects the steel from corrosion, even if the temperature is raised to . For this reason, pertechnetate has been used as an anodic corrosion inhibitor for steel, although technetium's radioactivity poses problems that limit this application to self-contained systems. 
While (for example) can also inhibit corrosion, it requires a concentration ten times as high. In one experiment, a specimen of carbon steel was kept in an aqueous solution of pertechnetate for 20 years and was still uncorroded. The mechanism by which pertechnetate prevents corrosion is not well understood, but seems to involve the reversible formation of a thin surface layer (passivation). One theory holds that the pertechnetate reacts with the steel surface to form a layer of technetium dioxide which prevents further corrosion; the same effect explains how iron powder can be used to remove pertechnetate from water. The effect disappears rapidly if the concentration of pertechnetate falls below the minimum concentration or if too high a concentration of other ions is added. As noted, the radioactive nature of technetium (3 MBq/L at the concentrations required) makes this corrosion protection impractical in almost all situations. Nevertheless, corrosion protection by pertechnetate ions was proposed (but never adopted) for use in boiling water reactors. Rhenium The catalytic activity of Re(bpy)(CO)3Cl for carbon dioxide reduction was first studied by Lehn et al. and Meyer et al. in 1984 and 1985, respectively. Re(R-bpy)(CO)3X complexes exclusively produce CO from CO2 reduction with Faradaic efficiencies of close to 100% even in solutions with high concentrations of water or Brønsted acids. The catalytic mechanism of Re(R-bpy)(CO)3X involves reduction of the complex twice and loss of the X ligand to generate a five-coordinate active species which binds CO2. These complexes will reduce CO2 both with and without an additional acid present; however, the presence of an acid increases catalytic activity. The high selectivity of these complexes to CO2 reduction over the competing hydrogen evolution reaction has been shown by density functional theory studies to be related to the faster kinetics of CO2 binding compared to H+ binding. Bohrium Bohrium is a synthetic element and is too radioactive and short-lived to have any practical application. Toxicity and precautions Manganese compounds are less toxic than those of other widespread metals, such as nickel and copper. However, exposure to manganese dusts and fumes should not exceed the ceiling value of 5 mg/m3 even for short periods because of its toxicity level. Manganese poisoning has been linked to impaired motor skills and cognitive disorders. Technetium has low chemical toxicity. For example, no significant change in blood formula, body and organ weights, and food consumption could be detected for rats which ingested up to 15 μg of technetium-99 per gram of food for several weeks. In the body, technetium quickly gets converted to the stable ion, which is highly water-soluble and quickly excreted. The radiological toxicity of technetium (per unit of mass) is a function of compound, type of radiation for the isotope in question, and the isotope's half-life. However, it is radioactive, so all isotopes must be handled carefully. The primary hazard when working with technetium is inhalation of dust; such radioactive contamination in the lungs can pose a significant cancer risk. For most work, careful handling in a fume hood is sufficient, and a glove box is not needed. Very little is known about the toxicity of rhenium and its compounds because they are used in very small amounts. Soluble salts, such as the rhenium halides or perrhenates, could be hazardous due to elements other than rhenium or due to rhenium itself. 
Only a few compounds of rhenium have been tested for their acute toxicity; two examples are potassium perrhenate and rhenium trichloride, which were injected as a solution into rats. The perrhenate had an LD50 value of 2800 mg/kg after seven days (this is very low toxicity, similar to that of table salt), and the rhenium trichloride showed an LD50 of 280 mg/kg. Biological role Of the group 7 elements, only manganese has a role in the human body. It is an essential trace nutrient, with the body containing approximately 10 milligrams at any given time. It is present as a coenzyme in biological processes that include macronutrient metabolism, bone formation, and free radical defense systems. It is a critical component in dozens of proteins and enzymes. The manganese in the human body is mainly concentrated in the bones, while the remainder in soft tissue is concentrated in the liver and kidneys. In the human brain, the manganese is bound to manganese metalloproteins, most notably glutamine synthetase in astrocytes. Technetium, rhenium, and bohrium have no known biological roles. Technetium is, however, used in radioimaging.
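The 24-hour figure quoted in the Applications section above for technetium-99m follows directly from exponential decay with a 6.01-hour half-life, as this minimal Python check shows:

half_life_h = 6.01   # hours, technetium-99m (from the text)
elapsed_h = 24.0
remaining = 0.5 ** (elapsed_h / half_life_h)
print(f"Fraction decayed after 24 h: {1 - remaining:.1%}")  # about 94%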
Physical sciences
Group 7
Chemistry
484061
https://en.wikipedia.org/wiki/Naked%20mole-rat
Naked mole-rat
The naked mole-rat (Heterocephalus glaber), also known as the sand puppy, is a burrowing rodent native to the Horn of Africa and parts of Kenya, notably in Somali regions. It is closely related to the blesmols and is the only species in the genus Heterocephalus. The naked mole-rat exhibits a highly unusual set of physiological and behavioral traits that allow it to thrive in a harsh underground environment; most notably its being the only mammalian thermoconformer with an almost entirely ectothermic (cold-blooded) form of body temperature regulation, as well as exhibiting a complex social structure split between reproductive and non-reproductive castes, making it and the closely related Damaraland mole-rat (Fukomys damarensis) the only widely recognized examples of eusociality (the highest classification of sociality) in mammals. The naked mole-rat lacks pain sensitivity in its skin, and has very low metabolic and respiratory rates. It is also remarkable for its longevity and its resistance to cancer and oxygen deprivation. While formerly considered to belong to the same family as other African mole-rats, Bathyergidae, more recent investigation places it in a separate family, Heterocephalidae. Description Typical individuals are long and weigh . Queens are larger and may weigh well over , the largest reaching . They are well adapted to their underground existence. Their eyes are quite small, and their visual acuity is poor. Their legs are thin and short; however, they are highly adept at moving underground and can move backward as fast as they can move forward. Their large, protruding teeth are used to dig and their lips are sealed just behind the teeth, preventing soil from filling their mouths while digging. About a quarter of their musculature is used in the closing of their jaws while they dig. They have little hair (hence the common name) and wrinkled pink or yellowish skin. They lack an insulating layer in the skin. Physiology Metabolism and respiration The naked mole-rat is well adapted to the limited availability of oxygen within the tunnels of its typical habitat. It has underdeveloped lungs and its hemoglobin has a high affinity for oxygen, increasing the efficiency of oxygen uptake. It has a very low respiration and metabolic rate for an animal of its size, about 70% that of a mouse, thus using oxygen minimally. In response to long periods of hunger, its metabolic rate can be reduced by up to 25 percent. The naked mole-rat survives for at least 5 hours in air that contains only 5% oxygen; it does not show any significant signs of distress and continues normal activity. It can live in an atmosphere of 80% carbon dioxide and 20% oxygen. In a zero-oxygen atmosphere, it can survive 18 minutes apparently without suffering any harm (but none survived a test of 30 minutes). During the anoxic period it loses consciousness, its heart rate drops from about 200 to 50 beats per minute, and breathing stops apart from sporadic breathing attempts. When deprived of oxygen, the animal uses fructose in its anaerobic glycolysis, producing lactic acid. This pathway is not inhibited by acidosis as happens with glycolysis of glucose. As of 2017, it was not known how the naked mole-rat survives acidosis without tissue damage. Thermoregulation The naked mole-rat does not regulate its body temperature in typical mammalian fashion. They are thermoconformers rather than thermoregulators in that, unlike other mammals, their body temperature tracks the ambient temperature. 
However, it has also been claimed that "the Naked Mole-Rat has a distinct temperature and activity rhythm that is not coupled to environmental conditions." The relationship between oxygen consumption and ambient temperature switches from a typical poikilothermic pattern to a homeothermic mode when temperature is at 29 °C or higher. At lower temperatures, naked mole-rats can use behavioral thermoregulation. For example, cold naked mole-rats huddle together or seek shallow parts of the burrows that are warmed by the sun. Conversely, when they get too hot, naked mole-rats retreat to the deeper, cooler parts of the burrows. Pain insensitivity The skin of naked mole-rats lacks the neurotransmitter substance P in its cutaneous sensory fibers. As a result, naked mole-rats feel no pain when they are exposed to acid or capsaicin. When they are injected with substance P, a type of neurotransmitter, the pain signaling works as it does in other mammals, but only with capsaicin and not with acids. This is proposed to be an adaptation to the animal living in high levels of carbon dioxide due to poorly ventilated living spaces, which would cause acid to build up in its body tissues. Naked mole-rats' substance P deficiency has also been tied to their lack of the histamine-induced itching and scratching behavior typical of rodents. Resistance to cancer Naked mole-rats have a high resistance to tumours, although it is likely that they are not entirely immune to related disorders. A potential mechanism that averts cancer is an "over-crowding" gene, p16, which prevents cell division once individual cells come into contact (known as "contact inhibition"). The cells of most mammals, including naked mole-rats, undergo contact inhibition via the gene p27 which prevents cellular reproduction at a much higher cell density than p16 does. The combination of p16 and p27 in naked mole-rat cells is a double barrier to uncontrolled cell proliferation, one of the hallmarks of cancer. In 2013, scientists reported that the reason naked mole-rats do not get cancer can be attributed to an "extremely high-molecular-mass hyaluronan" (HMW-HA) (a natural sugary substance), which is over "five times larger" than that in cancer-prone humans and cancer-susceptible laboratory animals. The scientific report was published a month later as the cover story of the journal Nature. A few months later, the same University of Rochester research team announced that naked mole-rats have ribosomes that produce extremely error-free proteins. Because of both of these discoveries, the journal Science named the naked mole-rat "Vertebrate of the Year" for 2013. In 2016, a report was published that recorded the first malignancies ever discovered in naked mole-rats, in two individuals. However, both naked mole-rats were captive-born at zoos, and hence lived in an environment with 21% atmospheric oxygen compared to their natural 2–9%, which may have promoted tumorigenesis. The Golan Heights blind mole-rat (Spalax golani) and the Judean Mountains blind mole-rat (Spalax judaei) are also resistant to cancer, but by a different mechanism. In July 2023, a study reported that transferring the gene responsible for HMW-HA from a naked mole-rat to mice led to improved health and an approximately 4.4 percent increase in median lifespan for the mice. Longevity Naked mole-rats can live longer than any other rodent, with lifespans in excess of 37 years; the next longest-lived rodent is the African porcupine at 28 years. 
The mortality rate of the species does not increase with age, and thus does not conform to that of most mammals (as frequently defined by the Gompertz-Makeham law of mortality). Naked mole-rats are highly resistant to cancer and maintain healthy vascular function for longer in their lifespan than shorter-lived rats. Queens age more slowly than other naked mole-rats. The reason for their longevity is debated, but is thought to be related to their ability to substantially reduce their metabolism in response to adverse conditions, and so prevent aging-induced damage from oxidative stress. This has been referred to as "living their life in pulses". Their longevity has also been attributed to "protein stability". Because of their extraordinary longevity, an international effort was put in place to sequence the genome of the naked mole-rat. A draft genome was made available in 2011, with an improved version released in 2014. Its somatic number is 2n = 60. Further transcriptome sequencing revealed that genes related to mitochondria and oxidation reduction are expressed more than they are in mice, which may contribute to their longevity. The DNA repair transcriptomes of the liver of humans, naked mole-rats, and mice were compared. The maximum lifespans of humans, naked mole-rats, and mice are, respectively, c. 120, 30, and 3 years. The longer-lived species, humans and naked mole-rats, expressed DNA repair genes, including core genes in several DNA repair pathways, at a higher level than did mice. In addition, several DNA repair pathways in humans and naked mole-rats were up-regulated compared with mice. These findings suggest that increased DNA repair facilitates greater longevity, and also are consistent with the DNA damage theory of aging. Size Reproducing females become the dominant female, usually by founding new colonies, fighting for the dominant position, or taking over once the reproducing female dies. These reproducing females tend to have longer bodies than those of their non-reproducing counterparts of the same skull width. The measurements of females before they became reproductive and after show significant increases in body size. It is believed that this trait does not occur due to pre-existing morphological differences but to the actual attainment of the dominant female position. As with the reproductive females, the reproductive males also appear to be larger than their non-reproducing counterparts, but the difference is smaller than in females. These males also have visible outlines of the testes through the skin of their abdomens. Unlike the females, there are usually multiple reproducing males. Chronobiology The naked mole-rat's subterranean habitat imposes constraints on its circadian rhythm. Living in constant darkness, most individuals possess a free-running activity pattern and are active both day and night, sleeping for short periods several times in between. Ecology and behavior Distribution and habitat The naked mole-rat is native to the drier parts of the tropical grasslands of East Africa, predominantly southern Ethiopia, Kenya, and Somalia. Clusters averaging 75 to 80 individuals live together in complex systems of burrows in arid African deserts. The tunnel systems built by naked mole-rats can stretch up to three to five kilometres (2–3 mi) in cumulative length. Roles The naked mole-rat was the first mammal found to be eusocial. The Damaraland mole-rat (Fukomys damarensis) is the only other eusocial mammal currently known. 
The social structure of naked mole-rats is similar to that in ants, termites, and some bees and wasps. Only one female (the queen) and one to three males reproduce, while the rest of the members of the colony function as workers. The queen and breeding males are able to breed at one year of age. Workers are sterile, with the smaller ones focusing on gathering food and maintaining the nest, while larger workers are more reactive in case of attack. The non-reproducing females appear to be reproductively suppressed, meaning the ovaries do not fully mature, and do not have the same levels of certain hormones as the reproducing females. On the other hand, there is little difference in hormone concentration between reproducing and non-reproducing males. In experiments where the reproductive female was removed or died, one of the non-reproducing females would take over and become sexually active. Non-reproducing members of the colony are involved in cooperative care of the pups produced by the reproducing female. This occurs through the workers keeping the pups from straying, foraging for food, grooming, contributing to extension of tunnels, and keeping them warm. Queen and gestation The relationships between the queen and the breeding males may last for many years; other females are temporarily sterile. Queens live from 13 to 18 years, and are extremely hostile to other females behaving like queens, or producing the hormones needed to become queens. When the queen dies, another female takes her place, sometimes after a violent struggle with her competitors. Once established, the new queen's body expands the space between the vertebrae in her backbone to become longer and ready to bear pups. Gestation is about 70 days. A litter typically ranges from three to twelve pups, but may be as large as twenty-eight. The average litter size is eleven. In the wild, naked mole-rats usually breed once a year, if the litter survives. In captivity, they breed all year long and can produce a litter every 80 days. The young are born blind and weigh about . The queen nurses them for the first month, after which the other members of the colony feed them fecal pap until they are old enough to eat solid food. Workers Smaller workers focus on acquiring food and maintaining tunnels, while the larger workers are more reactive in case of attacks. As in certain bee species, the workers are divided along a continuum of different worker-caste behaviors instead of discrete groups. Some function primarily as tunnellers, expanding the large network of tunnels within the burrow system, and some primarily as soldiers, protecting the group from outside predators. There are two main types of worker: the "frequent workers", who frequently perform tasks such as foraging and nest building, and the "infrequent workers", which show role overlap with the "frequent workers" but perform at a much slower rate. Workers are sterile when there is no new reproductive role to fill. Dispersers Inbreeding is common among naked mole-rats within a colony. Prolonged inbreeding is usually associated with lower fitness. However, the discovery of a disperser role has revealed an outbreeding mechanism in addition to inbreeding avoidance. Dispersers are morphologically, physiologically and behaviorally distinct from colony members and actively seek to leave their burrow when an escape opportunity presents itself. These individuals are equipped with generous fat reserves for their journey. 
Though they possess high levels of luteinizing hormone, dispersers are only interested in mating with individuals from foreign colonies rather than their own colony's queen. They also show little interest in working cooperatively with colony members in their natal burrow. Hence, the disperser morph is well-prepared to promote the exchange of individuals as well as genetic material between two otherwise separate colonies. Colonies Colonies range in size from 20 to 300 individuals, with an average of 75. Female mate choice Reproductively active female naked mole rats tend to associate with unfamiliar males (usually non-kin), whereas reproductively inactive females do not discriminate. The preference of reproductively active females for unfamiliar males is interpreted as an adaptation for inbreeding avoidance. Inbreeding is avoided because it ordinarily leads to the expression of recessive deleterious alleles. Diet Naked mole-rats feed primarily on very large tubers (weighing as much as a thousand times the body weight of a typical mole-rat) that they find deep underground through their mining operations. A single tuber can provide a colony with a long-term source of food—lasting for months, or even years, as they eat the inside but leave the outside, allowing the tuber to regenerate. Symbiotic bacteria in their intestines ferment the fibres, allowing otherwise indigestible cellulose to be turned into volatile fatty acids. Naked mole-rats sometimes also eat their own feces. This may be part of their eusocial behavior and a means of sharing hormones from the queen. Predators Naked mole rats are primarily preyed upon by snakes—especially the Rufous beaked snake and Kenyan sand boa—as well as various raptors. They are at their most vulnerable when constructing mounds and ejecting soil to the surface. Conservation status Naked mole-rats are not threatened. They are widespread and numerous in the drier regions of East Africa. The Photo Ark A naked mole-rat living at the Lincoln Children's Zoo was the first animal to be photographed for the National Geographic project, The Photo Ark, which has the goal of photographing all species living in zoos and wildlife sanctuaries around the globe in order to inspire action to save wildlife.
Biology and health sciences
Rodents
Animals
484254
https://en.wikipedia.org/wiki/K%C3%B6ppen%20climate%20classification
Köppen climate classification
The Köppen climate classification divides Earth climates into five main climate groups, with each group being divided based on patterns of seasonal precipitation and temperature. The five main groups are A (tropical), B (arid), C (temperate), D (continental), and E (polar). Each group and subgroup is represented by a letter. All climates are assigned a main group (the first letter). All climates except for those in the E group are assigned a seasonal precipitation subgroup (the second letter). For example, Af indicates a tropical rainforest climate. The system assigns a temperature subgroup for all groups other than those in the A group, indicated by the third letter for climates in B, C, D, and the second letter for climates in E. For example, Cfb indicates an oceanic climate with warm summers as indicated by the ending b. Climates are classified based on specific criteria unique to each climate type. The Köppen climate classification is the most widely used climate classification scheme. It was first published by German-Russian climatologist Wladimir Köppen (1846–1940) in 1884, with several later modifications by Köppen, notably in 1918 and 1936. Later, German climatologist Rudolf Geiger (1894–1981) introduced some changes to the classification system in 1954 and 1961, which is thus sometimes called the Köppen–Geiger climate classification. As Köppen designed the system based on his experience as a botanist, his main climate groups represent a classification by vegetation type. In addition to identifying climates, the system can be used to analyze ecosystem conditions and identify the main types of vegetation within climates. Due to its association with the plant life of a given region, the system is useful in predicting future changes of plant life within that region. The Köppen climate classification system was modified further within the Trewartha climate classification system in 1966 (revised in 1980). The Trewartha system sought to create a more refined middle latitude climate zone, which was one of the criticisms of the Köppen system (the climate group C was too general). Overview The Köppen climate classification scheme divides climates into five main climate groups: A (tropical), B (arid), C (temperate), D (continental), and E (polar). The second letter indicates the seasonal precipitation type, while the third letter indicates the level of heat. Summers are defined as the six-month period that is warmer either from April to September and/or October to March, while winter is the six-month period that is cooler. Group A: Tropical climates Tropical climates have an average temperature of or higher every month of the year, with significant precipitation. Af = Tropical rainforest climate; average precipitation of at least in every month. Am = Tropical monsoon climate; driest month (which nearly always occurs at or soon after the "winter" solstice for that side of the equator) with precipitation less than , but at least . Aw or As = Tropical wet and dry or savanna climate; with the driest month having precipitation less than and less than . Group B: Desert and semi-arid climates Desert and semi-arid climates are defined by low precipitation in a region that does not fit the polar (EF or ET) criteria of no month with an average temperature greater than . 
The precipitation threshold in millimeters is determined by multiplying the average annual temperature in Celsius by 20, then adding: If the annual precipitation is less than 50% of this threshold, the classification is BW (arid: desert climate); if it is in the range of 50%–100% of the threshold, the classification is BS (semi-arid: steppe climate). A third letter can be included to indicate temperature. Here, h signifies low-latitude climates (average annual temperature above ) while k signifies middle-latitude climates (average annual temperature less than 18 °C). In addition, n is used to denote a climate characterized by frequent fog and H for high altitudes. BWh = Hot desert climate BWk = Cold desert climate BSh = Hot semi-arid climate BSk = Cold semi-arid climate Group C: Temperate climates Temperate climates have the coldest month averaging between (or ) and and at least one month averaging above . For the distribution of precipitation in locations that both satisfy a dry summer (Cs) and a dry winter (Cw), a location is considered to have a wet summer (Cw) when more precipitation falls within the summer months than the winter months while a location is considered to have a dry summer (Cs) when more precipitation falls within the winter months. This additional criterion applies to locations that satisfies both Ds and Dw as well. Cfa = Humid subtropical climate; coldest month averaging above (or ), at least one month's average temperature above , and at least four months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Cfb = Temperate oceanic climate or subtropical highland climate; coldest month averaging above (or ), all months with average temperatures below , and at least four months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Cfc = Subpolar oceanic climate; coldest month averaging above (or ) and 1–3 months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Cwa = Monsoon-influenced humid subtropical climate; coldest month averaging above (or ), at least one month's average temperature above , and at least four months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Cwb = Subtropical highland climate or Monsoon-influenced temperate oceanic climate; coldest month averaging above (or ), all months with average temperatures below , and at least four months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Cwc = Cold subtropical highland climate or Monsoon-influenced subpolar oceanic climate; coldest month averaging above (or ) and 1–3 months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Csa = Hot-summer Mediterranean climate; coldest month averaging above (or ), at least one month's average temperature above , and at least four months averaging above . At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Csb = Warm-summer Mediterranean climate; coldest month averaging above (or ), all months with average temperatures below , and at least four months averaging above . 
At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Csc = Cold-summer Mediterranean climate; coldest month averaging above (or ) and 1–3 months averaging above . At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Group D: Continental climates Continental climates have at least one month averaging below (or ) and at least one month averaging above . Dfa = Hot-summer humid continental climate; coldest month averaging below (or ), at least one month's average temperature above , and at least four months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Dfb = Warm-summer humid continental climate; coldest month averaging below (or ), all months with average temperatures below , and at least four months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Dfc = Subarctic climate; coldest month averaging below (or ) and 1–3 months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Dfd = Extremely cold subarctic climate; coldest month averaging below and 1–3 months averaging above . No significant precipitation difference between seasons (neither the abovementioned set of conditions fulfilled). Dwa = Monsoon-influenced hot-summer humid continental climate; coldest month averaging below (or ), at least one month's average temperature above , and at least four months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Dwb = Monsoon-influenced warm-summer humid continental climate; coldest month averaging below (or ), all months with average temperatures below , and at least four months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Dwc = Monsoon-influenced subarctic climate; coldest month averaging below (or ) and 1–3 months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Dwd = Monsoon-influenced extremely cold subarctic climate; coldest month averaging below and 1–3 months averaging above . At least ten times as much rain in the wettest month of summer as in the driest month of winter. Dsa = Mediterranean-influenced hot-summer humid continental climate; coldest month averaging below (or ), average temperature of the warmest month above and at least four months averaging above . At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Dsb = Mediterranean-influenced warm-summer humid continental climate; coldest month averaging below (or ), average temperature of the warmest month below and at least four months averaging above . At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Dsc = Mediterranean-influenced subarctic climate; coldest month averaging below (or ) and 1–3 months averaging above . 
At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Dsd = Mediterranean-influenced extremely cold subarctic climate; coldest month averaging below and 1–3 months averaging above . At least three times as much precipitation in the wettest month of winter as in the driest month of summer, and the driest month of summer receives less than . Group E: Polar and alpine climates Polar and alpine climates has every month of the year with an average temperature below . ET = Tundra climate; average temperature of warmest month between and . EF = Ice cap climate; eternal winter, with all 12 months of the year with average temperatures below . Group A: Tropical/megathermal climates Tropical climates are characterized by constant high temperatures (at sea level and low elevations); all 12 months of the year have average temperatures of 18 °C (64.4 °F) or higher; and generally high annual precipitation. They are subdivided as follows: Af: Tropical rainforest climate All 12 months have an average precipitation of at least . These climates usually occur within 10° latitude of the equator. This climate has no natural seasons in terms of thermal and moisture changes. When it is dominated most of the year by the doldrums low-pressure system due to the presence of the Intertropical Convergence Zone (ITCZ) and when there are no cyclones then the climate is qualified as equatorial. When the trade winds dominate most of the year, the climate is a tropical trade-wind rainforest climate. Examples Alofi, Niue, New Zealand Antalaha, Madagascar Apia, Samoa Atuona, Hiva Oa, French Polynesia Avarua, Cook Islands Bandar Seri Begawan, Brunei Bluefields, Nicaragua Bocas del Toro, Panama Boende, Democratic Republic of the Congo Buenaventura, Colombia Castries, Saint Lucia (bordering on Am) Changuinola, Panama Cocos Island, Costa Rica Colombo, Sri Lanka Davao, Philippines Easter Island, Chile Fort Lauderdale, Florida, United States (bordering on Am) Funafuti, Tuvalu Georgetown, Guyana Hagåtña, Guam Hamilton, Bermuda (bordering on Cfa) Higüey, Dominican Republic (bordering on Am) Hilo, Hawaii, United States Honiara, Solomon Islands Innisfail, Queensland, Australia Ipoh, Malaysia Iquitos, Peru Ishigaki, Japan Johor Bahru, Malaysia Kampala, Uganda Kisumu, Kenya Koror, Palau Kuala Lumpur, Malaysia Kuching, Malaysia Kurunegala, Sri Lanka (bordering on Am) La Ceiba, Honduras Lae, Papua New Guinea Majuro, Marshall Islands Manaus, Brazil Mata Utu, Wallis and Futuna, French Polynesia Medan, Indonesia Moroni, Comoros Nakhon Si Thammarat, Thailand Narathiwat, Thailand (bordering on Am) Nuku'alofa, Tonga Orchid Island, Taiwan Padang, Indonesia Pago Pago, American Samoa Palembang, Indonesia Palikir, Micronesia Paramaribo, Suriname Papeete, Tahiti, French Polynesia Pitcairn Island, United Kingdom Pointe-à-Pitre, Guadeloupe (bordering on Am) Polomolok, Philippines Port Antonio, Jamaica Port Vila, Vanuatu Puerto Barrios, Guatemala Punta Gorda, Belize Puyo, Ecuador Quibdó, Colombia Ratnapura, Sri Lanka Saint-Laurent-du-Maroni, French Guiana Salvador da Bahia, Brazil Santos, Brazil Singapore Sri Jayawardenepura Kotte, Sri Lanka (bordering on Am) St. 
George's, Grenada Suva, Fiji Tabubil, Papua New Guinea Tacloban, Philippines Tarawa, Kiribati Toamasina, Madagascar Tubuai, Austral Islands, France Victoria, Seychelles Villa Tunari, Bolivia West Palm Beach, Florida, United States (bordering on Am) Yaren, Nauru Some of the places with this climate are indeed uniformly and monotonously wet throughout the year (e.g., the northwest Pacific coast of South and Central America, from Ecuador to Costa Rica; see, for instance, Andagoya, Colombia), but in many cases the period of higher sun and longer days is distinctly wettest (as at Palembang, Indonesia), while in others the time of lower sun and shorter days has more rain (as at Sitiawan, Malaysia). Among these places, some have a pure equatorial climate (Balikpapan, Kuala Lumpur, Kuching, Lae, Medan, Paramaribo, Pontianak, and Singapore), with the dominant ITCZ aerological mechanism and no cyclones, while others have a subequatorial climate with occasional hurricanes (Davao, Ratnapura, Victoria). (The term aseasonal refers to the absence, in the tropical zone, of large differences in daylight hours and mean monthly (or daily) temperature through the year. Annual cyclic changes do occur in the tropics, but less predictably than in the temperate zone, and they are tied not to temperature but to water availability, whether as rain, mist, soil moisture, or groundwater. Plant responses (e.g., phenology), animal responses (feeding, migration, reproduction, etc.), and human activities (sowing, harvesting, hunting, fishing, etc.) are tuned to this 'seasonality'. Indeed, in tropical South America and Central America, the 'rainy season' (and the 'high water season') is called (Spanish) or (Portuguese), though it may occur in the Northern Hemisphere summer; likewise, the 'dry season' (and 'low water season') is called or , and can occur in the Northern Hemisphere winter.) Am: Tropical monsoon climate This type of climate results from the monsoon winds, which change direction according to the seasons. This climate has a driest month (which nearly always occurs at or soon after the "winter" solstice for that side of the equator) with rainfall less than , but at least of average monthly precipitation. 
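The A-group subdivision described above amounts to a simple threshold test on monthly station data. Because the numeric values were dropped from this copy of the text, the short Python sketch below uses the figures usually quoted for the Köppen criteria (every month at least 18 °C for group A; at least 60 mm in every month for Af; a driest month below 60 mm but at least 100 − annual/25 mm for Am). The function name and constants are illustrative assumptions, not the article's own wording.

```python
def tropical_subtype(temps_c, precip_mm):
    """Classify a tropical (group A) station from 12 monthly mean
    temperatures (deg C) and 12 monthly precipitation totals (mm).

    Thresholds are the commonly quoted Koeppen values, assumed here
    because the figures were lost from this copy of the text.
    """
    if min(temps_c) < 18.0:                 # assumed A-group test: no month below 18 degC
        raise ValueError("not a group A (tropical) climate")

    driest = min(precip_mm)
    annual = sum(precip_mm)
    monsoon_floor = 100.0 - annual / 25.0   # assumed Am threshold: 100 - annual/25

    if driest >= 60.0:                      # assumed Af rule: every month at least 60 mm
        return "Af"
    if driest >= monsoon_floor:             # dry month, but compensated by a very wet year
        return "Am"
    return "Aw/As"                          # savanna; w vs s depends on when the dry season falls


# Example: a station with a short dry season but a very wet year classifies as Am.
if __name__ == "__main__":
    temps = [26, 26, 27, 27, 27, 26, 26, 26, 27, 27, 27, 26]
    rain = [300, 250, 200, 150, 50, 30, 40, 60, 120, 200, 280, 320]
    print(tropical_subtype(temps, rain))    # -> "Am" (annual 2000 mm, driest 30 mm >= 100 - 80)
```

Distinguishing Aw from As then depends only on whether the dry months fall in the low-sun or the high-sun half of the year, as the following subsections describe.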
Examples Alor Setar, Malaysia Aracaju, Brazil Baguio, Philippines (bordering on Cwb) Bandung, Indonesia (bordering on Af) Barrancabermeja, Colombia Basseterre, Saint Kitts and Nevis Bata, Equatorial Guinea Batticaloa, Sri Lanka (bordering on As) Belmopan, Belize Cà Mau, Vietnam Cali, Colombia Cairns, Queensland, Australia Cayenne, French Guiana (bordering on Af) Chichijima, Japan (bordering on Aw and Cfa) Chittagong, Bangladesh Christmas Island, Australia Coatzacoalcos, Veracruz, Mexico Conakry, Guinea Curepipe, Mauritius Da Nang, Vietnam David, Panama Douala, Cameroon Freetown, Sierra Leone Fort Myers, Florida, United States (bordering on Cfa) Guanare, Venezuela Hat Yai, Thailand (bordering on Aw) Huế, Thừa Thiên–Huế, Vietnam Jakarta, Indonesia Kisangani, Democratic Republic of the Congo Kochi, Kerala, India Ko Samui, Thailand (bordering on Af) Langkawi, Malaysia Libreville, Gabon Maceió, Brazil Makassar, Indonesia Managua, Nicaragua Malabo, Equatorial Guinea Malé, Maldives Mangalore, Karnataka, India Manila, Philippines Mérida, Venezuela Miami, Florida, United States Monrovia, Liberia Nassau, The Bahamas (bordering on Aw) Panama City, Panama Pattani, Thailand Phuntsholing, Bhutan (bordering on Cwa) Pingtung, Taiwan Port Harcourt, Rivers State, Nigeria Port of Spain, Trinidad and Tobago Pucallpa, Peru Puerto Ayacucho, Venezuela Puerto Maldonado, Peru Qionghai, China Quezon City, Philippines Recife, Pernambuco, Brazil Roseau, Dominica Saipan, Northern Mariana Islands, United States (bordering on Af) San Juan, Puerto Rico Santo Domingo, Dominican Republic Sihanoukville, Cambodia Sylhet, Bangladesh (bordering on Cwa) Taitung, Taiwan Thiruvananthapuram, Kerala, India Trinidad, Bolivia Villahermosa, Mexico Wanning, China Wenchang, China Yangon, Myanmar Zanzibar City, Tanzania Aw/As: Tropical savanna climate Aw: Tropical savanna climate with dry winters Aw climates have a pronounced dry season, with the driest month having precipitation less than and less than of average monthly precipitation. 
Examples Abidjan, Ivory Coast Abuja, Nigeria Bahir Dar, Ethiopia (bordering on Cwb) Bamako, Mali Bangkok, Thailand Bangui, Central African Republic Banjul, The Gambia Barranquilla, Colombia Belo Horizonte, Brazil Bhubaneswar, Odisha, India Bissau, Guinea-Bissau Bobo-Dioulasso, Burkina Faso Brasília, Brazil Brazzaville, Republic of the Congo Bridgetown, Barbados Bujumbura, Burundi Cancún, Quintana Roo, Mexico (bordering on Am) Caracas, Venezuela Cartagena, Colombia Chennai, Tamil Nadu, India Chipata, Zambia Chinandega, Nicaragua Cotonou, Benin Cuernavaca, Mexico (bordering on Cwa) Dar es Salaam, Tanzania Darwin, Northern Territory, Australia Dhaka, Bangladesh Dili, East Timor Dongfang, Hainan, China Guatemala City, Guatemala (bordering on Cwa) Guayaquil, Ecuador Haikou, Hainan, China (bordering on Cwa) Havana, Cuba (bordering on Af) Ho Chi Minh City, Vietnam Hyderabad, Telangana, India (bordering on BSh) Juba, South Sudan Kano, Nigeria Kaohsiung, Taiwan Key West, Florida, United States Khulna, Bangladesh Kigali, Rwanda Kingston, Jamaica (bordering on BSh) Kinshasa, Democratic Republic of Congo Kolkata, West Bengal, India Kumasi, Ghana Kupang, Indonesia Lagos, Nigeria Lomé, Togo Malanje, Angola (bordering on Cwa and Cwb) Managua, Nicaragua Mandalay, Myanmar (bordering on BSh) Maputo, Mozambique (bordering on BSh) Minamitorishima, Japan Moundou, Chad Mumbai, Maharashtra, India (bordering on Am) Naples, Florida, United States Phnom Penh, Cambodia Port-au-Prince, Haiti Port Louis, Mauritius Port Moresby, Papua New Guinea Porto-Novo, Benin Rio de Janeiro, Brazil (bordering on Am) San Pedro Sula, Honduras (bordering on Am) San Cristóbal Island, Ecuador San José, Costa Rica San Salvador, El Salvador Sansha, Hainan, China Santa Cruz de la Sierra, Bolivia (bordering on Af) Santiago de Cuba, Cuba Sanya, Hainan, China St. John's, Antigua and Barbuda Surabaya, Indonesia Tangail, Bangladesh Tegucigalpa, Honduras Townsville, Queensland, Australia Veracruz, Veracruz, Mexico Vientiane, Laos Wake Island, United States Yaoundé, Cameroon Ziguinchor, Senegal Most places that have this climate are found at the outer margins of the tropical zone from the low teens to the mid-20s latitudes, but occasionally an inner-tropical location (e.g., San Marcos, Antioquia, Colombia) also qualifies. The Caribbean coast, eastward from the Gulf of Urabá on the Colombia–Panama border to the Orinoco River delta, on the Atlantic Ocean (about ), have long dry periods (the extreme is the BWh climate (see below), characterized by very low, unreliable precipitation, present, for instance, in extensive areas in the Guajira, and Coro, western Venezuela, the northernmost peninsulas in South America, which receive < total annual precipitation, practically all in two or three months). This condition extends to the Lesser Antilles and Greater Antilles forming the circum-Caribbean dry belt. The length and severity of the dry season diminish inland (southward); at the latitude of the Amazon River—which flows eastward, just south of the equatorial line—the climate is Af. East from the Andes, between the dry, arid Caribbean and the ever-wet Amazon are the Orinoco River's Llanos or savannas, from where this climate takes its name. As: Tropical savanna climate with dry-summers Sometimes As is used in place of Aw if the dry season occurs during the time of higher sun and longer days (during summer). 
This is the case in parts of Hawaii, northwestern Dominican Republic, East Africa, southeast India and northeast Sri Lanka, and the Brazilian Northeastern Coast. In places that have this climate type, rain shadow effects during the 'high-sun' part of the year make that period the dry one, whereas in most other tropical savanna locations the dry season falls at the time of lower sun and shorter days. Examples Cape Coast, Ghana (both Aw/As) Chennai, Tamil Nadu, India (bordering on Aw) Fortaleza, Brazil Jaffna, Sri Lanka Kapalua, Hawaii, United States Lanai City, Hawaii, United States Mombasa, Kenya Natal, Rio Grande do Norte, Brazil Nha Trang, Vietnam Nouméa, New Caledonia São Tomé, São Tomé and Principe Trincomalee, Sri Lanka Group B: Arid (desert and semi-arid) climates These climates are characterized by annual precipitation lower than a threshold value that approximates the potential evapotranspiration. The threshold value (in millimeters) is calculated as follows: multiply the average annual temperature in °C by 20, then add an amount that depends on how much of the annual precipitation falls in the high-sun half of the year (commonly cited values for this addition are shown in the sketch below). According to the modified Köppen classification system used by modern climatologists, total precipitation in the warmest six months of the year is taken as a reference instead of the total precipitation in the high-sun half of the year. If the annual precipitation is less than 50% of this threshold, the classification is BW (arid: desert climate); if it is in the range of 50%–100% of the threshold, the classification is BS (semi-arid: steppe climate). A third letter can be included to indicate temperature. Here, h signifies low-latitude climate (average annual temperature above 18 °C) while k signifies middle-latitude climate (average annual temperature below 18 °C). Desert areas situated along the west coasts of continents at tropical or near-tropical locations are characterized by frequent fog and low clouds; although these places rank among the driest on Earth in terms of actual precipitation received, they can be labeled BWn, with the n denoting a climate characterized by frequent fog. An equivalent BSn category can be found in foggy coastal steppes. 
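Since the amounts to add were lost from this copy, the sketch below assumes the values commonly cited for the Köppen–Geiger scheme: add 280 mm when 70% or more of the precipitation falls in the high-sun (summer) half of the year, 140 mm when 30–70% does, and nothing otherwise, with 18 °C separating h from k. The helper name and constants are therefore assumptions, illustrating the BW/BS test described above rather than reproducing the article's exact figures.

```python
def arid_class(mean_annual_temp_c, annual_precip_mm, high_sun_fraction):
    """Return the Koeppen group B code for a station, or None if it is not arid.

    high_sun_fraction: share of annual precipitation falling in the
    high-sun half of the year. The 280/140/0 adjustment and the 18 degC
    h/k split are assumed standard values, since the figures were lost
    from this copy of the text.
    """
    threshold = 20.0 * mean_annual_temp_c
    if high_sun_fraction >= 0.70:
        threshold += 280.0
    elif high_sun_fraction >= 0.30:
        threshold += 140.0
    # else: add nothing (precipitation concentrated in the low-sun half)

    if annual_precip_mm >= threshold:
        return None                          # humid enough: not a group B climate
    first = "BW" if annual_precip_mm < 0.5 * threshold else "BS"
    second = "h" if mean_annual_temp_c >= 18.0 else "k"
    return first + second


# Example: 24 degC mean, 150 mm/yr, rain mostly in winter -> hot desert (BWh).
if __name__ == "__main__":
    print(arid_class(24.0, 150.0, high_sun_fraction=0.2))   # -> "BWh"
```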
BW: Arid climates BWh: Hot deserts ʽAziziya, Jafara, Libya Aden, Yemen Agadez, Niger Ahvaz, Khuzestan, Iran Alice Springs, Northern Territory, Australia Almería, Andalusia, Spain (bordering on BSh) Arica, Chile Ascension Island, United Kingdom Baghdad, Iraq Biskra, Algeria Cairo, Egypt Cartagena, Murcia, Spain Coober Pedy, Australia Dallol, Ethiopia, location of the hottest average annual temperature on Earth Death Valley, California, United States, location of the hottest air temperature ever reliably recorded on Earth Djibouti City, Djibouti Doha, Qatar Dubai, United Arab Emirates Eilat, Southern District, Israel Faya-Largeau, Chad Fuerteventura, Canary Islands, Spain Gabès, Tunisia (bordering on BSh) Hermosillo, Sonora, Mexico (bordering on BSh) Iquique, Chile (bordering on BWk) Jalalabad, Nangarhar, Afghanistan Jamestown, Saint Helena, United Kingdom Jodhpur, Rajasthan, India Karachi, Pakistan Keetmanshoop, Namibia Khartoum, Sudan Kufra, Libya Kuwait City, Kuwait Laayoune, Western Sahara Lanzarote, Canary Islands, Spain Las Vegas, Nevada, United States Lima, Peru Luxor, Egypt Manama, Bahrain Mary, Turkmenistan Mecca, Makkah Region, Saudi Arabia Mexicali, Baja California, Mexico Moçâmedes, Angola Muscat, Oman Nouakchott, Mauritania Phoenix, Arizona, United States Praia, Cape Verde Punto Fijo, Venezuela Qom, Iran Riyadh, Saudi Arabia Sabha, Libya Semnan, Iran (bordering on BWk) Sharm El Sheikh, Egypt Tamanrasset, Algeria Trujillo, Peru Timbuktu, Mali Upington, Northern Cape, South Africa Yazd, Iran BWk: Cold deserts Aktau, Kazakhstan Antofagasta, Chile Aral, Kazakhstan Arequipa, Peru Ashgabat, Turkmenistan Bamyan, Afghanistan Ciudad Juárez, Chihuahua, Mexico (bordering on BWh) Dalanzadgad, Mongolia Damascus, Syria Golmud, Qinghai, China Isfahan, Iran Kerki, Uzbekistan (bordering on BWh) Kerman, Iran Khovd, Mongolia Kingman, Arizona, United States Kyzylorda, Kazakhstan Las Cruces, New Mexico, United States (bordering on BWh) Leh, India Lorca, Spain (bordering on BWh) Mendoza, Argentina Naâma, Algeria (bordering on BSk) Neuquén, Argentina Nukus, Karakalpakstan, Uzbekistan Ölgii, Mongolia St. 
George, Utah, United States (bordering on BWh) San Juan, Argentina (BWk/BWh) Sanaa, Yemen (bordering on BSk) Swakopmund, Namibia Tabernas, Spain (bordering on BWh) Turpan, Xinjiang, China Walvis Bay, Erongo Region, Namibia Yinchuan, Ningxia, China BS: Semi-arid (steppe) climates BSh: Hot semi-arid Ahmedabad, Gujarat, India (bordering on Aw) Airolaf, Djibouti Accra, Ghana (bordering on Aw) Aguascalientes (city), Mexico Alexandria, Egypt (bordering on BWh) Alicante, Spain Barquisimeto, Venezuela Broome, Western Australia, Australia Bulawayo, Zimbabwe Bushehr, Iran Coimbatore, Tamil Nadu, India Dakar, Senegal Dezful, Iran Gaborone, Botswana Hargeisa, Somaliland Honolulu, Hawaii, United States Kandahar, Afghanistan (bordering on BWh) Kimberley, Northern Cape, South Africa Kiritimati, Kiribati Kurnool, Andhra Pradesh, India Lahore, Punjab, Pakistan (bordering on Cwa) Lampedusa, Sicily, Italy Los Angeles, California, United States (bordering on Csa) Luanda, Angola Mafikeng, South Africa Malakal, South Sudan Maracaibo, Venezuela Marrakesh, Morocco Mogadishu, Somalia Monte Cristi, Dominican Republic (bordering on As) Monterrey, Mexico (bordering on Cfa) Mosul, Nineveh, Iraq (bordering on Csa) Mount Isa, Queensland, Australia Murcia, Spain N'Djamena, Chad Niamey, Niger Nicosia, Cyprus Odessa, Texas, United States (bordering on BSk) Oranjestad, Aruba Ouagadougou, Burkina Faso Patos, Paraíba, Brazil Petrolina, Pernambuco, Brazil Piraeus, Greece Polokwane, South Africa Querétaro City, Querétaro, Mexico Santiago del Estero, Argentina Sfax, Tunisia Shiraz, Iran (bordering on BSk) Toliara, Madagascar Tripoli, Libya Valencia, Spain (bordering on Csa) Windhoek, Namibia Yuanmou, Yunnan, China BSk: Cold semi-arid Albacete, Spain Aleppo, Syria Alexandra, New Zealand (bordering on Cfb) Amman, Jordan (bordering on BSh and Csa) Ankara, Turkey (bordering on Csa) Asmara, Eritrea Astrakhan, Russia Atyrau, Kazakhstan (bordering on BWk) Baku, Azerbaijan (bordering on BWk) Batna, Algeria Bloemfontein, South Africa Boise, Idaho, United States Choibalsan, Mongolia Cochabamba, Bolivia Comodoro Rivadavia, Argentina Daraa, Syria (bordering on BSh) Denver, Colorado, United States Dhamar, Yemen Essaouira, Morocco (bordering on BSh) Gevgelija, North Macedonia Herat, Afghanistan Kabul, Afghanistan Kalgoorlie, Western Australia, Australia (bordering on BSh/BWh/BWk) Kamloops, British Columbia, Canada Karaj, Iran Konya, Turkey Kyzyl, Tuva, Russia (bordering on Dwb) L'Agulhas, Western Cape, South Africa La Quiaca, Jujuy, Argentina Lethbridge, Alberta, Canada (bordering on Dfb) Lhasa, Tibet, China (bordering on Cwb and Dwb) Madrid, Spain Mashhad, Iran Mazar-i-Sharif, Balkh, Afghanistan (bordering on BSh/BWh/BWk) Medicine Hat, Alberta, Canada (bordering on Dfb) Mildura, Victoria, Australia (bordering on BSh) Mörön, Mongolia Navoiy, Uzbekistan (bordering on BWk) Pachuca, Hidalgo, Mexico Quetta, Pakistan Reno, Nevada, United States Saiq, Oman Samarkand, Uzbekistan Santiago, Chile Shijiazhuang, Hebei, China Skardu, Pakistan Sulina, Romania Tabriz, Iran (bordering on Dsa) Taraz, Kazakhstan Tehran, Iran (bordering on BSh and Csa) Thala, Tunisia (bordering on Csa) Thessaloniki, Greece (bordering on BSh/Cfa/Csa) Tianjin, China (bordering on Dwa) Turkistan, Kazakhstan Ulaanbaatar, Mongolia (bordering on Dwb and Dwc) Ulan-Ude, Buryatia, Russia (bordering on Dwb and Dwc) Viedma, Argentina Yerevan, Armenia (bordering on Dfa) Zacatecas City, Zacatecas, Mexico Zaragoza, Spain Group C: Temperate/mesothermal climates In the Köppen climate system, 
temperate climates are defined as having an average temperature above (or , as noted previously) in their coldest month but below . The average temperature of roughly coincides with the equatorward limit of frozen ground and snow cover lasting for a month or more. The second letter indicates the precipitation pattern—w indicates dry winters (driest winter month average precipitation less than one-tenth wettest summer month average precipitation). s indicates at least three times as much rain in the wettest month of winter as in the driest month of summer. f means significant precipitation in all seasons (neither above-mentioned set of conditions fulfilled). The third letter indicates the degree of summer heat—a indicates warmest month average temperature above while b indicates warmest month averaging below 22 °C but with at least four months averaging above , and c indicates one to three months averaging above . Cs: Mediterranean-type climates Csa: Hot-summer Mediterranean climates These climates usually occur on the western sides of continents between the latitudes of 30° and 45°. These climates are in the polar front region in winter, and thus have moderate temperatures and changeable, rainy weather. Summers are hot and dry, due to the domination of the subtropical high-pressure systems, except in the immediate coastal areas, where summers are milder due to the nearby presence of cold ocean currents that may bring fog but prevent rain. Examples Adelaide, Australia Algiers, Algeria Angra do Heroísmo, Terceira Island, Portugal (bordering on Csb/Cfa/Cfb) Antalya, Turkey Athens, Greece (bordering on BSh) Barcelona, Spain (bordering on Cfa) Beirut, Lebanon Casablanca, Morocco Chitral, Pakistan (bordering on BSk) Dubrovnik, Croatia Dushanbe, Tajikistan Erbil, Iraq Faro, Portugal Fez, Morocco Funchal, Portugal (bordering on As) Gibraltar Heraklion, Greece Homs, Syria Ilam, Iran Irbid, Jordan İzmir, Turkey Jerusalem, Israel Kardzhali, Bulgaria (bordering on Cfa) Kermanshah, Iran Latakia, Syria Lisbon, Portugal Marseille, France Maymana, Afghanistan Menorca, Balearic Islands, Spain Mersin, Turkey Monaco Naples, Italy (bordering on Cfa) Nice, France Novorossiysk, Krasnodar Krai, Russia (bordering on Cfa) Palermo, Italy Patras, Greece Perth, Western Australia, Australia Podgorica, Montenegro (bordering on Cfa) Prodromos, Cyprus Provo, Utah, United States (bordering on Dsa) Rome, Italy Sacramento, California, United States Seville, Spain Sanandaj, Iran (bordering on Dsa) Shkodër, Albania (bordering on Cfa) Split, Croatia Tangier, Morocco Tashkent, Uzbekistan (bordering on BSk) Tecate, Baja California, Mexico Tel Aviv, Israel Tlemcen, Algeria Tunis, Tunisia Urfa, Turkey Valletta, Malta Vatican City Walla Walla, Washington, United States Zhetisay, Kazakhstan (bordering on Dsa and BSk) Csb: Warm-summer Mediterranean climates Dry-summer climates sometimes extend to additional areas where the warmest month average temperatures do not reach , most often in the 40s latitudes. These climates are classified as Csb. 
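The second-letter (s, w, f) and third-letter (a, b, c, and d in group D) rules described above can be expressed as two small helpers. The Python sketch below is hedged: the 40 mm cap on the driest summer month for s, the 22 °C and 10 °C summer cut-offs, and the −38 °C cut-off for d are the values commonly used and are assumed here because they do not all survive in this copy of the text; the function names are illustrative.

```python
def seasonal_letter(wettest_winter, driest_winter, wettest_summer, driest_summer):
    """Second Koeppen letter (s, w or f) from monthly precipitation extremes in mm.

    The 3x and 10x ratios follow the rules stated in the text; the 40 mm
    cap on the driest summer month for 's' is an extra condition often
    applied and is assumed here.
    """
    if wettest_winter >= 3.0 * driest_summer and driest_summer < 40.0:
        return "s"                                # dry-summer (Mediterranean-type) pattern
    if wettest_summer >= 10.0 * driest_winter:
        return "w"                                # dry-winter (monsoon-influenced) pattern
    return "f"                                    # no dry season


def heat_letter(monthly_temps_c, group="C"):
    """Third Koeppen letter for group C or D from 12 monthly mean temperatures."""
    warm_months = sum(1 for t in monthly_temps_c if t >= 10.0)
    if max(monthly_temps_c) >= 22.0 and warm_months >= 4:
        return "a"                                # hot summer
    if warm_months >= 4:
        return "b"                                # warm summer
    if group == "D" and min(monthly_temps_c) < -38.0:
        return "d"                                # severe-winter variant of the short 'c' summer
    return "c"                                    # short, cool summer (1-3 months above 10 degC)


# Example: wet winters, dry summers and a 24 degC warmest month give the letters "sa" (e.g. Csa).
if __name__ == "__main__":
    temps = [8, 9, 11, 13, 17, 21, 24, 24, 21, 17, 12, 9]
    print(seasonal_letter(wettest_winter=90, driest_winter=60,
                          wettest_summer=20, driest_summer=5) +
          heat_letter(temps, group="C"))          # -> "sa"
```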
Examples Albany, Western Australia, Australia Aluminé, Neuquén Province, Argentina Bayda, Libya Cape Town, South Africa (bordering on Csa) Concepción, Chile Guarda, Portugal Ibarra, Ecuador Ipiales, Colombia (bordering on Cfb) Karlskrona, Sweden (bordering on Cfb) Korçë, Albania (bordering on Dsb) Kütahya, Turkey (bordering on Dsb) León, Spain Linares, Chile Lonquimay, Araucanía Region, Chile (bordering on Cfb) Mount Gambier, South Australia, Australia Nakuru, Kenya Nanaimo, British Columbia, Canada Ohrid, North Macedonia Pasto, Colombia Port Lincoln, South Australia, Australia (bordering on Cfb) Portland, Oregon, United States Porto, Portugal Rieti, Italy Salamanca, Spain San Carlos de Bariloche, Argentina San Cristóbal de la Laguna, Spain San Francisco, California, United States San Martín de los Andes, Neuquén Province, Argentina (bordering on Cfb) Segovia, Spain Seattle, Washington, United States Siah Bisheh, Iran Sintra, Portugal Tulcán, Ecuador (bordering on Cfb) Valladolid, Spain Victoria, British Columbia, Canada Csc: Cold-summer Mediterranean climates Cold summer Mediterranean climates (Csc) exist in high-elevation areas adjacent to coastal Csb climate areas, where the strong maritime influence prevents the average winter monthly temperature from dropping below . This climate is rare and is predominantly found in climate fringes and isolated areas of the Cascades and Andes Mountains, as the dry-summer climate extends further poleward in the Americas than elsewhere. Rare instances of this climate can be found in some coastal locations in the North Atlantic and at high altitudes in Hawaii. Examples Balmaceda, Chile (bordering on Csb) Haleakalā Summit, Hawaii, United States Liawenee, Australia (bordering on Csb/Cfb/Cfc) Røst, Norway (bordering on Cfc) Spirit Lake, Washington, United States (bordering on Dsc) Cfa: Humid subtropical climates These climates usually occur on the eastern coasts and eastern sides of continents, usually in the high 20s and 30s latitudes. Unlike the dry summer Mediterranean climates, humid subtropical climates have a warm and wet flow from the tropics that creates warm and moist conditions in the summer months. As such, summer (not winter as is the case in Mediterranean climates) is often the wettest season. The flow out of the subtropical highs and the summer monsoon creates a southerly flow from the tropics that brings warm and moist air to the lower east sides of continents. This flow is often what brings the frequent and strong but short-lived summer thundershowers so typical of the more southerly subtropical climates like the southeast United States, southern China, and Japan. 
Examples Astara, Azerbaijan (bordering on Csa) Asunción, Paraguay (bordering on Aw) Atlanta, Georgia, United States Balbalan, Philippines (bordering on Am) Bandar-e Anzali, Gilan, Iran Belgrade, Serbia Bologna, Italy Bratislava, Slovakia (bordering on Cfb/Dfa/Dfb) Brisbane, Queensland, Australia Budapest, Hungary (bordering on Dfa) Buenos Aires, Argentina Chongqing, China (bordering on Cwa) Ciudad del Este, Paraguay Constanța, Romania (bordering on BSk) Corvo Island, Portugal Dallas, Texas, United States Dir, Pakistan Durban, KwaZulu-Natal, South Africa Florianópolis, Santa Catarina, Brazil Florina, Greece (bordering on Dfa) Geoje, South Korea (bordering on Cwa) Giresun, Turkey Girona, Spain (bordering on Csa) Huesca, Spain Ijevan, Tavush, Armenia (bordering on Dfa) Jeju, South Korea Juan Fernández Islands, Chile (bordering on Cfb/Csa/Csb) Koper, Slovenia Kozani, Greece Krasnodar, Russia (bordering on Dfa) Kutaisi, Georgia La Plata, Argentina Lugano, Ticino, Switzerland (bordering on Cfb) Lyon, France (bordering on Cfb) Matamoros, Tamaulipas, Mexico (bordering on Aw) Maykop, Adygea, Russia (bordering on Dfa) Milan, Italy Montevideo, Uruguay Mostar, Bosnia and Herzegovina (bordering on Csa) New York City, New York, United States (bordering on Dfa) Osaka, Japan Porto Alegre, Rio Grande do Sul, Brazil Prizren, Kosovo (bordering on Cfb/Dfa/Dfb) Raoul Island, New Zealand Rasht, Gilan, Iran Rijeka, Croatia Rosario, Argentina (bordering on Cwa) Samsun, Turkey San Marino Sari, Mazandaran, Iran São Paulo, Brazil (bordering on Cwa) Siguatepeque, Honduras (bordering on Cwa) Shanghai, China Simferopol, Ukraine (bordering on Dfa) Skopje, North Macedonia (bordering on Dfa and BSk) Sochi, Russia Srinagar, Jammu and Kashmir, India Sydney, New South Wales, Australia Taipei, Taiwan Tbilisi, Georgia (bordering on BSk) Tirana, Albania (bordering on Csa) Tokyo, Japan Toulouse, France Tulcea, Romania (bordering on Dfa) Ulsan, South Korea Varna, Bulgaria Valence, France (bordering on Cfb) Venice, Italy Vienna, Austria (bordering on Cfb/Dfa) Wagga Wagga, New South Wales, Australia Washington, D.C., United States Wuhan, Hubei, China Yalta, Ukraine (bordering on Csa) Yokohama, Japan Zaqatala, Azerbaijan Zonguldak, Turkey (bordering on Cfb) Cfb: Oceanic climates Marine west coast climate Cfb climates usually occur in the higher middle latitudes on the western sides of continents; they are typically situated immediately poleward of the Mediterranean climates in the 40s and 50s latitudes. However, in southeast Australia, southeast South America, and extreme southern Africa this climate is found immediately poleward of temperate climates, on places near the coast and at a somewhat lower latitude. In western Europe, this climate occurs in coastal areas up to 68°N in Norway. These climates are dominated all year round by the polar front, leading to changeable, often overcast weather. Summers are mild due to cool ocean currents. Winters are milder than other climates in similar latitudes, but usually very cloudy, and frequently wet. Cfb climates are also encountered at high elevations in certain subtropical and tropical areas, where the climate would be that of a subtropical/tropical rainforest if not for the altitude. These climates are called "highlands". 
Examples Amsterdam, North Holland, Netherlands Artvin, Turkey (bordering on Cfa/Csa/Csb) Auckland, New Zealand Baltiysk, Kaliningrad Oblast, Russia (bordering on Dfb) Belfast, Northern Ireland, United Kingdom Bergen, Vestland, Norway Berlin, Germany Bern, Switzerland (bordering on Dfb) Bilbao, Spain Block Island, Rhode Island, United States (bordering on Dfb) Bolu, Turkey Bordeaux, France (bordering on Cfa) Bornholm, Denmark Brussels, Belgium Caransebeş, Romania (bordering on Dfb) Cetinje, Montenegro (bordering on Dfb) Christchurch, New Zealand Copenhagen, Denmark Dublin, Ireland Forks, Washington, United States Frankfurt, Hesse, Germany Gdynia, Poland Geneva, Switzerland George, Western Cape, South Africa Gijón, Spain Glasgow, Scotland, United Kingdom Gothenburg, Sweden Győr, Hungary (bordering on Cfa/Dfa/Dfb) Hobart, Tasmania, Australia Île Amsterdam, French Southern and Antarctic Lands Île d'Yeu, France Ketchikan, Alaska, United States L'Aquila, Italy (bordering on Cfa) Ljubljana, Slovenia Lofoten, Nordland, Norway (bordering on Cfc/Dfb/Dfc) London, England, United Kingdom Luxembourg City, Luxembourg Malmö, Sweden Mar del Plata, Argentina Melbourne, Victoria, Australia (bordering on Cfa) Merano, Italy (bordering on Cfa) Munich, Bavaria, Germany Ørland, Trøndelag, Norway Osorno, Los Lagos Region, Chile Paris, France Port Elizabeth, South Africa Prague, Czech Republic (bordering on Dfb) Prince Rupert, British Columbia, Canada Puerto Montt, Los Lagos Region, Chile Puerto Natales, Chile (bordering on Cfc) Punta del Este, Uruguay (bordering on Cfa) Salzburg, Austria (bordering on Dfb) Santander, Spain Sarajevo, Bosnia and Herzegovina (bordering on Dfb) Skagen, Denmark Szczecin, Poland Vaduz, Liechtenstein Valdivia, Los Ríos Region, Chile Vancouver, British Columbia, Canada (bordering on Csb) Villa La Angostura, Neuquén Province, Argentina Wellington, New Zealand Wollongong, New South Wales, Australia (bordering on Cfa) Wrocław, Poland (bordering on Dfb) Zagreb, Croatia (bordering on Dfb) Zürich, Switzerland Subtropical highland climate with uniform rainfall Subtropical highland climates with uniform rainfall (Cfb) are a type of oceanic climate mainly found in the highlands of Australia, such as in or around the Great Dividing Range in the north of the state of New South Wales, and also sparsely in other continents, such as in South America, among others. Unlike a typical Cwb climate, they tend to have rainfall spread evenly throughout the year. They have characteristics of both the Cfb and Cfa climates, but unlike these climates, they have a high diurnal temperature variation and low humidity, owing to their inland location and relatively high elevation. 
Examples Andorra la Vella, Andorra Blue Mountains, Jamaica Bogotá, Colombia Briançon, France (bordering on Dfb) Brinchang, Malaysia Cameron Highlands, Malaysia Campos do Jordão, São Paulo, Brazil Canberra, Australian Capital Territory, Australia Chachapoyas, Peru Cobán, Guatemala (bordering on Cwb) Constanza, Dominican Republic Cuenca, Ecuador Curitiba, Paraná, Brazil Dullstroom, South Africa Eldoret, Kenya Goris, Syunik, Armenia (bordering on Dfb) Hengshan, China (bordering on Cfa/Dfa/Dfb) Kabale, Uganda Kodaikanal, Tamil Nadu, India Le Tampon, Réunion, France Lithgow, New South Wales, Australia La Esperanza, Honduras (bordering on Cwb) Manizales, Colombia Maseru, Lesotho (bordering on Cwb) Mthatha, South Africa Mucuchíes, Venezuela Mulia, Indonesia Nuwara Eliya, Sri Lanka Oruro, Bolivia Quito, Pichincha, Ecuador Riobamba, Ecuador (bordering on Csb) Sa Pa, Vietnam Soria, Spain Teresópolis, Rio de Janeiro, Brazil Trevico, Italy Tunja, Colombia Volcano, Hawaii, United States Wabag, Papua New Guinea Waynesville, North Carolina, United States Williams, Arizona, United States Xalapa, Veracruz, Mexico (bordering on Cfa) Cfc: Subpolar oceanic climate Subpolar oceanic climates (Cfc) occur poleward of or at higher elevations than the maritime temperate climates and are mostly confined either to narrow coastal strips on the western poleward margins of the continents, or, especially in the Northern Hemisphere, to islands off such coasts. They occur in both hemispheres, generally in the high 50s and 60s latitudes in the Northern Hemisphere and the 50s latitudes in the Southern Hemisphere. Examples Adak, Alaska, United States (bordering on Dfc) Auckland Islands, New Zealand Bø, Nordland, Norway (bordering on Cfb/Dfb/Dfc) Hafnarfjörður, Iceland (bordering on Dfc) Karlsøy, Norway (bordering on Dfc) Miena, Tasmania, Australia Mount Baw Baw, Australia Punta Arenas, Chile Porvenir, Chile Reykjavík, Iceland Río Grande, Tierra del Fuego, Argentina (bordering on BSk/Dfc/ET) Río Turbio, Santa Cruz, Argentina Stanley, Falkland Islands Tórshavn, Faroe Islands Unalaska, Alaska, United States Værøy, Norway (bordering on Csc) Vestmannaeyjar, Iceland (bordering on ET) Cw: Dry-winter subtropical climates Cwa: Dry-winter humid subtropical climate Cwa is a monsoonal influenced version of the humid subtropical climate, having the classic dry winter–wet summer pattern associated with tropical monsoonal climates. They are found at similar latitudes as the Cfa climates, except in regions where monsoons are more prevalent. These regions are in the Southern Cone of South America, the Gangetic Plain of South Asia, southeastern Africa, parts of East Asia and Mexico, and Northern Vietnam of Southeast Asia. 
Examples Antananarivo, Madagascar (bordering on Cwb) Birgunj, Nepal Busan, South Korea Changwon, South Korea (bordering on Cfa) Chengdu, Sichuan, China Chimoio, Mozambique Córdoba, Argentina Delhi, India (bordering on BSh) Dinajpur, Bangladesh Guadalajara, Jalisco, Mexico Guangzhou, Guangdong, China (bordering on Cfa) Guwahati, India (bordering on Aw) Hanoi, Vietnam Hong Kong Islamabad, Pakistan Kathmandu, Nepal León, Guanajuato, Mexico Lilongwe, Malawi Lubumbashi, Democratic Republic of Congo Lucknow, Uttar Pradesh, India Luena, Angola (bordering on Aw) Lusaka, Zambia Macau Mackay, Queensland, Australia (bordering on Aw) Mengla, Yunnan, China Ndola, Zambia Phonsavan, Laos Pokhara, Nepal Pretoria, Gauteng, South Africa Qingdao, Shandong, China (bordering on Dwa) Rangpur, Bangladesh Rawalpindi, Pakistan Saidpur, Bangladesh San Luis, Argentina (bordering on BSk) Santa Rosa de Copan, Honduras Sialkot, Pakistan Taunggyi, Myanmar Tucumán, Argentina Yeosu, South Korea Zapopan, Jalisco, Mexico Cwb: Dry-winter subtropical highland climate Dry-winter subtropical highland climate (Cwb) is a type of climate mainly found in highlands inside the tropics of Central America, South America, Africa, and South and Southeast Asia or areas in the subtropics. Winters are noticeable and dry, and summers can be very rainy. In the tropics, the monsoon is provoked by the tropical air masses and the dry winters by subtropical high pressure. Examples Addis Ababa, Ethiopia Arusha, Tanzania Batu, Indonesia Byumba, Rwanda Cajamarca, Peru Cherrapunji, India Cusco, Peru Da Lat, Vietnam Dali City, China Dedza, Malawi Diamantina, Brazil Dieng Plateau, Indonesia Fraijanes, Guatemala Gangtok, India Hakha, Myanmar Harare, Zimbabwe Huambo, Angola Huaraz, Peru Ixchiguán, Guatemala (bordering on Cwc) Jijiga, Ethiopia Johannesburg, South Africa Kenscoff, Haiti (bordering on Aw) Kunming, China La Paz (lower elevations), Bolivia La Trinidad, Philippines Lichinga, Mozambique Lukla, Nepal Mbabane, Eswatini Mbeya, Tanzania Mexico City, Mexico Mokhotlong, Lesotho Nairobi, Kenya Ndu, Cameroon Ooty, India Phongsali, Laos Puebla, Mexico Quetzaltenango, Guatemala Qujing, Yunnan, China (bordering on Cfb) Salta, Argentina Shimla, India Sucre, Bolivia Thimphu, Bhutan (bordering on Cwa) Toluca, Mexico Cwc: Dry-winter cold subtropical highland climate Dry-winter cold subtropical highland climates (Cwc) exist in high-elevation areas adjacent to Cwb climates. This climate is rare and is found mainly in isolated locations mostly in the Andes in Bolivia and Peru, as well as in sparse mountain locations in Southeast Asia. El Alto, Bolivia (bordering on ET) Juliaca, Peru (bordering on ET and Cwb) La Paz (high elevations), Bolivia (bordering on ET) Mount Pulag, Philippines (bordering on ET and Cwb) Potosí, Bolivia (bordering on ET and Cwb) Group D: Continental/microthermal climates These climates have an average temperature above in their warmest months, and the coldest month average below (or , as noted previously). These usually occur in the interiors of continents and on their upper east coasts, normally north of 40°N. In the Southern Hemisphere, group D climates are extremely rare due to the smaller land masses in the middle latitudes and the almost complete absence of land at 40–60°S, existing only in some highland locations. Dfa/Dwa/Dsa: Hot summer humid continental climates Dfa climates usually occur in the high 30s and low 40s latitudes, with a qualifying average temperature in the warmest month of greater than . 
In Europe, these climates tend to be much drier than in North America. Dsa exists at higher elevations adjacent to areas with hot summer Mediterranean (Csa) climates. These climates exist only in the Northern Hemisphere because the Southern Hemisphere has no large landmasses isolated from the moderating effects of the sea within the middle latitudes. Examples Aktobe, Kazakhstan Almaty, Kazakhstan Aomori, Japan (bordering on Cfa) Boston, Massachusetts, United States (bordering on Cfa) Bucharest, Romania (bordering on Cfa) Çankırı, Turkey (bordering on Cfa and BSk) Cheonan, South Korea (bordering on Dwa) Chicago, Illinois, United States Chișinău, Moldova Dnipro, Ukraine (bordering on Dfb) Donetsk, Ukraine Hakodate, Hokkaido, Japan (bordering on Dfb) Hamilton, Ontario, Canada (bordering on Dfb) Iași, Romania (bordering on Dfb) Kimchaek, North Korea (bordering on Dwa) Minneapolis, Minnesota, United States Odesa, Ukraine (bordering on Cfa and BSk) Oral, Kazakhstan (bordering on BSk) Pleven, Bulgaria Pogradec, Albania (bordering on Cfa/Cfb/Dfb) Qabala, Azerbaijan (bordering on Cfa/Cfb/Dfb) Rostov-on-Don, Russia Ruse, Bulgaria (bordering on Cfa) Sapporo, Hokkaido, Japan (bordering on Dfb) Saratov, Russia Szeged, Hungary (bordering on Cfa) Tanchon, North Korea (bordering on Dfb/Dwa/Dwb) Toronto, Ontario, Canada (bordering on Dfb) Ürümqi, Xinjiang, China (bordering on BSk) Volgograd, Russia (bordering on BSk) Windsor, Ontario, Canada Zaječar, Serbia (bordering on Cfa) In eastern Asia, Dwa climates extend further south into the mid-30s latitudes due to the influence of the Siberian high-pressure system, which also causes winters there to be dry, and summers can be very wet because of monsoon circulation. Examples Beijing, China (bordering on BSk) Blagoveshchensk, Amur Oblast, Russia (bordering on Dwb) Chongjin, North Korea Chuncheon, Gangwon Province, South Korea Harbin, Heilongjiang, China Incheon, South Korea Kaesong, North Korea Lesozavodsk, Primorsky Krai, Russia (bordering on Dwb) North Platte, Nebraska, United States (bordering on Dfa and BSk) Phillipsburg, Kansas, United States (bordering on Dfa) Pyongyang, North Korea Rapid City, South Dakota, United States (bordering on BSk) Seoul, South Korea Xi'an, Shaanxi, China (bordering on Cwa) Dsa exists only at higher elevations adjacent to areas with hot summer Mediterranean (Csa) climates. Examples Arak, Iran (bordering on BSk and Csa) Arys, Kazakhstan (bordering on BSk) Bishkek, Kyrgyzstan Bitlis, Turkey Cambridge, Idaho, United States Chirchiq, Uzbekistan (bordering on Csa) Fayzabad, Badakhshan, Afghanistan (bordering on Csa) Ghazni, Afghanistan Hakkâri, Turkey Hamedan, Iran (bordering on BSk) Isfara, Tajikistan Logan, Utah, United States Osh, Kyrgyzstan Puli Alam, Afghanistan Muş, Turkey Salt Lake City, Utah, United States (bordering on Csa) Saqqez, Iran Shymkent, Kazakhstan (bordering on Csa) Dfb/Dwb/Dsb: Warm summer humid continental/hemiboreal climates Dfb climates are immediately poleward of hot summer continental climates, generally in the high 40s and low 50s latitudes in North America and Asia, and also extending to higher latitudes into the high 50s and low 60s latitudes in central and eastern Europe, between the maritime temperate and continental subarctic climates. 
Examples Aetomilitsa, Greece (bordering on Cfb) Akhaltsikhe, Georgia Ardahan, Turkey Asahikawa, Hokkaido, Japan Astana, Kazakhstan Augsburg, Bavaria, Germany (bordering on Cfb) Badrinath, Uttarakhand, India (bordering on Cfb) Belluno, Italy (bordering on Cfb) Bitola, North Macedonia (bordering on Dfa) Briceni, Moldova Brno, Czech Republic Chamonix, France Cluj-Napoca, Romania Cortina d'Ampezzo, Italy Debrecen, Hungary (bordering on Cfa/Cfb/Dfa) Edmonton, Alberta, Canada El Pas de la Casa, Andorra (bordering on Dfc) Erzurum, Turkey Fairbanks, Alaska, United States (bordering on Dfc) Falls Creek, Victoria, Australia (bordering on Cfb/Cfc/Dfc) Falun, Dalarna, Sweden Görlitz, Saxony, Germany (bordering on Cfb) Gospić, Croatia (bordering on Cfb) Gyumri, Shirak, Armenia Helsinki, Finland Imilchil, Morocco (bordering on Cfb) Innsbruck, Austria Karaganda, Kazakhstan Karakol, Kyrgyzstan Kars, Turkey Klagenfurt, Austria Košice, Slovakia Kraków, Poland Kushiro, Hokkaido, Japan Kyiv, Ukraine La Brévine, Switzerland (bordering on Dfc) La Chaux-de-Fonds, Switzerland Lendava, Slovenia (bordering on Cfb) Lillehammer, Norway (bordering on Dfc) Livno, Bosnia and Herzegovina (bordering on Cfb) Lviv, Ukraine Marquette, Michigan, United States Miercurea Ciuc, Romania Minsk, Belarus Montreal, Quebec, Canada (bordering on Dfa) Moscow, Russia Mount Buller, Victoria, Australia (bordering on Cfb/Cfc/Dfc) Mouthe, France Mutsu, Aomori Prefecture, Japan (bordering on Cfb) Novosibirsk, Russia Oslo, Norway Ottawa, Ontario, Canada Pavlodar, Kazakhstan (bordering on Dfa) Perisher Valley, New South Wales, Australia (bordering on Cfb/Cfc/Dfc) Pljevlja, Montenegro Portland, Maine, United States Poznań, Poland (bordering on Cfb) Pristina, Kosovo Riga, Latvia Saint Petersburg, Russia Saint-Véran, France (bordering on Dfc) Schaan, Liechtenstein (bordering on Cfb) Sofia, Bulgaria (bordering on Cfb) Stockholm, Sweden Subotica, Serbia (bordering on BSk) Szombathely, Hungary (bordering on Cfb) Tallinn, Estonia Tampere, Finland (bordering on Dfc) Toblach, Italy Trondheim, Norway Turku, Finland Uppsala, Sweden Vanadzor, Armenia Vilnius, Lithuania Warsaw, Poland (bordering on Cfb) Žabljak, Montenegro Like with all Group D climates, Dwb climates mostly only occur in the northern hemisphere. Examples Baruunturuun, Mongolia (bordering on Dwc) Calgary, Alberta, Canada (bordering on Dfb and BSk) Darkhan, Mongolia (bordering on Dwc and BSk) Harrison, Nebraska, United States (bordering on Dfb) Heihe, Heilongjiang, China Hoeryong, North Korea Irkutsk, Russia (bordering on Dwc) Khabarovsk, Russia (bordering on Dwa) Pembina, North Dakota, United States (bordering on Dfb) Pyeongchang, South Korea Rason, North Korea Shigatse, Tibet, China Thief River Falls, Minnesota, United States (bordering on Dwa) Vladivostok, Primorsky Krai, Russia Yanji, Jilin, China (bordering on Dwa) Dsb arises from the same scenario as Dsa, but at even higher altitudes or latitudes, and chiefly in North America, since the Mediterranean climates extend further poleward than in Eurasia. 
Examples Abali, Iran (Dsb) Ağrı, Turkey Alto Río Senguer, Chubut Province, Argentina (bordering on BSk/Csb/Csc/Dsc) Castlegar, British Columbia, Canada (bordering on Dfb) Chaghcharan, Ghor, Afghanistan Dras, Ladakh, India Flagstaff, Arizona, United States (bordering on Csb) Jermuk, Vyots Dzor, Armenia (bordering on Dfb) Las Leñas, Mendoza Province, Argentina Maidan Shar, Afghanistan Puente del Inca, Mendoza Province, Argentina (bordering on Csb) Puerto de Navacerrada, Spain (bordering on Csb) Rocca di Mezzo, Italy (bordering on Csb and Cfb) Roghun, Tajikistan Sivas, Turkey Smolyan, Bulgaria South Lake Tahoe, California, United States (bordering on Csb) Spokane, Washington, United States (bordering on Csa/Csb/Dsa) Dfc/Dwc/Dsc: Subarctic/boreal climates Dfc, Dsc and Dwc climates occur poleward of the other group D climates, or at higher altitudes, generally in the 50s and 60s latitudes. Examples Dfc climates Alta, Norway Anchorage, Alaska, United States (bordering on Dfb) Arkhangelsk, Russia Brocken, Saxony-Anhalt, Germany Charlotte Pass, New South Wales, Australia Coma Pedrosa, Andorra Davos, Switzerland Feldberg, Baden-Württemberg, Germany Fraser, Colorado, United States Great Dun Fell, England, United Kingdom (bordering on ET and Cfc) Hikkim, Himachel Pradesh, India Horská Kvilda, Czech Republic Jyväskylä, Finland Kangerlussuaq, Greenland, Denmark (bordering on ET and BSk) Kiruna, Sweden Kopaonik, Serbia Labrador City, Newfoundland and Labrador, Canada Livigno, Italy Luleå, Sweden Lysá hora, Czech Republic Narsarsuaq, Greenland, Denmark (bordering on ET) Norilsk, Krasnoyarsk Krai, Russia Obergurgl, Austria Oulu, Finland (bordering on Dfb) Paganella, Italy (bordering on Dwc) Røros, Norway Saint Pierre and Miquelon, France (bordering on Dfb) St. Moritz, Grisons, Switzerland Šerák, Czech Republic Štrbské Pleso, Slovakia Tromsø, Norway Umeå, Sweden (bordering on Dfb) Vaasa, Finland Valle Nevado, Chile Whitehorse, Yukon, Canada Yakutsk, Sakha Republic, Russia (bordering on Dfd) Yellowknife, Northwest Territories, Canada Dwc climates Bulgan, Mongolia (bordering on Dwb) Delta Junction, Alaska, United States (bordering on BSk) Mohe, Heilongjiang, China Nagqu, Tibet, China (bordering on ET) Okhotsk, Khabarovsk Krai, Russia Samjiyon, North Korea (bordering on Dwb) Springbank Hill, Alberta, Canada (bordering on Dwb/Dfb/Dfc) Tsetserleg, Arkhangai Province, Mongolia Yushu City, Qinghai, China (bordering on Dwb) Dsc climates Akureyri, Iceland (bordering on Csc) Anadyr, Chukotka, Russia Crater Lake, Oregon, United States Dawson City, Yukon, Canada Inukjuak, Quebec, Canada Nyurba, Sakha Republic, Russia (bordering on Dfc) Skjåk, Norway (bordering on BSk) Soldotna, Alaska, United States Dfd/Dwd/Dsd: Subarctic/boreal climates with severe winters Places with this climate have severe winters, with the temperature in their coldest month lower than . These climates occur only in eastern Siberia, and are the second coldest, before EF. The coldest recorded temperatures in the Northern Hemisphere belonged to this climate. The names of some of the places with this climate have become veritable synonyms for the extreme, severe winter cold. 
Examples Dfd climates Okhotsky-Perevoz, Sakha Republic, Russia Oymyakon, Sakha Republic, Russia (bordering on Dwd) Dwd climates Delyankir, Sakha Republic, Russia Khonuu, Sakha Republic, Russia (bordering on Dfd) Dsd climates Verkhoyansk, Sakha Republic, Russia (bordering on Dfd) Group E: Polar climates In the Köppen climate system, polar climates are defined as the warmest temperature of any month being below . Polar climates are further divided into two types, tundra climates and icecap climates: ET: Tundra climate Tundra climate (ET): warmest month has an average temperature between and . These climates occur on the northern edges of the North American and Eurasian land masses (generally north of 70 °N although they may be found farther south depending on local conditions), and on nearby islands. ET climates are also found on some islands near the Antarctic Convergence, and at high elevations outside the polar regions, above the tree line. Examples Alert, Nunavut, Canada (bordering on EF) Amdo, Tibet, China Ben Nevis, Scotland, United Kingdom Bouvet Island Cairn Gorm, Scotland, United Kingdom Campbell Island, New Zealand Cerro de Pasco, Peru Crozet Islands Dikson Island, Russia Dingboche, Nepal El Aguilar, Argentina Esperanza Base, Antarctica Finse, Norway Ilulissat, Greenland, Denmark Iqaluit, Nunavut, Canada Ísafjörður, Iceland (bordering on Csc) Ittoqqortoormiit, Greenland, Denmark Jan Mayen, Norway Jungfraujoch, Switzerland (bordering on EF) Kasprowy Wierch, Poland Kerguelen Islands La Rinconada, Peru Lomnický štít, Prešov Region, Slovakia Longyearbyen, Svalbard, Norway Macquarie Island, Australia Mauna Loa, Hawaii, United States (bordering on Csc and Cfc) Möðrudalur, Iceland Mount Apo, Philippines (bordering on Cfc) Mount Aragats (slopes), Armenia (bordering on Dfc) Mount Fuji, Japan Mount Rainier (slopes), Washington, United States Mount Washington, New Hampshire, United States (bordering on Dfc) Mount Wellington, Tasmania, Australia Murghab, Tajikistan Musala, Bulgaria Mykines, Faroe Islands (bordering on Cfc) Mys Shmidta, Chukotka, Russia Nevado de Toluca, Mexico North Salang, Afghanistan (bordering on Dsc) Novaya Zemlya, Arkhangelsk Oblast, Russia Nuuk, Greenland, Denmark Parinacota, Chile Phari, China Piz Corvatsch, Switzerland Prince Edwards Islands Puerto Williams, Chile (bordering on Cfc) Qarabolaq, Afghanistan Sachs Harbour, Northwest Territories, Canada Shimshal, Pakistan Sněžka, Czech Republic (bordering on Dfc) Sonnblick, Austria South Georgia and the South Sandwich Islands Tanggulashan, Qinghai, China Tiksi, Sakha Republic, Russia Tolhuin, Tierra del Fuego, Argentina (bordering on Dfc) Trepalle, Italy Ushuaia, Tierra del Fuego, Argentina (bordering on Cfc) Utqiagvik, Alaska, United States Vârful Omu, Romania Vetas, Colombia Yu Shan, Taiwan Zugspitze, Bavaria, Germany EF: Ice cap climate Ice cap climate (EF): this climate is dominant in Antarctica, inner Greenland, and summits of many high mountains, even at lower latitudes. Monthly average temperatures never exceed . 
Examples Aconcagua, Chile/Argentina Amundsen–Scott Station, Antarctica Chimborazo, Ecuador Denali, Alaska, United States Dome Fuji, Antarctica Huascarán, Peru Ismoil Somoni Peak, Tajikistan Jengish Chokusu, China/Kyrgyzstan K2, China/Pakistan Kangchenjunga, India/Nepal Kilimanjaro, Tanzania Lhotse, Nepal Makalu, Nepal/China Mount Ararat, Turkey Mount Everest, China/Nepal Mount Logan, Canada Mount Rainier (summit), Washington, United States Ojos del Salado, Chile Pico de Orizaba, Mexico Puncak Jaya, Indonesia (bordering on ET) Sajama, Bolivia Summit Camp, Greenland, Denmark Ushakov Island, Russia (bordering on ET) Vostok Station, Antarctica Ecological significance Biomass The Köppen climate classification is based on the empirical relationship between climate and vegetation. This classification provides an efficient way to describe climatic conditions defined by temperature and precipitation and their seasonality with a single metric. Because climatic conditions identified by the Köppen classification are ecologically relevant, it has been widely used to map the geographic distribution of long-term climate and associated ecosystem conditions. Climate change Over recent years, there has been an increasing interest in using the classification to identify changes in climate and potential changes in vegetation over time. The most important ecological significance of the Köppen climate classification is that it helps to predict the dominant vegetation type based on the climatic data and vice versa. In 2015, a Nanjing University paper published in Scientific Reports analyzing climate classifications found that between 1950 and 2010, approximately 5.7% of all land area worldwide had moved from wetter and colder classifications to drier and hotter classifications. The authors also found that the change "cannot be explained as natural variations but are driven by anthropogenic factors". A 2018 study provides detailed maps for present and future Köppen-Geiger climate classification maps at 1-km resolution. Other Köppen climate maps All maps use the ≥ definition for the temperate-continental border.
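The temperature rules quoted above lend themselves to a compact sketch. The following Python fragment is purely illustrative and not part of the classification's formal definition: it uses the commonly cited thresholds (warmest month below 0 °C for EF, below 10 °C for ET, coldest month at or below 0 °C for the continental groups under the 0 °C convention, 22 °C and the four-month rule for the summer letters, and roughly −38 °C for the severe-winter letter d), while ignoring the precipitation letters f/s/w and the Group B aridity test; the function name is arbitrary.

def koppen_temperature_group(monthly_temps):
    # monthly_temps: twelve monthly mean temperatures in degrees Celsius
    warmest, coldest = max(monthly_temps), min(monthly_temps)
    months_at_least_10 = sum(1 for t in monthly_temps if t >= 10)

    if warmest < 0:
        return "EF"            # ice cap: no month averages above freezing
    if warmest < 10:
        return "ET"            # tundra: warmest month between 0 and 10 degrees C
    if coldest > 0:
        return "not D or E"    # coldest month above freezing: Group A, B or C

    # Third (summer/winter) letter for the continental Group D
    if warmest >= 22:
        third = "a"            # hot summer
    elif months_at_least_10 >= 4:
        third = "b"            # warm summer
    elif coldest <= -38:
        third = "d"            # severe winter
    else:
        third = "c"            # short, cool summer
    return "D_" + third        # "_" stands for the omitted precipitation letter f/s/w

# Example: long, very cold winter and a short, cool summer -> "D_c"
print(koppen_temperature_group([-25, -22, -12, 0, 8, 14, 17, 14, 8, 0, -12, -20]))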
Physical sciences
Climatology
null
23912155
https://en.wikipedia.org/wiki/Gauge%20theory
Gauge theory
In physics, a gauge theory is a type of field theory in which the Lagrangian, and hence the dynamics of the system itself, does not change under local transformations according to certain smooth families of operations (Lie groups). Formally, the Lagrangian is invariant under these transformations. The term gauge refers to any specific mathematical formalism to regulate redundant degrees of freedom in the Lagrangian of a physical system. The transformations between possible gauges, called gauge transformations, form a Lie group—referred to as the symmetry group or the gauge group of the theory. Associated with any Lie group is the Lie algebra of group generators. For each group generator there necessarily arises a corresponding field (usually a vector field) called the gauge field. Gauge fields are included in the Lagrangian to ensure its invariance under the local group transformations (called gauge invariance). When such a theory is quantized, the quanta of the gauge fields are called gauge bosons. If the symmetry group is non-commutative, then the gauge theory is referred to as non-abelian gauge theory, the usual example being the Yang–Mills theory. Many powerful theories in physics are described by Lagrangians that are invariant under some symmetry transformation groups. When they are invariant under a transformation identically performed at every point in the spacetime in which the physical processes occur, they are said to have a global symmetry. Local symmetry, the cornerstone of gauge theories, is a stronger constraint. In fact, a global symmetry is just a local symmetry whose group's parameters are fixed in spacetime (the same way a constant value can be understood as a function of a certain parameter, the output of which is always the same). Gauge theories are important as the successful field theories explaining the dynamics of elementary particles. Quantum electrodynamics is an abelian gauge theory with the symmetry group U(1) and has one gauge field, the electromagnetic four-potential, with the photon being the gauge boson. The Standard Model is a non-abelian gauge theory with the symmetry group U(1) × SU(2) × SU(3) and has a total of twelve gauge bosons: the photon, three weak bosons and eight gluons. Gauge theories are also important in explaining gravitation in the theory of general relativity. Its case is somewhat unusual in that the gauge field is a tensor, the Lanczos tensor. Theories of quantum gravity, beginning with gauge gravitation theory, also postulate the existence of a gauge boson known as the graviton. Gauge symmetries can be viewed as analogues of the principle of general covariance of general relativity in which the coordinate system can be chosen freely under arbitrary diffeomorphisms of spacetime. Both gauge invariance and diffeomorphism invariance reflect a redundancy in the description of the system. An alternative theory of gravitation, gauge theory gravity, replaces the principle of general covariance with a true gauge principle with new gauge fields. Historically, these ideas were first stated in the context of classical electromagnetism and later in general relativity. However, the modern importance of gauge symmetries appeared first in the relativistic quantum mechanics of electrons, namely quantum electrodynamics, elaborated on below. Today, gauge theories are useful in condensed matter, nuclear and high energy physics among other subfields. History The concept and the name of gauge theory derive from the work of Hermann Weyl in 1918. 
Weyl, in an attempt to generalize the geometrical ideas of general relativity to include electromagnetism, conjectured that Eichinvarianz or invariance under the change of scale (or "gauge") might also be a local symmetry of general relativity. After the development of quantum mechanics, Weyl, Vladimir Fock and Fritz London replaced the simple scale factor with a complex quantity and turned the scale transformation into a change of phase, which is a U(1) gauge symmetry. This explained the electromagnetic field effect on the wave function of a charged quantum mechanical particle. Weyl's 1929 paper introduced the modern concept of gauge invariance subsequently popularized by Wolfgang Pauli in his 1941 review. In retrospect, James Clerk Maxwell's formulation, in 1864–65, of electrodynamics in "A Dynamical Theory of the Electromagnetic Field" suggested the possibility of invariance, when he stated that any vector field whose curl vanishes—and can therefore normally be written as a gradient of a function—could be added to the vector potential without affecting the magnetic field. Similarly unnoticed, David Hilbert had derived the Einstein field equations by postulating the invariance of the action under a general coordinate transformation. The importance of these symmetry invariances remained unnoticed until Weyl's work. Inspired by Pauli's descriptions of connection between charge conservation and field theory driven by invariance, Chen Ning Yang sought a field theory for atomic nuclei binding based on conservation of nuclear isospin. In 1954, Yang and Robert Mills generalized the gauge invariance of electromagnetism, constructing a theory based on the action of the (non-abelian) SU(2) symmetry group on the isospin doublet of protons and neutrons. This is similar to the action of the U(1) group on the spinor fields of quantum electrodynamics. The Yang–Mills theory became the prototype theory to resolve some of the confusion in elementary particle physics. This idea later found application in the quantum field theory of the weak force, and its unification with electromagnetism in the electroweak theory. Gauge theories became even more attractive when it was realized that non-abelian gauge theories reproduced a feature called asymptotic freedom. Asymptotic freedom was believed to be an important characteristic of strong interactions. This motivated searching for a strong force gauge theory. This theory, now known as quantum chromodynamics, is a gauge theory with the action of the SU(3) group on the color triplet of quarks. The Standard Model unifies the description of electromagnetism, weak interactions and strong interactions in the language of gauge theory. In the 1970s, Michael Atiyah began studying the mathematics of solutions to the classical Yang–Mills equations. In 1983, Atiyah's student Simon Donaldson built on this work to show that the differentiable classification of smooth 4-manifolds is very different from their classification up to homeomorphism. Michael Freedman used Donaldson's work to exhibit exotic R4s, that is, exotic differentiable structures on Euclidean 4-dimensional space. This led to an increasing interest in gauge theory for its own sake, independent of its successes in fundamental physics. In 1994, Edward Witten and Nathan Seiberg invented gauge-theoretic techniques based on supersymmetry that enabled the calculation of certain topological invariants (the Seiberg–Witten invariants). 
These contributions to mathematics from gauge theory have led to a renewed interest in this area. The importance of gauge theories in physics is exemplified in the success of the mathematical formalism in providing a unified framework to describe the quantum field theories of electromagnetism, the weak force and the strong force. This theory, known as the Standard Model, accurately describes experimental predictions regarding three of the four fundamental forces of nature, and is a gauge theory with the gauge group SU(3) × SU(2) × U(1). Modern theories like string theory, as well as general relativity, are, in one way or another, gauge theories. See Jackson and Okun for early history of gauge and Pickering for more about the history of gauge and quantum field theories. Description Global and local symmetries Global symmetry In physics, the mathematical description of any physical situation usually contains excess degrees of freedom; the same physical situation is equally well described by many equivalent mathematical configurations. For instance, in Newtonian dynamics, if two configurations are related by a Galilean transformation (an inertial change of reference frame) they represent the same physical situation. These transformations form a group of "symmetries" of the theory, and a physical situation corresponds not to an individual mathematical configuration but to a class of configurations related to one another by this symmetry group. This idea can be generalized to include local as well as global symmetries, analogous to much more abstract "changes of coordinates" in a situation where there is no preferred "inertial" coordinate system that covers the entire physical system. A gauge theory is a mathematical model that has symmetries of this kind, together with a set of techniques for making physical predictions consistent with the symmetries of the model. Example of global symmetry When a quantity occurring in the mathematical configuration is not just a number but has some geometrical significance, such as a velocity or an axis of rotation, its representation as numbers arranged in a vector or matrix is also changed by a coordinate transformation. For instance, if one description of a pattern of fluid flow states that the fluid velocity in the neighborhood of (, ) is 1 m/s in the positive x direction, then a description of the same situation in which the coordinate system has been rotated clockwise by 90 degrees states that the fluid velocity in the neighborhood of (, ) is 1 m/s in the negative y direction. The coordinate transformation has affected both the coordinate system used to identify the location of the measurement and the basis in which its value is expressed. As long as this transformation is performed globally (affecting the coordinate basis in the same way at every point), the effect on values that represent the rate of change of some quantity along some path in space and time as it passes through point P is the same as the effect on values that are truly local to P. Local symmetry Use of fiber bundles to describe local symmetries In order to adequately describe physical situations in more complex theories, it is often necessary to introduce a "coordinate basis" for some of the objects of the theory that do not have this simple relationship to the coordinates used to label points in space and time. 
(In mathematical terms, the theory involves a fiber bundle in which the fiber at each point of the base space consists of possible coordinate bases for use when describing the values of objects at that point.) In order to spell out a mathematical configuration, one must choose a particular coordinate basis at each point (a local section of the fiber bundle) and express the values of the objects of the theory (usually "fields" in the physicist's sense) using this basis. Two such mathematical configurations are equivalent (describe the same physical situation) if they are related by a transformation of this abstract coordinate basis (a change of local section, or gauge transformation). In most gauge theories, the set of possible transformations of the abstract gauge basis at an individual point in space and time is a finite-dimensional Lie group. The simplest such group is U(1), which appears in the modern formulation of quantum electrodynamics (QED) via its use of complex numbers. QED is generally regarded as the first, and simplest, physical gauge theory. The set of possible gauge transformations of the entire configuration of a given gauge theory also forms a group, the gauge group of the theory. An element of the gauge group can be parameterized by a smoothly varying function from the points of spacetime to the (finite-dimensional) Lie group, such that the value of the function and its derivatives at each point represents the action of the gauge transformation on the fiber over that point. A gauge transformation with constant parameter at every point in space and time is analogous to a rigid rotation of the geometric coordinate system; it represents a global symmetry of the gauge representation. As in the case of a rigid rotation, this gauge transformation affects expressions that represent the rate of change along a path of some gauge-dependent quantity in the same way as those that represent a truly local quantity. A gauge transformation whose parameter is not a constant function is referred to as a local symmetry; its effect on expressions that involve a derivative is qualitatively different from that on expressions that do not. (This is analogous to a non-inertial change of reference frame, which can produce a Coriolis effect.) Gauge fields The "gauge covariant" version of a gauge theory accounts for this effect by introducing a gauge field (in mathematical language, an Ehresmann connection) and formulating all rates of change in terms of the covariant derivative with respect to this connection. The gauge field becomes an essential part of the description of a mathematical configuration. A configuration in which the gauge field can be eliminated by a gauge transformation has the property that its field strength (in mathematical language, its curvature) is zero everywhere; a gauge theory is not limited to these configurations. In other words, the distinguishing characteristic of a gauge theory is that the gauge field does not merely compensate for a poor choice of coordinate system; there is generally no gauge transformation that makes the gauge field vanish. When analyzing the dynamics of a gauge theory, the gauge field must be treated as a dynamical variable, similar to other objects in the description of a physical situation. In addition to its interaction with other objects via the covariant derivative, the gauge field typically contributes energy in the form of a "self-energy" term. 
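As a concrete illustration of the preceding paragraphs, consider the abelian case with gauge group U(1); this is standard textbook material rather than a statement taken from this article, and sign conventions differ between sources. A matter field acquires a spacetime-dependent phase, and the gauge field shifts in a compensating way, so that the covariant derivative of the field transforms exactly like the field itself:

\psi(x) \to e^{i\alpha(x)}\psi(x), \qquad A_\mu(x) \to A_\mu(x) - \frac{1}{e}\partial_\mu\alpha(x), \qquad D_\mu\psi \equiv (\partial_\mu + i e A_\mu)\psi \;\to\; e^{i\alpha(x)} D_\mu\psi .

The associated field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu is left unchanged by this transformation, which is the sense in which the curvature is an intrinsic, gauge-invariant quantity, whereas the gauge field itself is not.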
One can obtain the equations for the gauge theory by: starting from a naïve ansatz without the gauge field (in which the derivatives appear in a "bare" form); listing those global symmetries of the theory that can be characterized by a continuous parameter (generally an abstract equivalent of a rotation angle); computing the correction terms that result from allowing the symmetry parameter to vary from place to place; and reinterpreting these correction terms as couplings to one or more gauge fields, and giving these fields appropriate self-energy terms and dynamical behavior. This is the sense in which a gauge theory "extends" a global symmetry to a local symmetry, and closely resembles the historical development of the gauge theory of gravity known as general relativity. Physical experiments Gauge theories used to model the results of physical experiments engage in: limiting the universe of possible configurations to those consistent with the information used to set up the experiment, and then computing the probability distribution of the possible outcomes that the experiment is designed to measure. We cannot express the mathematical descriptions of the "setup information" and the "possible measurement outcomes", or the "boundary conditions" of the experiment, without reference to a particular coordinate system, including a choice of gauge. The assumption that an experiment is adequately isolated from "external" influences is itself a gauge-dependent statement. Mishandling gauge-dependence calculations in boundary conditions is a frequent source of anomalies, and approaches to anomaly avoidance classify gauge theories. Continuum theories The two gauge theories mentioned above, continuum electrodynamics and general relativity, are continuum field theories. The techniques of calculation in a continuum theory implicitly assume that: given a completely fixed choice of gauge, the boundary conditions of an individual configuration are completely described; that, given a completely fixed gauge and a complete set of boundary conditions, the least action determines a unique mathematical configuration and therefore a unique physical situation consistent with these bounds; and that fixing the gauge introduces no anomalies in the calculation, due either to gauge dependence in describing partial information about boundary conditions or to incompleteness of the theory. Determination of the likelihood of possible measurement outcomes proceeds by: establishing a probability distribution over all physical situations determined by boundary conditions consistent with the setup information; establishing a probability distribution of measurement outcomes for each possible physical situation; and convolving these two probability distributions to get a distribution of possible measurement outcomes consistent with the setup information. These assumptions have enough validity across a wide range of energy scales and experimental conditions to allow these theories to make accurate predictions about almost all of the phenomena encountered in daily life: light, heat, and electricity, eclipses, spaceflight, etc. They fail only at the smallest and largest scales due to omissions in the theories themselves, and when the mathematical techniques themselves break down, most notably in the case of turbulence and other chaotic phenomena. 
Quantum field theories Other than these classical continuum field theories, the most widely known gauge theories are quantum field theories, including quantum electrodynamics and the Standard Model of elementary particle physics. The starting point of a quantum field theory is much like that of its continuum analog: a gauge-covariant action integral that characterizes "allowable" physical situations according to the principle of least action. However, continuum and quantum theories differ significantly in how they handle the excess degrees of freedom represented by gauge transformations. Continuum theories, and most pedagogical treatments of the simplest quantum field theories, use a gauge fixing prescription to reduce the orbit of mathematical configurations that represent a given physical situation to a smaller orbit related by a smaller gauge group (the global symmetry group, or perhaps even the trivial group). More sophisticated quantum field theories, in particular those that involve a non-abelian gauge group, break the gauge symmetry within the techniques of perturbation theory by introducing additional fields (the Faddeev–Popov ghosts) and counterterms motivated by anomaly cancellation, in an approach known as BRST quantization. While these concerns are in one sense highly technical, they are also closely related to the nature of measurement, the limits on knowledge of a physical situation, and the interactions between incompletely specified experimental conditions and incompletely understood physical theory. The mathematical techniques that have been developed in order to make gauge theories tractable have found many other applications, from solid-state physics and crystallography to low-dimensional topology. Classical gauge theory Classical electromagnetism In electrostatics, one can either discuss the electric field, E, or its corresponding electric potential, V. Knowledge of one makes it possible to find the other, except that potentials differing by a constant, , correspond to the same electric field. This is because the electric field relates to changes in the potential from one point in space to another, and the constant C would cancel out when subtracting to find the change in potential. In terms of vector calculus, the electric field is the gradient of the potential, . Generalizing from static electricity to electromagnetism, we have a second potential, the vector potential A, with The general gauge transformations now become not just but where f is any twice continuously differentiable function that depends on position and time. The electromagnetic fields remain the same under the gauge transformation. Example: scalar O(n) gauge theory The remainder of this section requires some familiarity with classical or quantum field theory, and the use of Lagrangians. Definitions in this section: gauge group, gauge field, interaction Lagrangian, gauge boson. The following illustrates how local gauge invariance can be "motivated" heuristically starting from global symmetry properties, and how it leads to an interaction between originally non-interacting fields. Consider a set of non-interacting real scalar fields, with equal masses m. This system is described by an action that is the sum of the (usual) action for each scalar field The Lagrangian (density) can be compactly written as by introducing a vector of fields The term is the partial derivative of along dimension . 
It is now transparent that the Lagrangian is invariant under the transformation whenever G is a constant matrix belonging to the n-by-n orthogonal group O(n). This is seen to preserve the Lagrangian, since the derivative of transforms identically to and both quantities appear inside dot products in the Lagrangian (orthogonal transformations preserve the dot product). This characterizes the global symmetry of this particular Lagrangian, and the symmetry group is often called the gauge group; the mathematical term is structure group, especially in the theory of G-structures. Incidentally, Noether's theorem implies that invariance under this group of transformations leads to the conservation of the currents where the Ta matrices are generators of the SO(n) group. There is one conserved current for every generator. Now, demanding that this Lagrangian should have local O(n)-invariance requires that the G matrices (which were earlier constant) should be allowed to become functions of the spacetime coordinates x. In this case, the G matrices do not "pass through" the derivatives, when G = G(x), The failure of the derivative to commute with "G" introduces an additional term (in keeping with the product rule), which spoils the invariance of the Lagrangian. In order to rectify this we define a new derivative operator such that the derivative of again transforms identically with This new "derivative" is called a (gauge) covariant derivative and takes the form where g is called the coupling constant; a quantity defining the strength of an interaction. After a simple calculation we can see that the gauge field A(x) must transform as follows The gauge field is an element of the Lie algebra, and can therefore be expanded as There are therefore as many gauge fields as there are generators of the Lie algebra. Finally, we now have a locally gauge invariant Lagrangian Pauli uses the term gauge transformation of the first type to mean the transformation of , while the compensating transformation in is called a gauge transformation of the second type. The difference between this Lagrangian and the original globally gauge-invariant Lagrangian is seen to be the interaction Lagrangian This term introduces interactions between the n scalar fields just as a consequence of the demand for local gauge invariance. However, to make this interaction physical and not completely arbitrary, the mediator A(x) needs to propagate in space. That is dealt with in the next section by adding yet another term, , to the Lagrangian. In the quantized version of the obtained classical field theory, the quanta of the gauge field A(x) are called gauge bosons. The interpretation of the interaction Lagrangian in quantum field theory is of scalar bosons interacting by the exchange of these gauge bosons. Yang–Mills Lagrangian for the gauge field The picture of a classical gauge theory developed in the previous section is almost complete, except for the fact that to define the covariant derivatives D, one needs to know the value of the gauge field at all spacetime points. Instead of manually specifying the values of this field, it can be given as the solution to a field equation. Further requiring that the Lagrangian that generates this field equation is locally gauge invariant as well, one possible form for the gauge field Lagrangian is where the are obtained from potentials , being the components of , by and the are the structure constants of the Lie algebra of the generators of the gauge group. 
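The formulas referred to in this passage were not preserved in the present text. For orientation, in one common convention (signs and normalizations vary between sources) the covariant derivative, the transformation of the gauge field, its expansion in generators, the field strength, and the gauge-field Lagrangian read

D_\mu \Phi = \partial_\mu \Phi + g A_\mu(x) \Phi, \qquad A_\mu \to G A_\mu G^{-1} - \frac{1}{g}(\partial_\mu G)G^{-1}, \qquad A_\mu(x) = \sum_a A^a_\mu(x)\, T^a,

F^a_{\mu\nu} = \partial_\mu A^a_\nu - \partial_\nu A^a_\mu + g f^{abc} A^b_\mu A^c_\nu, \qquad \mathcal{L}_{\mathrm{gf}} = -\frac{1}{4} F^a_{\mu\nu} F^{a\,\mu\nu},

with f^{abc} the structure constants of the Lie algebra.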
This formulation of the Lagrangian is called a Yang–Mills action. Other gauge invariant actions also exist (e.g., nonlinear electrodynamics, Born–Infeld action, Chern–Simons model, theta term, etc.). In this Lagrangian term there is no field whose transformation counterweighs the one of . Invariance of this term under gauge transformations is a particular case of a priori classical (geometrical) symmetry. This symmetry must be restricted in order to perform quantization, the procedure being denominated gauge fixing, but even after restriction, gauge transformations may be possible. The complete Lagrangian for the gauge theory is now Example: electrodynamics As a simple application of the formalism developed in the previous sections, consider the case of electrodynamics, with only the electron field. The bare-bones action that generates the electron field's Dirac equation is The global symmetry for this system is The gauge group here is U(1), just rotations of the phase angle of the field, with the particular rotation determined by the constant . "Localising" this symmetry implies the replacement of by . An appropriate covariant derivative is then Identifying the "charge" (not to be confused with the mathematical constant e in the symmetry description) with the usual electric charge (this is the origin of the usage of the term in gauge theories), and the gauge field with the four-vector potential of the electromagnetic field results in an interaction Lagrangian where is the electric current four vector in the Dirac field. The gauge principle is therefore seen to naturally introduce the so-called minimal coupling of the electromagnetic field to the electron field. Adding a Lagrangian for the gauge field in terms of the field strength tensor exactly as in electrodynamics, one obtains the Lagrangian used as the starting point in quantum electrodynamics. Mathematical formalism Gauge theories are usually discussed in the language of differential geometry. Mathematically, a gauge is just a choice of a (local) section of some principal bundle. A gauge transformation is just a transformation between two such sections. Although gauge theory is dominated by the study of connections (primarily because it's mainly studied by high-energy physicists), the idea of a connection is not central to gauge theory in general. In fact, a result in general gauge theory shows that affine representations (i.e., affine modules) of the gauge transformations can be classified as sections of a jet bundle satisfying certain properties. There are representations that transform covariantly pointwise (called by physicists gauge transformations of the first kind), representations that transform as a connection form (called by physicists gauge transformations of the second kind, an affine representation)—and other more general representations, such as the B field in BF theory. There are more general nonlinear representations (realizations), but these are extremely complicated. Still, nonlinear sigma models transform nonlinearly, so there are applications. If there is a principal bundle P whose base space is space or spacetime and structure group is a Lie group, then the sections of P form a principal homogeneous space of the group of gauge transformations. Connections (gauge connection) define this principal bundle, yielding a covariant derivative ∇ in each associated vector bundle. 
If a local frame is chosen (a local basis of sections), then this covariant derivative is represented by the connection form A, a Lie algebra-valued 1-form, which is called the gauge potential in physics. This is evidently not an intrinsic but a frame-dependent quantity. The curvature form F, a Lie algebra-valued 2-form that is an intrinsic quantity, is constructed from a connection form by where d stands for the exterior derivative and stands for the wedge product. ( is an element of the vector space spanned by the generators , and so the components of do not commute with one another. Hence the wedge product does not vanish.) Infinitesimal gauge transformations form a Lie algebra, which is characterized by a smooth Lie-algebra-valued scalar, ε. Under such an infinitesimal gauge transformation, where is the Lie bracket. One nice thing is that if , then where D is the covariant derivative Also, , which means transforms covariantly. Not all gauge transformations can be generated by infinitesimal gauge transformations in general. An example is when the base manifold is a compact manifold without boundary such that the homotopy class of mappings from that manifold to the Lie group is nontrivial. See instanton for an example. The Yang–Mills action is now given by where is the Hodge star operator and the integral is defined as in differential geometry. A quantity which is gauge-invariant (i.e., invariant under gauge transformations) is the Wilson loop, which is defined over any closed path, γ, as follows: where χ is the character of a complex representation ρ and represents the path-ordered operator. The formalism of gauge theory carries over to a general setting. For example, it is sufficient to ask that a vector bundle have a metric connection; when one does so, one finds that the metric connection satisfies the Yang–Mills equations of motion. Quantization of gauge theories Gauge theories may be quantized by specialization of methods which are applicable to any quantum field theory. However, because of the subtleties imposed by the gauge constraints (see section on Mathematical formalism, above) there are many technical problems to be solved which do not arise in other field theories. At the same time, the richer structure of gauge theories allows simplification of some computations: for example Ward identities connect different renormalization constants. Methods and aims The first gauge theory quantized was quantum electrodynamics (QED). The first methods developed for this involved gauge fixing and then applying canonical quantization. The Gupta–Bleuler method was also developed to handle this problem. Non-abelian gauge theories are now handled by a variety of means. Methods for quantization are covered in the article on quantization. The main point to quantization is to be able to compute quantum amplitudes for various processes allowed by the theory. Technically, they reduce to the computations of certain correlation functions in the vacuum state. This involves a renormalization of the theory. When the running coupling of the theory is small enough, then all required quantities may be computed in perturbation theory. Quantization schemes intended to simplify such computations (such as canonical quantization) may be called perturbative quantization schemes. At present some of these methods lead to the most precise experimental tests of gauge theories. However, in most gauge theories, there are many interesting questions which are non-perturbative. 
Quantization schemes suited to these problems (such as lattice gauge theory) may be called non-perturbative quantization schemes. Precise computations in such schemes often require supercomputing, and are therefore less well-developed currently than other schemes. Anomalies Some of the symmetries of the classical theory are then seen not to hold in the quantum theory; a phenomenon called an anomaly. Among the most well known are: The scale anomaly, which gives rise to a running coupling constant. In QED this gives rise to the phenomenon of the Landau pole. In quantum chromodynamics (QCD) this leads to asymptotic freedom. The chiral anomaly in either chiral or vector field theories with fermions. This has close connection with topology through the notion of instantons. In QCD this anomaly causes the decay of a pion to two photons. The gauge anomaly, which must cancel in any consistent physical theory. In the electroweak theory this cancellation requires an equal number of quarks and leptons. Pure gauge A pure gauge is the set of field configurations obtained by a gauge transformation on the null-field configuration, i.e., a gauge transform of zero. So it is a particular "gauge orbit" in the field configuration's space. Thus, in the abelian case, where , the pure gauge is just the set of field configurations for all .
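The defining formula at the end of the preceding paragraph was lost in this text; written out in the standard way, a pure-gauge configuration in the abelian case is

A_\mu(x) = \partial_\mu f(x) \quad \text{for an arbitrary smooth function } f,

and its field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu vanishes identically, as expected for a gauge transform of the zero-field configuration.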
Physical sciences
Basics_6
null
19757699
https://en.wikipedia.org/wiki/Local%20Void
Local Void
The Local Void is a vast, empty region of space, lying adjacent to the Local Group. Discovered by Brent Tully and Rick Fisher in 1987, the Local Void is now known to be composed of three separate sectors, separated by bridges of "wispy filaments". The precise extent of the void is unknown, but it is at least 45 Mpc (150 million light-years) across, and possibly 150 to 300 Mpc. The Local Void appears to have significantly fewer galaxies than expected from standard cosmology. Location and dimensions Voids are affected by the way gravity causes matter in the universe to "clump together", herding galaxies into clusters and chains, which are separated by regions mostly devoid of galaxies, yet the exact mechanisms are subject to scientific debate. Astronomers have previously noticed that the Milky Way sits in a large, flat array of galaxies called the Local Sheet, which bounds the Local Void. The Local Void extends approximately , beginning at the edge of the Local Group. It is believed that the distance from Earth to the centre of the Local Void must be at least . The size of the Local Void was calculated due to an isolated dwarf galaxy known as ESO 461-36 located inside it. The bigger and emptier the void, the weaker its gravity, and the faster the dwarf should be fleeing the void towards concentrations of matter, yet discrepancies give room for competing theories. Dark energy has been suggested as one alternative explanation for the speedy expulsion of the dwarf galaxy. An earlier "Hubble Bubble" model, based on measured velocities of Type 1a supernovae, proposed a relative void centred on the Milky Way. Recent analysis of that data, however, suggested that interstellar dust had resulted in misleading measurements. Several authors have shown that the local universe up to 300 Mpc from the Milky Way is less dense than surrounding areas – by 15–50%. This has been called the Local Void or Local Hole. Some media reports have dubbed it the KBC Void, although this name has not been taken up in other publications. Effect on surroundings Scientists believe that the Local Void is growing and that the Local Sheet, which makes up one wall of the void, is rushing away from the void's centre at . Concentrations of matter normally pull together, creating a larger void where matter is rushing away. The Local Void is surrounded uniformly by matter in all directions, except for one sector in which there is nothing, which has the effect of taking more matter away from that sector. The effect on the nearby galaxy is astonishingly large. The Milky Way's velocity away from the Local Void is . List of void galaxies Several void galaxies have been found within the Local Void. These include:
Physical sciences
Notable patches of universe
Astronomy
19759220
https://en.wikipedia.org/wiki/Banach%E2%80%93Tarski%20paradox
Banach–Tarski paradox
The Banach–Tarski paradox is a theorem in set-theoretic geometry, which states the following: Given a solid ball in three-dimensional space, there exists a decomposition of the ball into a finite number of disjoint subsets, which can then be put back together in a different way to yield two identical copies of the original ball. Indeed, the reassembly process involves only moving the pieces around and rotating them, without changing their original shape. However, the pieces themselves are not "solids" in the traditional sense, but infinite scatterings of points. The reconstruction can work with as few as five pieces. An alternative form of the theorem states that given any two "reasonable" solid objects (such as a small ball and a huge ball), the cut pieces of either one can be reassembled into the other. This is often stated informally as "a pea can be chopped up and reassembled into the Sun" and called the "pea and the Sun paradox". The theorem is called a paradox because it contradicts basic geometric intuition. "Doubling the ball" by dividing it into parts and moving them around by rotations and translations, without any stretching, bending, or adding new points, seems to be impossible, since all these operations ought, intuitively speaking, to preserve the volume. The intuition that such operations preserve volumes is not mathematically absurd and it is even included in the formal definition of volumes. However, this is not applicable here because in this case it is impossible to define the volumes of the considered subsets. Reassembling them reproduces a set that has a volume, which happens to be different from the volume at the start. Unlike most theorems in geometry, the mathematical proof of this result depends on the choice of axioms for set theory in a critical way. It can be proven using the axiom of choice, which allows for the construction of non-measurable sets, i.e., collections of points that do not have a volume in the ordinary sense, and whose construction requires an uncountable number of choices. It was shown in 2005 that the pieces in the decomposition can be chosen in such a way that they can be moved continuously into place without running into one another. As proved independently by Leroy and Simpson, the Banach–Tarski paradox does not violate volumes if one works with locales rather than topological spaces. In this abstract setting, it is possible to have subspaces without point but still nonempty. The parts of the paradoxical decomposition do intersect a lot in the sense of locales, so much that some of these intersections should be given a positive mass. Allowing for this hidden mass to be taken into account, the theory of locales permits all subsets (and even all sublocales) of the Euclidean space to be satisfactorily measured. Banach and Tarski publication In a paper published in 1924, Stefan Banach and Alfred Tarski gave a construction of such a paradoxical decomposition, based on earlier work by Giuseppe Vitali concerning the unit interval and on the paradoxical decompositions of the sphere by Felix Hausdorff, and discussed a number of related questions concerning decompositions of subsets of Euclidean spaces in various dimensions. 
They proved the following more general statement, the strong form of the Banach–Tarski paradox: Given any two bounded subsets and of a Euclidean space in at least three dimensions, both of which have a nonempty interior, there are partitions of and into a finite number of disjoint subsets, , (for some integer k), such that for each (integer) between and , the sets and are congruent. Now let be the original ball and be the union of two translated copies of the original ball. Then the proposition means that the original ball can be divided into a certain number of pieces and then be rotated and translated in such a way that the result is the whole set , which contains two copies of . The strong form of the Banach–Tarski paradox is false in dimensions one and two, but Banach and Tarski showed that an analogous statement remains true if countably many subsets are allowed. The difference between dimensions 1 and 2 on the one hand, and 3 and higher on the other hand, is due to the richer structure of the group of Euclidean motions in 3 dimensions. For the group is solvable, but for it contains a free group with two generators. John von Neumann studied the properties of the group of equivalences that make a paradoxical decomposition possible, and introduced the notion of amenable groups. He also found a form of the paradox in the plane which uses area-preserving affine transformations in place of the usual congruences. Tarski proved that amenable groups are precisely those for which no paradoxical decompositions exist. Since only free subgroups are needed in the Banach–Tarski paradox, this led to the long-standing von Neumann conjecture, which was disproved in 1980. Formal treatment The Banach–Tarski paradox states that a ball in the ordinary Euclidean space can be doubled using only the operations of partitioning into subsets, replacing a set with a congruent set, and reassembling. Its mathematical structure is greatly elucidated by emphasizing the role played by the group of Euclidean motions and introducing the notions of equidecomposable sets and a paradoxical set. Suppose that is a group acting on a set . In the most important special case, is an -dimensional Euclidean space (for integral n), and consists of all isometries of , i.e. the transformations of into itself that preserve the distances, usually denoted . Two geometric figures that can be transformed into each other are called congruent, and this terminology will be extended to the general -action. Two subsets and of are called -equidecomposable, or equidecomposable with respect to , if and can be partitioned into the same finite number of respectively -congruent pieces. This defines an equivalence relation among all subsets of . Formally, if there exist non-empty sets , such that and there exist elements such that then it can be said that and are -equidecomposable using pieces. If a set has two disjoint subsets and such that and , as well as and , are -equidecomposable, then is called paradoxical. Using this terminology, the Banach–Tarski paradox can be reformulated as follows: A three-dimensional Euclidean ball is equidecomposable with two copies of itself. In fact, there is a sharp result in this case, due to Raphael M. Robinson: doubling the ball can be accomplished with five pieces, and fewer than five pieces will not suffice. The strong version of the paradox claims: Any two bounded subsets of 3-dimensional Euclidean space with non-empty interiors are equidecomposable. 
While apparently more general, this statement is derived in a simple way from the doubling of a ball by using a generalization of the Bernstein–Schroeder theorem due to Banach that implies that if is equidecomposable with a subset of and is equidecomposable with a subset of , then and are equidecomposable. The Banach–Tarski paradox can be put in context by pointing out that for two sets in the strong form of the paradox, there is always a bijective function that can map the points in one shape into the other in a one-to-one fashion. In the language of Georg Cantor's set theory, these two sets have equal cardinality. Thus, if one enlarges the group to allow arbitrary bijections of , then all sets with non-empty interior become congruent. Likewise, one ball can be made into a larger or smaller ball by stretching, or in other words, by applying similarity transformations. Hence, if the group is large enough, -equidecomposable sets may be found whose "size"s vary. Moreover, since a countable set can be made into two copies of itself, one might expect that using countably many pieces could somehow do the trick. On the other hand, in the Banach–Tarski paradox, the number of pieces is finite and the allowed equivalences are Euclidean congruences, which preserve the volumes. Yet, somehow, they end up doubling the volume of the ball. While this is certainly surprising, some of the pieces used in the paradoxical decomposition are non-measurable sets, so the notion of volume (more precisely, Lebesgue measure) is not defined for them, and the partitioning cannot be accomplished in a practical way. In fact, the Banach–Tarski paradox demonstrates that it is impossible to find a finitely-additive measure (or a Banach measure) defined on all subsets of a Euclidean space of three (and greater) dimensions that is invariant with respect to Euclidean motions and takes the value one on a unit cube. In his later work, Tarski showed that, conversely, non-existence of paradoxical decompositions of this type implies the existence of a finitely-additive invariant measure. The heart of the proof of the "doubling the ball" form of the paradox presented below is the remarkable fact that by a Euclidean isometry (and renaming of elements), one can divide a certain set (essentially, the surface of a unit sphere) into four parts, then rotate one of them to become itself plus two of the other parts. This follows rather easily from a -paradoxical decomposition of , the free group with two generators. Banach and Tarski's proof relied on an analogous fact discovered by Hausdorff some years earlier: the surface of a unit sphere in space is a disjoint union of three sets and a countable set such that, on the one hand, are pairwise congruent, and on the other hand, is congruent with the union of and . This is often called the Hausdorff paradox. Connection with earlier work and the role of the axiom of choice Banach and Tarski explicitly acknowledge Giuseppe Vitali's 1905 construction of the set bearing his name, Hausdorff's paradox (1914), and an earlier (1923) paper of Banach as the precursors to their work. Vitali's and Hausdorff's constructions depend on Zermelo's axiom of choice ("AC"), which is also crucial to the Banach–Tarski paper, both for proving their paradox and for the proof of another result: Two Euclidean polygons, one of which strictly contains the other, are not equidecomposable. 
They remark: (The role this axiom plays in our reasoning seems to us to deserve attention) They point out that while the second result fully agrees with geometric intuition, its proof uses AC in an even more substantial way than the proof of the paradox. Thus Banach and Tarski imply that AC should not be rejected solely because it produces a paradoxical decomposition, for such an argument also undermines proofs of geometrically intuitive statements. However, in 1949, A. P. Morse showed that the statement about Euclidean polygons can be proved in ZF set theory and thus does not require the axiom of choice. In 1964, Paul Cohen proved that the axiom of choice is independent from ZF – that is, choice cannot be proved from ZF. A weaker version of an axiom of choice is the axiom of dependent choice, DC, and it has been shown that DC is sufficient for proving the Banach–Tarski paradox, that is, The Banach–Tarski paradox is not a theorem of ZF, nor of ZF+DC, assuming their consistency. Large amounts of mathematics use AC. As Stan Wagon points out at the end of his monograph, the Banach–Tarski paradox has been more significant for its role in pure mathematics than for foundational questions: it motivated a fruitful new direction for research, the amenability of groups, which has nothing to do with the foundational questions. In 1991, using then-recent results by Matthew Foreman and Friedrich Wehrung, Janusz Pawlikowski proved that the Banach–Tarski paradox follows from ZF plus the Hahn–Banach theorem. The Hahn–Banach theorem does not rely on the full axiom of choice but can be proved using a weaker version of AC called the ultrafilter lemma. A sketch of the proof Here a proof is sketched which is similar but not identical to that given by Banach and Tarski. Essentially, the paradoxical decomposition of the ball is achieved in four steps: Find a paradoxical decomposition of the free group in two generators. Find a group of rotations in 3-d space isomorphic to the free group in two generators. Use the paradoxical decomposition of that group and the axiom of choice to produce a paradoxical decomposition of the hollow unit sphere. Extend this decomposition of the sphere to a decomposition of the solid unit ball. These steps are discussed in more detail below. Step 1 The free group with two generators a and b consists of all finite strings that can be formed from the four symbols a, a−1, b and b−1 such that no a appears directly next to an a−1 and no b appears directly next to a b−1. Two such strings can be concatenated and converted into a string of this type by repeatedly replacing the "forbidden" substrings with the empty string. For instance: abab−1a−1 concatenated with abab−1a yields abab−1a−1abab−1a, which contains the substring a−1a, and so gets reduced to abab−1bab−1a, which contains the substring b−1b, which gets reduced to . One can check that the set of those strings with this operation forms a group with identity element the empty string e. This group may be called F2. The group can be "paradoxically decomposed" as follows: Let S(a) be the subset of consisting of all strings that start with a, and define S(a−1), S(b) and S(b−1) similarly. Clearly, but also and where the notation aS(a−1) means take all the strings in S(a−1) and concatenate them on the left with a. This is at the core of the proof. For example, there may be a string in the set which, because of the rule that must not appear next to , reduces to the string . 
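The reduction rule can be made concrete with a short, purely illustrative Python sketch (not part of the original text; the encoding of a^-1 as "A" and b^-1 as "B" and the function name are ad hoc):

def reduce_word(word):
    # Cancel adjacent inverse pairs ('a' next to 'A', 'b' next to 'B') with a stack,
    # which also handles cancellations that only become adjacent after earlier ones.
    out = []
    for ch in word:
        if out and out[-1] == ch.swapcase():
            out.pop()          # the pair cancels
        else:
            out.append(ch)
    return "".join(out)

# The concatenation example from the text: (a b a b^-1 a^-1) times (a b a b^-1 a)
print(reduce_word("abaBA" + "abaBa"))   # -> "abaaBa"

# Left-multiplying a word that starts with a^-1 by a: the leading a^-1 cancels,
# so the result starts with a^-1, b, b^-1, or is the empty string, never with a.
print(reduce_word("a" + "Aba"))         # -> "ba"
print(reduce_word("a" + "A"))           # -> ""

This makes it easy to experiment with the sets S(a), S(a^-1), S(b) and S(b^-1) described above.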
Similarly, contains all the strings that start with (for example, the string which reduces to ). In this way, contains all the strings that start with , and , as well as the empty string . Group F2 has been cut into four pieces (plus the singleton {e}), then two of them "shifted" by multiplying with a or b, then "reassembled" as two pieces to make one copy of and the other two to make another copy of . That is exactly what is intended to do to the ball. Step 2 In order to find a free group of rotations of 3D space, i.e. that behaves just like (or "is isomorphic to") the free group F2, two orthogonal axes are taken (e.g. the x and z axes). Then, A is taken to be a rotation of about the x axis, and B to be a rotation of about the z axis (there are many other suitable pairs of irrational multiples of π that could be used here as well). The group of rotations generated by A and B will be called H. Let be an element of H that starts with a positive rotation about the z axis, that is, an element of the form with . It can be shown by induction that maps the point to , for some . Analyzing and modulo 3, one can show that . The same argument repeated (by symmetry of the problem) is valid when starts with a negative rotation about the z axis, or a rotation about the x axis. This shows that if is given by a non-trivial word in A and B, then . Therefore, the group H is a free group, isomorphic to F2. The two rotations behave just like the elements a and b in the group F2: there is now a paradoxical decomposition of H. This step cannot be performed in two dimensions since it involves rotations in three dimensions. If two nontrivial rotations are taken about the same axis, the resulting group is either (if the ratio between the two angles is rational) or the free abelian group over two elements; either way, it does not have the property required in step 1. An alternative arithmetic proof of the existence of free groups in some special orthogonal groups using integral quaternions leads to paradoxical decompositions of the rotation group. Step 3 The unit sphere S2 is partitioned into orbits by the action of our group H: two points belong to the same orbit if and only if there is a rotation in H which moves the first point into the second. (Note that the orbit of a point is a dense set in S2.) The axiom of choice can be used to pick exactly one point from every orbit; collect these points into a set M. The action of H on a given orbit is free and transitive and so each orbit can be identified with H. In other words, every point in S2 can be reached in exactly one way by applying the proper rotation from H to the proper element from M. Because of this, the paradoxical decomposition of H yields a paradoxical decomposition of S2 into four pieces A1, A2, A3, A4 as follows: where we define and likewise for the other sets, and where we define (The five "paradoxical" parts of F2 were not used directly, as they would leave M as an extra piece after doubling, owing to the presence of the singleton {e}.) The (majority of the) sphere has now been divided into four sets (each one dense on the sphere), and when two of these are rotated, the result is double of what was had before: Step 4 Finally, connect every point on S2 with a half-open segment to the origin; the paradoxical decomposition of S2 then yields a paradoxical decomposition of the solid unit ball minus the point at the ball's center. (This center point needs a bit more care; see below.) N.B. This sketch glosses over some details. 
One has to be careful about the set of points on the sphere which happen to lie on the axis of some rotation in H. However, there are only countably many such points, and like the case of the point at the center of the ball, it is possible to patch the proof to account for them all. (See below.) Some details, fleshed out In Step 3, the sphere was partitioned into orbits of our group H. To streamline the proof, the discussion of points that are fixed by some rotation was omitted; since the paradoxical decomposition of F2 relies on shifting certain subsets, the fact that some points are fixed might cause some trouble. Since any rotation of S2 (other than the null rotation) has exactly two fixed points, and since H, which is isomorphic to F2, is countable, there are countably many points of S2 that are fixed by some rotation in H. Denote this set of fixed points as D. Step 3 proves that S2 − D admits a paradoxical decomposition. What remains to be shown is the Claim: S2 − D is equidecomposable with S2. Proof. Let λ be some line through the origin that does not intersect any point in D. This is possible since D is countable. Let J be the set of angles, α, such that for some natural number n, and some P in D, r(nα)P is also in D, where r(nα) is a rotation about λ of nα. Then J is countable. So there exists an angle θ not in J. Let ρ be the rotation about λ by θ. Then ρ acts on S2 with no fixed points in D, i.e., ρn(D) is disjoint from D, and for natural m<n, ρn(D) is disjoint from ρm(D). Let E be the disjoint union of ρn(D) over n = 0, 1, 2, ... . Then S2 = E ∪ (S2 − E) ~ ρ(E) ∪ (S2 − E) = (E − D) ∪ (S2 − E) = S2 − D, where ~ denotes "is equidecomposable to". For step 4, it has already been shown that the ball minus a point admits a paradoxical decomposition; it remains to be shown that the ball minus a point is equidecomposable with the ball. Consider a circle within the ball, containing the point at the center of the ball. Using an argument like that used to prove the Claim, one can see that the full circle is equidecomposable with the circle minus the point at the ball's center. (Basically, a countable set of points on the circle can be rotated to give itself plus one more point.) Note that this involves the rotation about a point other than the origin, so the Banach–Tarski paradox involves isometries of Euclidean 3-space rather than just SO(3). Use is made of the fact that if A ~ B and B ~ C, then A ~ C. The decomposition of A into C can be done using number of pieces equal to the product of the numbers needed for taking A into B and for taking B into C. The proof sketched above requires 2 × 4 × 2 + 8 = 24 pieces - a factor of 2 to remove fixed points, a factor 4 from step 1, a factor 2 to recreate fixed points, and 8 for the center point of the second ball. But in step 1 when moving {e} and all strings of the form an into S(a−1), do this to all orbits except one. Move {e} of this last orbit to the center point of the second ball. This brings the total down to 16 + 1 pieces. With more algebra, one can also decompose fixed orbits into 4 sets as in step 1. This gives 5 pieces and is the best possible. Obtaining infinitely many balls from one Using the Banach–Tarski paradox, it is possible to obtain k copies of a ball in the Euclidean n-space from one, for any integers n ≥ 3 and k ≥ 1, i.e. a ball can be cut into k pieces so that each of them is equidecomposable to a ball of the same size as the original. 
Using the fact that the free group F2 of rank 2 admits a free subgroup of countably infinite rank, a similar proof yields that the unit sphere Sn−1 can be partitioned into countably infinitely many pieces, each of which is equidecomposable (with two pieces) to Sn−1 using rotations. By using analytic properties of the rotation group SO(n), which is a connected analytic Lie group, one can further prove that the sphere Sn−1 can be partitioned into as many pieces as there are real numbers (that is, 2^ℵ0 pieces), so that each piece is equidecomposable with two pieces to Sn−1 using rotations. These results then extend to the unit ball deprived of the origin. A 2010 article by Valeriy Churkin gives a new proof of the continuous version of the Banach–Tarski paradox. Von Neumann paradox in the Euclidean plane In the Euclidean plane, two figures that are equidecomposable with respect to the group of Euclidean motions are necessarily of the same area, and therefore, a paradoxical decomposition of a square or disk of Banach–Tarski type that uses only Euclidean congruences is impossible. A conceptual explanation of the distinction between the planar and higher-dimensional cases was given by John von Neumann: unlike the group SO(3) of rotations in three dimensions, the group E(2) of Euclidean motions of the plane is solvable, which implies the existence of a finitely-additive measure on E(2) and R2 which is invariant under translations and rotations, and rules out paradoxical decompositions of non-negligible sets. Von Neumann then posed the following question: can such a paradoxical decomposition be constructed if one allows a larger group of equivalences? It is clear that if one permits similarities, any two squares in the plane become equivalent even without further subdivision. This motivates restricting one's attention to the group SA2 of area-preserving affine transformations. Since the area is preserved, any paradoxical decomposition of a square with respect to this group would be counterintuitive for the same reasons as the Banach–Tarski decomposition of a ball. In fact, the group SA2 contains as a subgroup the special linear group SL(2,R), which in turn contains the free group F2 with two generators as a subgroup. This makes it plausible that the proof of the Banach–Tarski paradox can be imitated in the plane. The main difficulty here lies in the fact that the unit square is not invariant under the action of the linear group SL(2, R), hence one cannot simply transfer a paradoxical decomposition from the group to the square, as in the third step of the above proof of the Banach–Tarski paradox. Moreover, the fixed points of the group present difficulties (for example, the origin is fixed under all linear transformations). This is why von Neumann used the larger group SA2 including the translations, and he constructed a paradoxical decomposition of the unit square with respect to the enlarged group (in 1929). Applying the Banach–Tarski method, the paradox for the square can be strengthened as follows: Any two bounded subsets of the Euclidean plane with non-empty interiors are equidecomposable with respect to the area-preserving affine maps. As von Neumann notes: "Infolgedessen gibt es bereits in der Ebene kein nichtnegatives additives Maß (wo das Einheitsquadrat das Maß 1 hat), das gegenüber allen Abbildungen von A2 invariant wäre."
"In accordance with this, already in the plane there is no non-negative additive measure (for which the unit square has a measure of 1), which is invariant with respect to all transformations belonging to A2 [the group of area-preserving affine transformations]." To explain further, the question of whether a finitely additive measure (that is preserved under certain transformations) exists or not depends on what transformations are allowed. The Banach measure of sets in the plane, which is preserved by translations and rotations, is not preserved by non-isometric transformations even when they do preserve the area of polygons. The points of the plane (other than the origin) can be divided into two dense sets which may be called A and B. If the A points of a given polygon are transformed by a certain area-preserving transformation and the B points by another, both sets can become subsets of the A points in two new polygons. The new polygons have the same area as the old polygon, but the two transformed sets cannot have the same measure as before (since they contain only part of the A points), and therefore there is no measure that "works". The class of groups isolated by von Neumann in the course of the study of the Banach–Tarski phenomenon turned out to be very important for many areas of mathematics: these are amenable groups, or groups with an invariant mean, and include all finite and all solvable groups. Generally speaking, paradoxical decompositions arise when the group used for equivalences in the definition of equidecomposability is not amenable. Recent progress 2000: Von Neumann's paper left open the possibility of a paradoxical decomposition of the interior of the unit square with respect to the linear group SL(2,R) (Wagon, Question 7.4). In 2000, Miklós Laczkovich proved that such a decomposition exists. More precisely, let A be the family of all bounded subsets of the plane with non-empty interior and at a positive distance from the origin, and B the family of all planar sets with the property that a union of finitely many translates under some elements of SL(2, R) contains a punctured neighborhood of the origin. Then all sets in the family A are SL(2, R)-equidecomposable, and likewise for the sets in B. It follows that both families consist of paradoxical sets. 2003: It had been known for a long time that the full plane was paradoxical with respect to SA2, and that the minimal number of pieces would equal four provided that there exists a locally commutative free subgroup of SA2. In 2003 Kenzi Satô constructed such a subgroup, confirming that four pieces suffice. 2011: Laczkovich's paper left open the possibility that there exists a free group F of piecewise linear transformations acting on the punctured disk D \ {(0,0)} without fixed points. Grzegorz Tomkowicz constructed such a group, showing that the system of congruences A ≈ B ≈ C ≈ B ∪ C can be realized by means of F and D \ {(0,0)}. 2017: It has been known for a long time that there exists in the hyperbolic plane H2 a set E that is a third, a fourth and ... and a 2^ℵ0-th part of H2. The requirement was satisfied by orientation-preserving isometries of H2. Analogous results were obtained by John Frank Adams and Jan Mycielski who showed that the unit sphere S2 contains a set E that is a half, a third, a fourth and ... and a 2^ℵ0-th part of S2. Grzegorz Tomkowicz showed that the Adams–Mycielski construction can be generalized to obtain a set E of H2 with the same properties as in S2.
2017: Von Neumann's paradox concerns the Euclidean plane, but there are also other classical spaces where the paradoxes are possible. For example, one can ask if there is a Banach–Tarski paradox in the hyperbolic plane H2. This was shown by Jan Mycielski and Grzegorz Tomkowicz. Tomkowicz proved also that most of the classical paradoxes are an easy consequence of a graph theoretical result and the fact that the groups in question are rich enough. 2018: In 1984, Jan Mycielski and Stan Wagon constructed a paradoxical decomposition of the hyperbolic plane H2 that uses Borel sets. The paradox depends on the existence of a properly discontinuous subgroup of the group of isometries of H2. A similar paradox was obtained in 2018 by Grzegorz Tomkowicz, who constructed a free properly discontinuous subgroup G of the affine group SA(3,Z). The existence of such a group implies the existence of a subset E of Z3 such that for any finite F of Z3 there exists an element of such that , where denotes the symmetric difference of and . 2019: Banach–Tarski paradox uses finitely many pieces in the duplication. In the case of countably many pieces, any two sets with non-empty interiors are equidecomposable using translations. But allowing only Lebesgue measurable pieces one obtains: If A and B are subsets of Rn with non-empty interiors, then they have equal Lebesgue measures if and only if they are countably equidecomposable using Lebesgue measurable pieces. Jan Mycielski and Grzegorz Tomkowicz extended this result to finite dimensional Lie groups and second countable locally compact topological groups that are totally disconnected or have countably many connected components. 2024: Robert Samuel Simon and Grzegorz Tomkowicz introduced a colouring rule of points in a Cantor space that links paradoxical decompositions with optimisation. This allows one to find an application of paradoxical decompositions in economics. 2024: Grzegorz Tomkowicz proved that in the case of non-supramenable connected Lie groups acting continuously and transitively on a metric space, bounded paradoxical sets are generic.
Mathematics
Axiomatic systems
null
19762116
https://en.wikipedia.org/wiki/Fourier-transform%20infrared%20spectroscopy
Fourier-transform infrared spectroscopy
Fourier transform infrared spectroscopy (FTIR) is a technique used to obtain an infrared spectrum of absorption or emission of a solid, liquid, or gas. An FTIR spectrometer simultaneously collects high-resolution spectral data over a wide spectral range. This confers a significant advantage over a dispersive spectrometer, which measures intensity over a narrow range of wavelengths at a time. The term Fourier transform infrared spectroscopy originates from the fact that a Fourier transform (a mathematical process) is required to convert the raw data into the actual spectrum. Conceptual introduction The goal of absorption spectroscopy techniques (FTIR, ultraviolet-visible ("UV-vis") spectroscopy, etc.) is to measure how much light a sample absorbs at each wavelength. The most straightforward way to do this, the "dispersive spectroscopy" technique, is to shine a monochromatic light beam at a sample, measure how much of the light is absorbed, and repeat for each different wavelength. (This is how some UV–vis spectrometers work, for example.) Fourier transform spectroscopy is a less intuitive way to obtain the same information. Rather than shining a monochromatic beam of light (a beam composed of only a single wavelength) at the sample, this technique shines a beam containing many frequencies of light at once and measures how much of that beam is absorbed by the sample. Next, the beam is modified to contain a different combination of frequencies, giving a second data point. This process is rapidly repeated many times over a short time span. Afterwards, a computer takes all this data and works backward to infer what the absorption is at each wavelength. The beam described above is generated by starting with a broadband light source—one containing the full spectrum of wavelengths to be measured. The light shines into a Michelson interferometer—a certain configuration of mirrors, one of which is moved by a motor. As this mirror moves, each wavelength of light in the beam is periodically blocked, transmitted, blocked, transmitted, by the interferometer, due to wave interference. Different wavelengths are modulated at different rates, so that at each moment or mirror position the beam coming out of the interferometer has a different spectrum. As mentioned, computer processing is required to turn the raw data (light absorption for each mirror position) into the desired result (light absorption for each wavelength). The processing required turns out to be a common algorithm called the Fourier transform. The Fourier transform converts one domain (in this case displacement of the mirror in cm) into its inverse domain (wavenumbers in cm−1). The raw data is called an "interferogram". History The first low-cost spectrophotometer capable of recording an infrared spectrum was the Perkin-Elmer Infracord produced in 1957. This instrument covered the wavelength range from 2.5 μm to 15 μm (wavenumber range 4,000 cm−1 to 660 cm−1). The lower wavelength limit was chosen to encompass the highest known vibration frequency due to a fundamental molecular vibration. The upper limit was imposed by the fact that the dispersing element was a prism made from a single crystal of rock-salt (sodium chloride), which becomes opaque at wavelengths longer than about 15 μm; this spectral region became known as the rock-salt region. Later instruments used potassium bromide prisms to extend the range to 25 μm (400 cm−1) and caesium iodide 50 μm (200 cm−1). 
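The conversion from interferogram to spectrum described above can be illustrated with a toy calculation. The following Python sketch uses entirely synthetic data and an arbitrary sampling interval (roughly half a HeNe laser wavelength); it is not an instrument algorithm, but it shows a discrete Fourier transform recovering the wavenumbers of two lines that were mixed into a simulated interferogram.

# Toy illustration: a simulated interferogram (signal vs. path difference) is turned
# into a spectrum by a Fourier transform. All values are synthetic and chosen so the
# two lines fall exactly on transform bins.
import numpy as np

step_cm = 3.125e-5                  # assumed sampling interval of the path difference
n = 4096
opd = np.arange(n) * step_cm        # optical path difference in cm

# Pretend the source contains two sharp lines, at 1000 and 1500 cm^-1.
interferogram = np.cos(2 * np.pi * 1000 * opd) + 0.5 * np.cos(2 * np.pi * 1500 * opd)

spectrum = np.abs(np.fft.rfft(interferogram))
wavenumber = np.fft.rfftfreq(n, d=step_cm)   # frequency axis in cm^-1

for idx in spectrum.argsort()[-2:]:
    print(round(wavenumber[idx]), "cm^-1")   # recovers the two line positions, 1500 and 1000 cm^-1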
The region beyond 50 μm (200 cm−1) became known as the far-infrared region; at very long wavelengths it merges into the microwave region. Measurements in the far infrared needed the development of accurately ruled diffraction gratings to replace the prisms as dispersing elements, since salt crystals are opaque in this region. More sensitive detectors than the bolometer were required because of the low energy of the radiation. One such was the Golay detector. An additional issue is the need to exclude atmospheric water vapour because water vapour has an intense pure rotational spectrum in this region. Far-infrared spectrophotometers were cumbersome, slow and expensive. The advantages of the Michelson interferometer were well-known, but considerable technical difficulties had to be overcome before a commercial instrument could be built. Also an electronic computer was needed to perform the required Fourier transform, and this only became practicable with the advent of minicomputers, such as the PDP-8, which became available in 1965. Digilab pioneered the world's first commercial FTIR spectrometer (Model FTS-14) in 1969. Digilab FTIRs are now a part of Agilent Technologies's molecular product line after Agilent acquired spectroscopy business from Varian. Michelson interferometer In a Michelson interferometer adapted for FTIR, light from the polychromatic infrared source, approximately a black-body radiator, is collimated and directed to a beam splitter. Ideally 50% of the light is refracted towards the fixed mirror and 50% is transmitted towards the moving mirror. Light is reflected from the two mirrors back to the beam splitter and some fraction of the original light passes into the sample compartment. There, the light is focused on the sample. On leaving the sample compartment the light is refocused on to the detector. The difference in optical path length between the two arms to the interferometer is known as the retardation or optical path difference (OPD). An interferogram is obtained by varying the retardation and recording the signal from the detector for various values of the retardation. The form of the interferogram when no sample is present depends on factors such as the variation of source intensity and splitter efficiency with wavelength. This results in a maximum at zero retardation, when there is constructive interference at all wavelengths, followed by series of "wiggles". The position of zero retardation is determined accurately by finding the point of maximum intensity in the interferogram. When a sample is present the background interferogram is modulated by the presence of absorption bands in the sample. Commercial spectrometers use Michelson interferometers with a variety of scanning mechanisms to generate the path difference. Common to all these arrangements is the need to ensure that the two beams recombine exactly as the system scans. The simplest systems have a plane mirror that moves linearly to vary the path of one beam. In this arrangement the moving mirror must not tilt or wobble as this would affect how the beams overlap as they recombine. Some systems incorporate a compensating mechanism that automatically adjusts the orientation of one mirror to maintain the alignment. Arrangements that avoid this problem include using cube corner reflectors instead of plane mirrors as these have the property of returning any incident beam in a parallel direction regardless of orientation. 
Systems where the path difference is generated by a rotary movement have proved very successful. One common system incorporates a pair of parallel mirrors in one beam that can be rotated to vary the path without displacing the returning beam. Another is the double pendulum design where the path in one arm of the interferometer increases as the path in the other decreases. A quite different approach involves moving a wedge of an IR-transparent material such as KBr into one of the beams. Increasing the thickness of KBr in the beam increases the optical path because the refractive index is higher than that of air. One limitation of this approach is that the variation of refractive index over the wavelength range limits the accuracy of the wavelength calibration. Measuring and processing the interferogram The interferogram has to be measured from zero path difference to a maximum length that depends on the resolution required. In practice the scan can be on either side of zero, resulting in a double-sided interferogram. Mechanical design limitations may mean that for the highest resolution the scan runs to the maximum OPD on one side of zero only. The interferogram is converted to a spectrum by Fourier transformation. This requires it to be stored in digital form as a series of values at equal intervals of the path difference between the two beams. To measure the path difference a laser beam is sent through the interferometer, generating a sinusoidal signal where the separation between successive maxima is equal to the wavelength of the laser (typically a 633 nm HeNe laser is used). This can trigger an analog-to-digital converter to measure the IR signal each time the laser signal passes through zero. Alternatively, the laser and IR signals can be measured synchronously at smaller intervals, with the IR signal at points corresponding to the laser signal zero crossings being determined by interpolation. This approach allows the use of analog-to-digital converters that are more accurate and precise than converters that can be triggered, resulting in lower noise. The result of Fourier transformation is a spectrum of the signal at a series of discrete wavelengths. The range of wavelengths that can be used in the calculation is limited by the separation of the data points in the interferogram. The shortest wavelength that can be recognized is twice the separation between these data points. For example, with one point per wavelength of a HeNe reference laser at 0.633 μm (about 15,800 cm−1), the shortest wavelength would be 1.266 μm (about 7,900 cm−1). Because of aliasing, any energy at shorter wavelengths would be interpreted as coming from longer wavelengths and so has to be minimized optically or electronically. The spectral resolution, i.e. the separation between wavelengths that can be distinguished, is determined by the maximum OPD. The wavelengths used in calculating the Fourier transform are such that an exact number of wavelengths fit into the length of the interferogram from zero to the maximum OPD, as this makes their contributions orthogonal. This results in a spectrum with points separated by equal frequency intervals. For a maximum path difference d, adjacent wavelengths λ1 and λ2 will have n and (n + 1) cycles, respectively, in the interferogram. The corresponding frequencies are ν1 and ν2:
d = nλ1 and d = (n + 1)λ2
λ1 = d/n and λ2 = d/(n + 1)
ν1 = 1/λ1 and ν2 = 1/λ2
ν1 = n/d and ν2 = (n + 1)/d
ν2 − ν1 = 1/d
The separation is the inverse of the maximum OPD.
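The sampling and resolution relations above amount to two one-line calculations, shown here for the HeNe example and an assumed 2 cm maximum OPD (the same numbers used in the next paragraph):

# Quick check of the sampling and resolution relations (values assumed from the
# 633 nm HeNe example and a 2 cm maximum OPD).
laser_wavelength_um = 0.633                        # HeNe reference laser
shortest_wavelength_um = 2 * laser_wavelength_um   # one sample per laser wavelength
print(shortest_wavelength_um, "um  ~", round(1e4 / shortest_wavelength_um), "cm^-1")
# -> 1.266 um  ~ 7899 cm^-1

max_opd_cm = 2.0                      # maximum optical path difference
resolution_cm1 = 1.0 / max_opd_cm     # separation = 1/d
print("spectral resolution:", resolution_cm1, "cm^-1")   # -> 0.5 cm^-1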
For example, a maximum OPD of 2 cm results in a separation of 0.5 cm−1. This is the spectral resolution in the sense that the value at one point is independent of the values at adjacent points. Most instruments can be operated at different resolutions by choosing different OPDs. Instruments for routine analyses typically have a best resolution of around 0.5 cm−1, while spectrometers have been built with resolutions as high as 0.001 cm−1, corresponding to a maximum OPD of 10 m. The point in the interferogram corresponding to zero path difference has to be identified, commonly by assuming it is where the maximum signal occurs. This so-called centerburst is not always symmetrical in real-world spectrometers, so a phase correction may have to be calculated. The interferogram signal decays as the path difference increases, the rate of decay being inversely related to the width of features in the spectrum. If the OPD is not large enough to allow the interferogram signal to decay to a negligible level, there will be unwanted oscillations or sidelobes associated with the features in the resulting spectrum. To reduce these sidelobes the interferogram is usually multiplied by a function that approaches zero at the maximum OPD. This so-called apodization reduces the amplitude of any sidelobes and also the noise level, at the expense of some reduction in resolution. For rapid calculation the number of points in the interferogram has to equal a power of two. A string of zeroes may be added to the measured interferogram to achieve this. More zeroes may be added in a process called zero filling to improve the appearance of the final spectrum, although there is no improvement in resolution. Alternatively, interpolation after the Fourier transform gives a similar result. Advantages There are three principal advantages of an FT spectrometer compared to a scanning (dispersive) spectrometer. The multiplex or Fellgett's advantage (named after Peter Fellgett). This arises from the fact that information from all wavelengths is collected simultaneously. It results in a higher signal-to-noise ratio for a given scan-time for observations limited by a fixed detector noise contribution (typically in the thermal infrared spectral region where a photodetector is limited by generation-recombination noise). For a spectrum with m resolution elements, this increase is equal to the square root of m. Alternatively, it allows a shorter scan-time for a given resolution. In practice multiple scans are often averaged, increasing the signal-to-noise ratio by the square root of the number of scans. The throughput or Jacquinot's advantage (named after Pierre Jacquinot). This results from the fact that in a dispersive instrument, the monochromator has entrance and exit slits which restrict the amount of light that passes through it. The interferometer throughput is determined only by the diameter of the collimated beam coming from the source. Although no slits are needed, FTIR spectrometers do require an aperture to restrict the convergence of the collimated beam in the interferometer. This is because convergent rays are modulated at different frequencies as the path difference is varied. Such an aperture is called a Jacquinot stop. For a given resolution and wavelength this circular aperture allows more light through than a slit, resulting in a higher signal-to-noise ratio. The wavelength accuracy or Connes' advantage (named after Janine Connes). The wavelength scale is calibrated by a laser beam of known wavelength that passes through the interferometer.
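The effect of apodization and zero filling described above can be demonstrated on synthetic data. The sketch below is illustrative only; the triangular window and the 4x zero filling are arbitrary choices, not a recommendation for any particular instrument.

# Sketch of apodization and zero filling on a synthetic interferogram.
import numpy as np

n_points = 1024
x = np.arange(n_points)                        # sample index, proportional to OPD
interferogram = np.cos(2 * np.pi * 0.12 * x)   # a single cosine = one spectral line

# Triangular apodization: the weight falls to zero at the maximum OPD, damping sidelobes.
apodized = interferogram * (1 - x / n_points)

# Zero filling to 4x the measured length interpolates the plotted spectrum
# but adds no real resolution.
spec_raw = np.abs(np.fft.rfft(interferogram, n=4 * n_points))
spec_apod = np.abs(np.fft.rfft(apodized, n=4 * n_points))

peak = int(spec_raw.argmax())
sidelobes = np.ones(spec_raw.shape, dtype=bool)
sidelobes[peak - 32 : peak + 32] = False       # exclude the main lobe
print("relative sidelobe level, no apodization:", spec_raw[sidelobes].max() / spec_raw.max())
print("relative sidelobe level, triangular:    ", spec_apod[sidelobes].max() / spec_apod.max())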
This is much more stable and accurate than in dispersive instruments, where the scale depends on the mechanical movement of diffraction gratings. In practice, the accuracy is limited by the divergence of the beam in the interferometer, which depends on the resolution. Another minor advantage is less sensitivity to stray light, that is, radiation of one wavelength appearing at another wavelength in the spectrum. In dispersive instruments, this is the result of imperfections in the diffraction gratings and accidental reflections. In FT instruments there is no direct equivalent, as the apparent wavelength is determined by the modulation frequency in the interferometer. Resolution The interferogram is recorded in the length domain. The Fourier transform (FT) inverts the dimension, so the FT of the interferogram belongs to the reciprocal length dimension ([L−1]), that is, the dimension of wavenumber. The spectral resolution in cm−1 is equal to the reciprocal of the maximal retardation in cm. Thus a 4 cm−1 resolution will be obtained if the maximal retardation is 0.25 cm; this is typical of the cheaper FTIR instruments. Much higher resolution can be obtained by increasing the maximal retardation. This is not easy, as the moving mirror must travel in a near-perfect straight line. The use of corner-cube mirrors in place of the flat mirrors is helpful, as an outgoing ray from a corner-cube mirror is parallel to the incoming ray, regardless of the orientation of the mirror about axes perpendicular to the axis of the light beam. A spectrometer with 0.001 cm−1 resolution is now available commercially. The throughput advantage is important for high-resolution FTIR, as the monochromator in a dispersive instrument with the same resolution would have very narrow entrance and exit slits. In 1966 Janine Connes measured the temperature of the atmosphere of Venus by recording the vibration-rotation spectrum of Venusian CO2 at 0.1 cm−1 resolution. Michelson himself attempted to resolve the hydrogen Hα emission band in the spectrum of a hydrogen atom into its two components by using his interferometer. Motivation FTIR is a method of measuring infrared absorption and emission spectra. For a discussion of why people measure infrared absorption and emission spectra, i.e. why and how substances absorb and emit infrared light, see the article: Infrared spectroscopy. Components IR sources FTIR spectrometers are mostly used for measurements in the mid and near IR regions. For the mid-IR region, 2−25 μm (5,000–400 cm−1), the most common source is a silicon carbide (SiC) element heated to about 1,200 °C (a Globar). The output is similar to a blackbody. Shorter wavelengths of the near-IR, 1−2.5 μm (10,000–4,000 cm−1), require a higher temperature source, typically a tungsten-halogen lamp. The long wavelength output of these is limited to about 5 μm (2,000 cm−1) by the absorption of the quartz envelope. Detectors Mid-IR spectrometers commonly use pyroelectric detectors that respond to changes in temperature as the intensity of IR radiation falling on them varies. For the far-IR, especially at wavelengths beyond 50 μm (200 cm−1), a mercury discharge lamp gives higher output than a thermal source. The sensitive elements in these detectors are either deuterated triglycine sulfate (DTGS) or lithium tantalate (LiTaO3). These detectors operate at ambient temperatures and provide adequate sensitivity for most routine applications. To achieve the best sensitivity the time for a scan is typically a few seconds.
Cooled photoelectric detectors are employed for situations requiring higher sensitivity or faster response. Liquid nitrogen cooled mercury cadmium telluride (MCT) detectors are the most widely used in the mid-IR. With these detectors an interferogram can be measured in as little as 10 milliseconds. Uncooled indium gallium arsenide photodiodes or DTGS are the usual choices in near-IR systems. Very sensitive liquid-helium-cooled silicon or germanium bolometers are used in the far-IR where both sources and beamsplitters are inefficient. Beam splitter An ideal beam-splitter transmits and reflects 50% of the incident radiation. However, as any material has a limited range of optical transmittance, several beam-splitters may be used interchangeably to cover a wide spectral range. In a simple Michelson interferometer, one beam passes twice through the beamsplitter but the other passes through only once. To correct for this, an additional compensator plate of equal thickness is incorporated. For the mid-IR region, the beamsplitter is usually made of KBr with a germanium-based coating that makes it semi-reflective. KBr absorbs strongly at wavelengths beyond 25 μm (400 cm−1), so CsI or KRS-5 are sometimes used to extend the range to about 50 μm (200 cm−1). ZnSe is an alternative where moisture vapour can be a problem, but is limited to about 20 μm (500 cm−1). CaF2 is the usual material for the near-IR, being both harder and less sensitive to moisture than KBr, but cannot be used beyond about 8 μm (1,200 cm−1). Far-IR beamsplitters are mostly based on polymer films, and cover a limited wavelength range. Attenuated total reflectance Attenuated total reflectance (ATR) is one accessory of FTIR spectrophotometer to measure surface properties of solid or thin film samples rather than their bulk properties. Generally, ATR has a penetration depth of around 1 or 2 micrometers depending on sample conditions. Fourier transform The interferogram in practice consists of a set of intensities measured for discrete values of retardation. The difference between successive retardation values is constant. Thus, a discrete Fourier transform is needed. The fast Fourier transform (FFT) algorithm is used. Spectral range Far-infrared The first FTIR spectrometers were developed for far-infrared range. The reason for this has to do with the mechanical tolerance needed for good optical performance, which is related to the wavelength of the light being used. For the relatively long wavelengths of the far infrared, ~10 μm tolerances are adequate, whereas for the rock-salt region tolerances have to be better than 1 μm. A typical instrument was the cube interferometer developed at the NPL and marketed by Grubb Parsons. It used a stepper motor to drive the moving mirror, recording the detector response after each step was completed. Mid-infrared With the advent of cheap microcomputers it became possible to have a computer dedicated to controlling the spectrometer, collecting the data, doing the Fourier transform and presenting the spectrum. This provided the impetus for the development of FTIR spectrometers for the rock-salt region. The problems of manufacturing ultra-high precision optical and mechanical components had to be solved. A wide range of instruments are now available commercially. Although instrument design has become more sophisticated, the basic principles remain the same. 
Nowadays, the moving mirror of the interferometer moves at a constant velocity, and sampling of the interferogram is triggered by finding zero-crossings in the fringes of a secondary interferometer lit by a helium–neon laser. In modern FTIR systems the constant mirror velocity is not strictly required, as long as the laser fringes and the original interferogram are recorded simultaneously with higher sampling rate and then re-interpolated on a constant grid, as pioneered by James W. Brault. This confers very high wavenumber accuracy on the resulting infrared spectrum and avoids wavenumber calibration errors. Near-infrared The near-infrared region spans the wavelength range between the rock-salt region and the start of the visible region at about 750 nm. Overtones of fundamental vibrations can be observed in this region. It is used mainly in industrial applications such as process control and chemical imaging. Applications FTIR can be used in all applications where a dispersive spectrometer was used in the past (see external links). In addition, the improved sensitivity and speed have opened up new areas of application. Spectra can be measured in situations where very little energy reaches the detector. Fourier transform infrared spectroscopy is used in geology, chemistry, materials, botany and biology research fields. Nano and biological materials FTIR is also used to investigate various nanomaterials and proteins in hydrophobic membrane environments. Studies show the ability of FTIR to directly determine the polarity at a given site along the backbone of a transmembrane protein. The bond features involved with various organic and inorganic nanomaterials and their quantitative analysis can be done with the help of FTIR. Microscopy and imaging An infrared microscope allows samples to be observed and spectra measured from regions as small as 5 microns across. Images can be generated by combining a microscope with linear or 2-D array detectors. The spatial resolution can approach 5 microns with tens of thousands of pixels. The images contain a spectrum for each pixel and can be viewed as maps showing the intensity at any wavelength or combination of wavelengths. This allows the distribution of different chemical species within the sample to be seen. This technique has been applied in various biological applications including the analysis of tissue sections as an alternative to conventional histopathology, examining the homogeneity of pharmaceutical tablets, and for differentiating morphologically-similar pollen grains. Nanoscale and spectroscopy below the diffraction limit The spatial resolution of FTIR can be further improved below the micrometer scale by integrating it into scanning near-field optical microscopy platform. The corresponding technique is called nano-FTIR and allows for performing broadband spectroscopy on materials in ultra-small quantities (single viruses and protein complexes) and with 10 to 20 nm spatial resolution. FTIR as detector in chromatography The speed of FTIR allows spectra to be obtained from compounds as they are separated by a gas chromatograph. However this technique is little used compared to GC-MS (gas chromatography-mass spectrometry) which is more sensitive. The GC-IR method is particularly useful for identifying isomers, which by their nature have identical masses. Liquid chromatography fractions are more difficult because of the solvent present. 
One notable exception is to measure chain branching as a function of molecular size in polyethylene using gel permeation chromatography, which is possible using chlorinated solvents that have no absorption in the area in question. TG-IR (thermogravimetric analysis-infrared spectrometry) Measuring the gas evolved as a material is heated allows qualitative identification of the species to complement the purely quantitative information provided by measuring the weight loss. Water content determination in plastics and composites FTIR analysis is used to determine water content in fairly thin plastic and composite parts, more commonly in the laboratory setting. Such FTIR methods have long been used for plastics, and became extended for composite materials in 2018, when the method was introduced by Krauklis, Gagani and Echtermeyer. FTIR method uses the maxima of the absorbance band at about 5,200 cm−1 which correlates with the true water content in the material.
Physical sciences
Spectroscopy
Chemistry
1268562
https://en.wikipedia.org/wiki/Purified%20water
Purified water
Purified water is water that has been mechanically filtered or processed to remove impurities and make it suitable for use. Distilled water was, formerly, the most common form of purified water, but, in recent years, water is more frequently purified by other processes including capacitive deionization, reverse osmosis, carbon filtering, microfiltration, ultrafiltration, ultraviolet oxidation, or electrodeionization. Combinations of a number of these processes have come into use to produce ultrapure water of such high purity that its trace contaminants are measured in parts per billion (ppb) or parts per trillion (ppt). Purified water has many uses, largely in the production of medications, in science and engineering laboratories and industries, and is produced in a range of purities. It is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain product consistency. It can be produced on-site for immediate use or purchased in containers. Purified water in colloquial English can also refer to water that has been treated ("rendered potable") to neutralize, but not necessarily remove contaminants considered harmful to humans or animals. Parameters of water purity Purified water is usually produced by the purification of drinking water or ground water. The impurities that may need to be removed are: inorganic ions (typically monitored as electrical conductivity or resistivity or specific tests) organic compounds (typically monitored as TOC or by specific tests) bacteria (monitored by total viable counts or epifluorescence) endotoxins and nucleases (monitored by LAL or specific enzyme tests) particulates (typically controlled by filtration) gases (typically managed by degassing when required) Purification methods Distillation Distilled water is produced by a process of distillation. Distillation involves boiling the water and then condensing the vapor into a clean container, leaving solid contaminants behind. Distillation produces very pure water. A white or yellowish mineral scale is left in the distillation apparatus, which requires regular cleaning. Distilled water, like all purified water, must be stored in a sterilized container to guarantee the absence of bacteria. For many procedures, more economical alternatives are available, such as deionized water, and are used in place of distilled water. Double distillation Double-distilled water (abbreviated "ddH2O", "Bidest. water" or "DDW") is prepared by slow boiling the uncontaminated condensed water vapor from a prior slow boiling. Historically, it was the de facto standard for highly purified laboratory water for biochemistry and used in laboratory trace analysis until combination purification methods of water purification became widespread. Deionization Deionized water (DI water, DIW or de-ionized water), often synonymous with demineralized water / DM water, is water that has had almost all of its mineral ions removed, such as cations like sodium, calcium, iron, and copper, and anions such as chloride and sulfate. Deionization is a chemical process that uses specially manufactured ion-exchange resins, which exchange hydrogen and hydroxide ions for dissolved minerals, and then recombine to form water. Because most non-particulate water impurities are dissolved salts, deionization produces highly pure water that is generally similar to distilled water, with the advantage that the process is quicker and does not build up scale. 
However, deionization does not significantly remove uncharged organic molecules, viruses, or bacteria, except by incidental trapping in the resin. Specially made strong base anion resins can remove Gram-negative bacteria. Deionization can be done continuously and inexpensively using electrodeionization. Three types of deionization exist: co-current, counter-current, and mixed bed. Co-current deionization Co-current deionization refers to the original downflow process where both input water and regeneration chemicals enter at the top of an ion-exchange column and exit at the bottom. Co-current operating costs are comparatively higher than those of counter-current deionization because of the additional usage of regenerants. Because regenerant chemicals are dilute when they encounter the bottom or finishing resins in an ion-exchange column, the product quality is lower than that of a similarly sized counter-flow column. The process is still used, and can be improved by fine-tuning the flow of regenerants within the ion exchange column. Counter-current deionization Counter-current deionization comes in two forms, each requiring engineered internals: Upflow columns, where input water enters from the bottom and regenerants enter from the top of the ion exchange column. Upflow regeneration, where water enters from the top and regenerants enter from the bottom. In both cases, separate distribution headers (input water, input regenerant, exit water, and exit regenerant) must be tuned to the input water quality and flow, the time of operation between regenerations, and the desired product water analysis. Counter-current deionization is the more attractive method of ion exchange. Chemicals (regenerants) flow in the opposite direction to the service flow. Less time for regeneration is required when compared to co-current columns. The quality of the finished product can be as low as 0.5 parts per million. The main advantage of counter-current deionization is the low operating cost, due to the low usage of regenerants during the regeneration process. Mixed bed deionization Mixed bed deionization is a 40/60 mixture of cation and anion resin combined in a single ion-exchange column. With proper pretreatment, product water purified from a single pass through a mixed bed ion exchange column is the purest that can be made. Most commonly, mixed bed demineralizers are used for final water polishing to clean the last few ions within water prior to use. Small mixed bed deionization units have no regeneration capability. Commercial mixed bed deionization units have elaborate internal water and regenerant distribution systems for regeneration. A control system operates pumps and valves for the regeneration of spent anion and cation resins within the ion exchange column. Each is regenerated separately, then remixed during the regeneration process. Because of the high quality of product water achieved, and because of the expense and difficulty of regeneration, mixed bed demineralizers are used only when the highest purity water is required. Softening Softening consists of preventing the possible precipitation of poorly soluble minerals from natural water due to changes occurring in the physico-chemical conditions (such as pCO2, pH, and Eh). It is applied when poorly soluble ions present in water might precipitate as insoluble salts (e.g., CaCO3), or interact with a chemical process. The water is "softened" by exchanging the poorly soluble divalent cations (mainly Ca2+ and Mg2+) for the soluble Na+ cation.
Softened water therefore has a higher electrical conductivity than deionized water. Softened water cannot be considered truly demineralized water, but it no longer contains the cations responsible for the hardness of water and for the formation of limescale, a hard chalky deposit essentially consisting of CaCO3 that builds up inside kettles, hot water boilers, and pipework. Demineralization In the strict sense, the term demineralization should imply removing all dissolved mineral species from water: thus not only removing dissolved salts as obtained by simple deionization, but also neutral dissolved species such as dissolved iron hydroxides (e.g., Fe(OH)3) or dissolved silica (SiO2), two solutes often present in water. In this way, demineralized water has the same electrical conductivity as deionized water, but is purer because it does not contain non-ionized substances, i.e. neutral solutes. However, demineralized water is often used interchangeably with deionized water and can also be confused with softened water, depending on the exact definition used: removing only the cations liable to precipitate as insoluble minerals (hence "demineralization"), or removing all the "mineral species" present in water, and thus not only dissolved ions but also neutral solute species. So, the term demineralized water is vague, and deionized water or softened water should often be preferred in its place for more clarity. Other processes Other processes are also used to purify water, including reverse osmosis, carbon filtration, microporous filtration, ultrafiltration, ultraviolet oxidation, or electrodialysis. These are used in place of, or in addition to, the processes listed above. Processes rendering water potable but not necessarily closer to being pure H2O / hydroxide + hydronium ions include the use of dilute sodium hypochlorite, ozone, mixed-oxidants (electro-catalyzed H2O + NaCl), and iodine; see the discussion regarding potable water treatments under "Health effects" below. Uses Purified water is suitable for many applications, including autoclaves, hand-pieces, laboratory testing, laser cutting, and automotive use. Purification removes contaminants that may interfere with processes, or leave residues on evaporation. Although water is generally considered to be a good electrical conductor—for example, domestic electrical systems are considered particularly hazardous to people if they may be in contact with wet surfaces—pure water is a poor conductor. The conductivity of water is measured in siemens per meter (S/m). Sea-water is typically 5 S/m, drinking water is typically in the range of 5–50 mS/m, while highly purified water can be as low as 5.5 μS/m (0.055 μS/cm), a ratio of about 1,000,000:1,000:1. Purified water is used in the pharmaceutical industry. Water of this grade is widely used as a raw material, ingredient, and solvent in the processing, formulation, and manufacture of pharmaceutical products, active pharmaceutical ingredients (APIs) and intermediates, compendial articles, and analytical reagents. The microbiological content of the water is of importance and the water must be regularly monitored and tested to show that it remains within microbiological control. Purified water is also used in the commercial beverage industry as the primary ingredient of any given trademarked bottling formula, in order to maintain critical consistency of taste, clarity, and color. This guarantees the consumer a reliably safe and satisfying drink.
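The conductivity comparison quoted above can be checked with a two-line calculation; the values below are taken directly from the text, and the quoted ratio is only an order-of-magnitude statement.

# Order-of-magnitude check of the conductivity ratio quoted in the text.
sea_water = 5.0           # S/m
drinking_water = 5e-3     # S/m (low end of the 5-50 mS/m range)
highly_purified = 5.5e-6  # S/m

print(round(sea_water / highly_purified, -3))    # ~909,000 -> about 1,000,000
print(round(drinking_water / highly_purified))   # ~909     -> about 1,000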
In the process prior to filling and sealing, individual bottles are always rinsed with deionised water to remove any particles that could cause a change in taste. Deionised and distilled water are used in lead–acid batteries to prevent erosion of the cells, although deionised water is the better choice as more impurities are removed from the water in the creation process. Laboratory use Technical standards on water quality have been established by a number of professional organizations, including the American Chemical Society (ACS), ASTM International, the U.S. National Committee for Clinical Laboratory Standards (NCCLS) which is now CLSI, and the U.S. Pharmacopeia (USP). The ASTM, NCCLS, and ISO 3696 or the International Organization for Standardization classify purified water into Grade 1–3 or Types I–IV depending on the level of purity. These organizations have similar, although not identical, parameters for highly purified water. Note that the European Pharmacopeia uses Highly Purified Water (HPW) as a definition for water meeting the quality of Water For Injection, without however having undergone distillation. In the laboratory context, highly purified water is used to denominate various qualities of water having been "highly" purified. Regardless of which organization's water quality norm is used, even Type I water may require further purification depending on the specific laboratory application. For example, water that is being used for molecular-biology experiments needs to be DNase or RNase-free, which requires special additional treatment or functional testing. Water for microbiology experiments needs to be completely sterile, which is usually accomplished by autoclaving. Water used to analyze trace metals may require the elimination of trace metals to a standard beyond that of the Type I water norm. Criticism A member of the ASTM D19 (Water) Committee, Erich L. Gibbs, criticized ASTM Standard D1193, by saying "Type I water could be almost anything – water that meets some or all of the limits, part or all of the time, at the same or different points in the production process." Electrical conductivity Completely de-gassed ultrapure water has a conductivity of 1.2 × 10−4 S/m, whereas on equilibration to the atmosphere it is 7.5 × 10−5 S/m due to dissolved CO2 in it. The highest grades of ultrapure water should not be stored in glass or plastic containers because these container materials leach (release) contaminants at very low concentrations. Storage vessels made of silica are used for less-demanding applications and vessels of ultrapure tin are used for the highest-purity applications. It is worth noting that, although electrical conductivity only indicates the presence of ions, the majority of common contaminants found naturally in water ionize to some degree. This ionization is a good measure of the efficacy of a filtration system, and more expensive systems incorporate conductivity-based alarms to indicate when filters should be refreshed or replaced. For comparison, seawater has a conductivity of perhaps 5 S/m (53 mS/cm is quoted), while normal un-purified tap water may have conductivity of 5 × 10−3 S/m (50 μS/cm) (to within an order of magnitude), which is still about 2 or 3 orders of magnitude higher than the output from a well-functioning demineralizing or distillation mechanism, so low levels of contamination or declining performance are easily detected. 
Industrial uses Some industrial processes, notably in the semiconductor and pharmaceutical industries, need large amounts of very pure water. In these situations, feedwater is first processed into purified water and then further processed to produce ultrapure water. Another class of ultrapure water used in the pharmaceutical industry is called Water for Injection (WFI), typically generated by multiple distillation or by a vapor-compression process from DI water or RO-DI water. It has a tighter bacterial limit of 10 CFU per 100 mL, instead of the 100 CFU per mL specified by the USP for purified water. Other uses Distilled or deionized water is commonly used to top up the lead–acid batteries used in cars and trucks and for other applications. The presence of foreign ions commonly found in tap water will drastically shorten the lifespan of a lead–acid battery. Distilled or deionized water is preferable to tap water for use in automotive cooling systems. Using deionised or distilled water in appliances that evaporate water, such as steam irons and humidifiers, can reduce the build-up of mineral scale, which shortens appliance life. Some appliance manufacturers say that deionised water is no longer necessary. Purified water is used in freshwater and marine aquariums. Since it does not contain impurities such as copper and chlorine, it helps to keep fish free from diseases and avoids the build-up of algae on aquarium plants due to its lack of phosphate and silicate. Deionized water should be re-mineralized before use in aquaria, since it lacks many macro- and micro-nutrients needed by plants and fish. Water (sometimes mixed with methanol) has been used to extend the performance of aircraft engines. In piston engines, it acts to delay the onset of engine knocking. In turbine engines, it allows more fuel flow for a given turbine temperature limit and increases mass flow. As an example, it was used on early Boeing 707 models. Advanced materials and engineering have since rendered such systems obsolete for new designs; however, spray-cooling of the incoming air charge is still used to a limited extent with off-road turbo-charged engines (road-race track cars). Deionized water is very often used as an ingredient in many cosmetics and pharmaceuticals. "Aqua" is the standard name for water in the International Nomenclature of Cosmetic Ingredients standard, which is mandatory on product labels in some countries. Because of its high relative dielectric constant (~80), deionized water is also used (for short durations, when the resistive losses are acceptable) as a high voltage dielectric in many pulsed power applications, such as the Sandia National Laboratories Z Machine. Distilled water can be used in PC water-cooling systems and laser marking systems. The lack of impurities in the water means that the system stays clean and prevents a buildup of bacteria and algae. Also, the low conductance reduces the risk of electrical damage in the event of a leak. However, deionized water has been known to cause cracks in brass and copper fittings. When used as a rinse after washing cars, windows, and similar applications, purified water dries without leaving spots caused by dissolved solutes. Deionized water is used in water-fog fire-extinguishing systems used in sensitive environments, such as where high-voltage electrical and sensitive electronic equipment is used. The 'sprinkler' nozzles use much finer spray jets than other systems and operate at up to 35 MPa (350 bar; 5,000 psi) of pressure.
The extremely fine mist produced takes the heat out of fire rapidly, and the fine droplets of water are nonconducting (when deionized) and are less likely to damage sensitive equipment. Deionized water, however, is inherently acidic, and contaminants (such as copper, dust, stainless and carbon steel, and many other common materials) rapidly supply ions, thus re-ionizing the water. It is not generally considered acceptable to spray water on electrical circuits that are powered, and it is generally considered undesirable to use water in electrical contexts. Distilled or purified water is used in humidors to prevent cigars from collecting bacteria, mold, and contaminants, as well as to prevent residue from forming on the humidifier material. Window cleaners using water-fed pole systems also use purified water because it enables the windows to dry by themselves leaving no stains or smears. The use of purified water from water-fed poles also prevents the need for using ladders and therefore ensure compliance with Work at Height Legislation in the UK. Mineral consumption Distillation removes all minerals from water, and the membrane methods of reverse osmosis and nanofiltration remove most, or virtually all, minerals. This results in demineralized water, which has not been proven to be healthier than drinking water. The World Health Organization investigated the health effects of demineralized water in 1980, and found that demineralized water increased diuresis and the elimination of electrolytes, with decreased serum potassium concentration. Magnesium, calcium and other nutrients in water may help to protect against nutritional deficiency. Recommendations for magnesium have been put at a minimum of 10 mg/L with 20–30 mg/L optimum; for calcium a 20 mg/L minimum and a 40–80 mg/L optimum, and a total water hardness (adding magnesium and calcium) of 2–4 mmol/L. For fluoride, the concentration recommended for dental health is 0.5–1.0 mg/L, with a maximum guideline value of 1.5 mg/L to avoid dental fluorosis. Municipal water supplies often add or have trace impurities at levels that are regulated to be safe for consumption. Much of these additional impurities, such as volatile organic compounds, fluoride, and an estimated 75,000+ other chemical compounds are not removed through conventional filtration; however, distillation and reverse osmosis eliminate nearly all of these impurities. Artificial seawater Atmospheric water generator Electrodeionization Heavy water Hydrogen production Milli-Q water Ultrapure water Water for injection Water ionizer Water softening Water purification
Physical sciences
Water
Chemistry
1268939
https://en.wikipedia.org/wiki/General-purpose%20computing%20on%20graphics%20processing%20units
General-purpose computing on graphics processing units
General-purpose computing on graphics processing units (GPGPU, or less often GPGP) is the use of a graphics processing unit (GPU), which typically handles computation only for computer graphics, to perform computation in applications traditionally handled by the central processing unit (CPU). The use of multiple video cards in one computer, or large numbers of graphics chips, further parallelizes the already parallel nature of graphics processing. Essentially, a GPGPU pipeline is a kind of parallel processing between one or more GPUs and CPUs that analyzes data as if it were in image or other graphic form. While GPUs operate at lower frequencies, they typically have many times the number of cores. Thus, GPUs can process far more pictures and graphical data per second than a traditional CPU. Migrating data into graphical form and then using the GPU to scan and analyze it can create a large speedup. GPGPU pipelines were developed at the beginning of the 21st century for graphics processing (e.g. for better shaders). These pipelines were found to fit scientific computing needs well and have since been developed in this direction. The best-known GPGPU accelerators are the Nvidia Tesla line used in Nvidia DGX systems, alongside AMD Instinct and Intel Gaudi. History In principle, any arbitrary Boolean function, including addition, multiplication, and other mathematical functions, can be built up from a functionally complete set of logic operators. In 1987, Conway's Game of Life became one of the first examples of general-purpose computing using an early stream processor called a blitter to invoke a special sequence of logical operations on bit vectors. General-purpose computing on GPUs became more practical and popular after about 2001, with the advent of both programmable shaders and floating-point support on graphics processors. Notably, problems involving matrices and/or vectors, especially two-, three-, or four-dimensional vectors, were easy to translate to a GPU, which acts with native speed and support on those types. A significant milestone for GPGPU was the year 2003, when two research groups independently discovered GPU-based approaches for solving general linear algebra problems on GPUs that ran faster than on CPUs. These early efforts to use GPUs as general-purpose processors required reformulating computational problems in terms of graphics primitives, as supported by the two major APIs for graphics processors, OpenGL and DirectX. This cumbersome translation was obviated by the advent of general-purpose programming languages and APIs such as Sh/RapidMind, Brook and Accelerator. These were followed by Nvidia's CUDA, which allowed programmers to ignore the underlying graphical concepts in favor of more common high-performance computing concepts. Newer, hardware-vendor-independent offerings include Microsoft's DirectCompute and Apple/Khronos Group's OpenCL. This means that modern GPGPU pipelines can leverage the speed of a GPU without requiring full and explicit conversion of the data to a graphical form. Mark Harris, the founder of GPGPU.org, coined the term GPGPU. Implementations Any language that allows the code running on the CPU to poll a GPU shader for return values can create a GPGPU framework. Programming standards for parallel computing include OpenCL (vendor-independent), OpenACC, OpenMP and OpenHMPP. OpenCL is the dominant open general-purpose GPU computing language, and is an open standard defined by the Khronos Group. 
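To illustrate the CPU/GPU division of labor that such frameworks expose, the following is a minimal sketch in CUDA C of a data-parallel vector addition; the kernel name, array size, and launch configuration are arbitrary choices made for this example, and error checking is omitted.

#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vector_add(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;                       // one million elements
    size_t bytes = n * sizeof(float);
    float *a = (float *)malloc(bytes), *b = (float *)malloc(bytes), *c = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Allocate device memory and copy the inputs from the host to the GPU.
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, bytes);
    cudaMalloc((void **)&d_b, bytes);
    cudaMalloc((void **)&d_c, bytes);
    cudaMemcpy(d_a, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, bytes, cudaMemcpyHostToDevice);

    // Launch enough threads to cover all n elements, then copy the result back.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(d_a, d_b, d_c, n);
    cudaMemcpy(c, d_c, bytes, cudaMemcpyDeviceToHost);

    printf("c[0] = %f\n", c[0]);                 // expected: 3.000000
    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);
    free(a); free(b); free(c);
    return 0;
}

The CPU side only prepares the data, launches the kernel over the whole array, and reads the result back; the per-element work itself runs on the GPU, which is exactly the pattern the graphics-free APIs described above are designed to express.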
OpenCL provides a cross-platform GPGPU platform that additionally supports data-parallel compute on CPUs. OpenCL is actively supported on Intel, AMD, Nvidia, and ARM platforms. The Khronos Group has also standardised and implemented SYCL, a higher-level programming model for OpenCL as a single-source domain-specific embedded language based on pure C++11. The dominant proprietary framework is Nvidia CUDA. Nvidia launched CUDA in 2006, a software development kit (SDK) and application programming interface (API) that allows the C programming language to be used to code algorithms for execution on GeForce 8 series and later GPUs. ROCm, launched in 2016, is AMD's open-source response to CUDA. As of 2022, it is roughly on par with CUDA in terms of features, but still lacking in consumer support. OpenVIDIA was developed at the University of Toronto between 2003 and 2005, in collaboration with Nvidia. Altimesh Hybridizer, created by Altimesh, compiles Common Intermediate Language to CUDA binaries. It supports generics and virtual functions. Debugging and profiling are integrated with Visual Studio and Nsight. It is available as a Visual Studio extension on the Visual Studio Marketplace. Microsoft introduced the DirectCompute GPU computing API, released with the DirectX 11 API. Alea GPU, created by QuantAlea, introduces native GPU computing capabilities for the Microsoft .NET languages F# and C#. Alea GPU also provides a simplified GPU programming model based on GPU parallel-for and parallel aggregate using delegates and automatic memory management. MATLAB supports GPGPU acceleration using the Parallel Computing Toolbox and MATLAB Distributed Computing Server, and third-party packages like Jacket. GPGPU processing is also used to simulate Newtonian physics in physics engines; commercial implementations include Havok Physics/FX and PhysX, both of which are typically used for computer and video games. C++ Accelerated Massive Parallelism (C++ AMP) is a library that accelerates execution of C++ code by exploiting the data-parallel hardware on GPUs. Mobile computers As mobile GPUs have grown more powerful, general-purpose programming has also become available on mobile devices running major mobile operating systems. Google Android 4.2 enabled running RenderScript code on the mobile device's GPU. RenderScript has since been deprecated in favour of first OpenGL compute shaders and later Vulkan Compute. OpenCL is available on many Android devices, but is not officially supported by Android. Apple introduced the proprietary Metal API for iOS applications, able to execute arbitrary code through Apple's GPU compute shaders. Hardware support Computer video cards are produced by various vendors, such as Nvidia and AMD. Cards from such vendors differ in the data formats they support, such as integer and floating-point formats (32-bit and 64-bit). Microsoft introduced a Shader Model standard to help rank the various features of graphics cards into a simple Shader Model version number (1.0, 2.0, 3.0, etc.). Integer numbers Pre-DirectX 9 video cards only supported paletted or integer color types. Sometimes another alpha value is added, to be used for transparency. Common formats are: 8 bits per pixel – Sometimes palette mode, where each value is an index in a table with the real color value specified in one of the other formats. Sometimes three bits for red, three bits for green, and two bits for blue. 16 bits per pixel – Usually the bits are allocated as five bits for red, six bits for green, and five bits for blue. 
24 bits per pixel – There are eight bits for each of red, green, and blue. 32 bits per pixel – There are eight bits for each of red, green, blue, and alpha. Floating-point numbers For early fixed-function or limited-programmability graphics (i.e., up to and including DirectX 8.1-compliant GPUs) this was sufficient because this is also the representation used in displays. This representation does have certain limitations. Given sufficient graphics processing power, even graphics programmers would like to use better formats, such as floating-point data formats, to obtain effects such as high-dynamic-range imaging. Many GPGPU applications require floating-point accuracy, which came with video cards conforming to the DirectX 9 specification. DirectX 9 Shader Model 2.x suggested the support of two precision types: full and partial precision. Full precision support could either be FP32 or FP24 (floating point 32- or 24-bit per component) or greater, while partial precision was FP16. ATI's Radeon R300 series of GPUs supported FP24 precision only in the programmable fragment pipeline (although FP32 was supported in the vertex processors), while Nvidia's NV30 series supported both FP16 and FP32; other vendors such as S3 Graphics and XGI supported a mixture of formats up to FP24. The implementations of floating point on Nvidia GPUs are mostly IEEE compliant; however, this is not true across all vendors. This has implications for correctness that are considered important for some scientific applications. While 64-bit floating-point values (double-precision floats) are commonly available on CPUs, these are not universally supported on GPUs. Some GPU architectures sacrifice IEEE compliance, while others lack double precision. Efforts have been made to emulate double-precision floating-point values on GPUs; however, the speed tradeoff negates any benefit of offloading the computation onto the GPU in the first place. Vectorization Most operations on the GPU operate in a vectorized fashion: one operation can be performed on up to four values at once. For example, if one color is to be modulated by another color, the GPU can produce the resulting color in one operation. This functionality is useful in graphics because almost every basic data type is a vector (either 2-, 3-, or 4-dimensional). Examples include vertices, colors, normal vectors, and texture coordinates. Many other applications can put this to good use, and because of their higher performance, vector instructions, termed single instruction, multiple data (SIMD), have long been available on CPUs. GPU vs. CPU Originally, data was simply passed one-way from a central processing unit (CPU) to a graphics processing unit (GPU), then to a display device. As time progressed, however, it became valuable for GPUs to store at first simple, then complex structures of data to be passed back to the CPU, which analyzed an image or a set of scientific data represented in a 2D or 3D format that a video card can understand. Because the GPU has access to every draw operation, it can analyze data in these forms quickly, whereas a CPU must poll every pixel or data element much more slowly, as the speed of access between a CPU and its larger pool of random-access memory (or, in an even worse case, a hard drive) is slower than that of GPUs and video cards, which typically contain smaller amounts of more expensive memory that is much faster to access. 
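To make the earlier vectorization point concrete, here is a small CUDA sketch in which each thread modulates one pixel's color by another using the four-component float4 type, so all four channels are expressed in a single vector-typed statement (the kernel and variable names are invented for this example, and the snippet is meant to be compiled with nvcc):

// Each thread multiplies one source color by one tint color, channel by channel.
__global__ void modulate_colors(const float4 *src, const float4 *tint,
                                float4 *dst, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float4 a = src[i];
        float4 b = tint[i];
        // Red, green, blue, and alpha are handled together as one vector-typed value.
        dst[i] = make_float4(a.x * b.x, a.y * b.y, a.z * b.z, a.w * b.w);
    }
}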
Transferring the portion of the data set to be actively analyzed to GPU memory in the form of textures or other easily readable GPU forms results in a speed increase. The distinguishing feature of a GPGPU design is the ability to transfer information bidirectionally back from the GPU to the CPU; ideally, the data throughput in both directions is high, resulting in a multiplier effect on the speed of a specific high-use algorithm. GPGPU pipelines may improve efficiency on especially large data sets and/or data containing 2D or 3D imagery. GPGPU is used in complex graphics pipelines as well as in scientific computing, particularly in fields with large data sets, such as genome mapping, and in fields where two- or three-dimensional analysis is useful, such as biomolecule analysis, protein study, and other problems in complex organic chemistry. An example of such an application is Nvidia's software suite for genome analysis. Such pipelines can also vastly improve efficiency in image processing and computer vision, among other fields, as well as in parallel processing generally. Some very heavily optimized pipelines have yielded speed increases of several hundred times the original CPU-based pipeline on one high-use task. A simple example would be a GPU program that collects data about average lighting values as it renders some view from either a camera or a computer graphics program back to the main program on the CPU, so that the CPU can then make adjustments to the overall screen view. A more advanced example might use edge detection to return both numerical information and a processed image representing outlines to a computer vision program controlling, say, a mobile robot. Because the GPU has fast and local hardware access to every pixel or other picture element in an image, it can analyze and average it (for the first example) or apply a Sobel edge filter or other convolution filter (for the second) with much greater speed than a CPU, which typically must access slower random-access-memory copies of the graphic in question. GPGPU is fundamentally a software concept, not a hardware concept; it is a type of algorithm, not a piece of equipment. Specialized equipment designs may, however, even further enhance the efficiency of GPGPU pipelines, which traditionally perform relatively few algorithms on very large amounts of data. Massively parallelized, gigantic-data-level tasks thus may be parallelized even further via specialized setups such as rack computing (many similar, highly tailored machines built into a rack), which adds a third layer: many computing units, each using many CPUs that correspond to many GPUs. Some Bitcoin "miners" used such setups for high-quantity processing. Caches Historically, CPUs have used hardware-managed caches, but earlier GPUs provided only software-managed local memories. However, as GPUs are increasingly used for general-purpose applications, state-of-the-art GPUs are being designed with hardware-managed multi-level caches, which have helped the GPUs move towards mainstream computing. For example, GeForce 200 series GT200-architecture GPUs did not feature an L2 cache, while the Fermi GPU has 768 KiB of last-level cache, the Kepler GPU has 1.5 MiB, the Maxwell GPU has 2 MiB, and the Pascal GPU has 4 MiB. Register file GPUs have very large register files, which allow them to reduce context-switching latency. 
Register file size is also increasing over different GPU generations; for example, the total register file sizes on Maxwell (GM200), Pascal, and Volta GPUs are 6 MiB, 14 MiB, and 20 MiB, respectively. By comparison, the size of a register file on CPUs is small, typically tens or hundreds of kilobytes. Energy efficiency The high performance of GPUs comes at the cost of high power consumption, which under full load can be as much as that of the rest of the PC system combined. The maximum power consumption of the Pascal series GPU (Tesla P100) was specified to be 250 W. Classical GPGPU Before CUDA was published in 2007, GPGPU was "classical" and involved repurposing graphics primitives. A standard structure of such a computation was: (1) load arrays into textures, (2) draw a quadrangle, (3) apply pixel shaders and textures to the quadrangle, and (4) read out the pixel values in the quadrangle as an array. More examples are available in part 4 of GPU Gems 2. Linear algebra The use of GPUs for numerical linear algebra began at least in 2001. GPUs have been used for Gauss–Seidel solvers, conjugate gradients, and other methods. Stream processing GPUs are designed specifically for graphics and thus are very restrictive in operations and programming. Due to their design, GPUs are only effective for problems that can be solved using stream processing, and the hardware can only be used in certain ways. The following discussion, referring to vertices, fragments and textures, concerns mainly the legacy model of GPGPU programming, where graphics APIs (OpenGL or DirectX) were used to perform general-purpose computation. With the introduction of the CUDA (Nvidia, 2007) and OpenCL (vendor-independent, 2008) general-purpose computing APIs, new GPGPU code no longer needs to map the computation to graphics primitives. The stream-processing nature of GPUs remains valid regardless of the APIs used. GPUs can only process independent vertices and fragments, but can process many of them in parallel. This is especially effective when the programmer wants to process many vertices or fragments in the same way. In this sense, GPUs are stream processors: processors that can operate in parallel by running one kernel on many records in a stream at once. A stream is simply a set of records that require similar computation. Streams provide data parallelism. Kernels are the functions that are applied to each element in the stream. In GPUs, vertices and fragments are the elements in streams, and vertex and fragment shaders are the kernels to be run on them. For each element we can only read from the input, perform operations on it, and write to the output. It is permissible to have multiple inputs and multiple outputs, but never a piece of memory that is both readable and writable. Arithmetic intensity is defined as the number of operations performed per word of memory transferred. It is important for GPGPU applications to have high arithmetic intensity; otherwise, memory access latency will limit the computational speedup. Ideal GPGPU applications have large data sets, high parallelism, and minimal dependency between data elements. 
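As a worked illustration of arithmetic intensity (the figures below follow directly from the definition above; the SAXPY kernel and the idealized data-movement assumptions are choices made for this example, not something discussed in the article):

  arithmetic intensity = (operations performed) / (words of memory transferred)

  SAXPY, y[i] = a*x[i] + y[i]: 2 operations per element (one multiply, one add),
  3 words moved (read x[i], read y[i], write y[i])  ->  intensity = 2/3

  dense n-by-n matrix multiply, assuming each matrix is moved only once:
  about 2n^3 operations on about 3n^2 words  ->  intensity = 2n/3, which grows with n

Under these assumptions the first kernel is memory-bound on a GPU, while the second can approach the hardware's arithmetic peak for large n.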
GPU programming concepts Computational resources There are a variety of computational resources available on the GPU: Programmable processors – vertex, primitive, fragment and, mainly, compute pipelines that allow the programmer to run kernels on streams of data; Rasterizer – creates fragments and interpolates per-vertex constants such as texture coordinates and color; Texture unit – read-only memory interface; Framebuffer – write-only memory interface. In fact, a program can substitute a write-only texture for output instead of the framebuffer. This is done either through Render to Texture (RTT), Render-To-Backbuffer-Copy-To-Texture (RTBCTT), or the more recent stream-out. Textures as stream The most common form for a stream to take in GPGPU is a 2D grid because this fits naturally with the rendering model built into GPUs. Many computations naturally map into grids: matrix algebra, image processing, physically based simulation, and so on. Since textures are used as memory, texture lookups are then used as memory reads. Certain operations can be done automatically by the GPU because of this. Kernels Compute kernels can be thought of as the bodies of loops. For example, a programmer operating on a grid on the CPU might have code that looks like this:

// Input and output grids have 10000 x 10000 or 100 million elements.
void transform_10k_by_10k_grid(float in[10000][10000], float out[10000][10000])
{
    for (int x = 0; x < 10000; x++) {
        for (int y = 0; y < 10000; y++) {
            // The next line is executed 100 million times
            out[x][y] = do_some_hard_work(in[x][y]);
        }
    }
}

On the GPU, the programmer only specifies the body of the loop as the kernel and what data to loop over by invoking geometry processing. Flow control In sequential code it is possible to control the flow of the program using if-then-else statements and various forms of loops. Such flow control structures have only recently been added to GPUs. Conditional writes could be performed using a properly crafted series of arithmetic/bit operations, but looping and conditional branching were not possible. Recent GPUs allow branching, but usually with a performance penalty. Branching should generally be avoided in inner loops, whether in CPU or GPU code, and various methods, such as static branch resolution, pre-computation, predication, loop splitting, and Z-cull, can be used to achieve branching when hardware support does not exist. GPU methods Map The map operation simply applies the given function (the kernel) to every element in the stream. A simple example is multiplying each value in the stream by a constant (increasing the brightness of an image). The map operation is simple to implement on the GPU. The programmer generates a fragment for each pixel on screen and applies a fragment program to each one. The result stream of the same size is stored in the output buffer. Reduce Some computations require calculating a smaller stream (possibly a stream of only one element) from a larger stream. This is called a reduction of the stream. Generally, a reduction can be performed in multiple steps. The results from the prior step are used as the input for the current step, and the range over which the operation is applied is reduced until only one stream element remains. Stream filtering Stream filtering is essentially a non-uniform reduction. Filtering involves removing items from the stream based on some criteria. 
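The multi-step reduction described above can be sketched in CUDA as follows: each kernel launch collapses the input to one partial sum per block, and the host relaunches the kernel on those partial sums until a single element remains. This is an illustrative sketch only; the kernel and variable names are invented, and error handling is omitted.

// One block reduces up to 2 * blockDim.x input elements to a single partial sum.
__global__ void reduce_sum(const float *in, float *out, int n) {
    extern __shared__ float sdata[];
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x * blockDim.x * 2 + threadIdx.x;

    float v = 0.0f;
    if (i < n)              v += in[i];
    if (i + blockDim.x < n) v += in[i + blockDim.x];
    sdata[tid] = v;
    __syncthreads();

    // Tree reduction in shared memory: the active range halves at each step.
    for (unsigned int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s)
            sdata[tid] += sdata[tid + s];
        __syncthreads();
    }
    if (tid == 0)
        out[blockIdx.x] = sdata[0];   // one partial sum per block
}

// Host side (sketch): keep launching on the partial sums until one value remains.
// int threads = 256;
// while (n > 1) {
//     int blocks = (n + threads * 2 - 1) / (threads * 2);
//     reduce_sum<<<blocks, threads, threads * sizeof(float)>>>(d_in, d_out, n);
//     float *tmp = d_in; d_in = d_out; d_out = tmp;   // partial sums feed the next pass
//     n = blocks;
// }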
Scan The scan operation, also termed parallel prefix sum, takes in a vector (stream) of data elements and an (arbitrary) associative binary function '+' with an identity element 'i'. If the input is [a0, a1, a2, a3, ...], an exclusive scan produces the output [i, a0, a0 + a1, a0 + a1 + a2, ...], while an inclusive scan produces the output [a0, a0 + a1, a0 + a1 + a2, a0 + a1 + a2 + a3, ...] and does not require an identity to exist. While at first glance the operation may seem inherently serial, efficient parallel scan algorithms are possible and have been implemented on graphics processing units. The scan operation has uses in e.g., quicksort and sparse matrix-vector multiplication. Scatter The scatter operation is most naturally defined on the vertex processor. The vertex processor is able to adjust the position of the vertex, which allows the programmer to control where information is deposited on the grid. Other extensions are also possible, such as controlling how large an area the vertex affects. The fragment processor cannot perform a direct scatter operation because the location of each fragment on the grid is fixed at the time of the fragment's creation and cannot be altered by the programmer. However, a logical scatter operation may sometimes be recast or implemented with another gather step. A scatter implementation would first emit both an output value and an output address. An immediately following gather operation uses address comparisons to see whether the output value maps to the current output slot. In dedicated compute kernels, scatter can be performed by indexed writes. Gather Gather is the reverse of scatter. After scatter reorders elements according to a map, gather can restore the order of the elements according to the map scatter used. In dedicated compute kernels, gather may be performed by indexed reads. In other shaders, it is performed with texture-lookups. Sort The sort operation transforms an unordered set of elements into an ordered set of elements. The most common implementation on GPUs is using radix sort for integer and floating point data and coarse-grained merge sort and fine-grained sorting networks for general comparable data. Search The search operation allows the programmer to find a given element within the stream, or possibly find neighbors of a specified element. Mostly the search method used is binary search on sorted elements. 
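In modern compute kernels, the gather and scatter operations described above reduce to indexed reads and indexed writes, as in this brief CUDA sketch (names are invented for the example; a real scatter must either guarantee that the index map has no duplicate targets or use atomic operations):

// Gather: each output element pulls from an arbitrary input location.
__global__ void gather(const float *in, const int *idx, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[idx[i]];          // indexed read
}

// Scatter: each input element pushes to an arbitrary output location.
__global__ void scatter(const float *in, const int *idx, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[idx[i]] = in[i];          // indexed write
}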
Data structures A variety of data structures can be represented on the GPU:
Dense arrays
Sparse matrices (sparse arrays) – static or dynamic
Adaptive structures (union type)
Applications The following are some of the areas where GPUs have been used for general-purpose computing:
Automatic parallelization
Physically based simulation and physics engines (usually based on Newtonian physics models), including Conway's Game of Life, cloth simulation, and incompressible fluid flow by solution of the Euler equations or the Navier–Stokes equations
Statistical physics, including the Ising model
Lattice gauge theory
Segmentation, 2D and 3D
Level set methods
CT reconstruction
Fast Fourier transform
GPU learning – machine learning and data mining computations, e.g., with the software BIDMach
k-nearest neighbor algorithm
Fuzzy logic
Tone mapping
Audio signal processing – audio and sound effects processing (using a GPU for digital signal processing), analog signal processing, and speech processing
Digital image processing
Video processing – hardware-accelerated video decoding and post-processing, including motion compensation, inverse discrete cosine transform (iDCT), variable-length decoding (VLD, Huffman coding), inverse quantization (IQ, not to be confused with intelligence quotient), in-loop deblocking, bitstream processing (CAVLC/CABAC, using special-purpose hardware because this is a serial task not suited to regular GPGPU computation), deinterlacing (spatial–temporal), noise reduction, edge enhancement, and color correction; also hardware-accelerated video encoding and pre-processing
Global illumination – ray tracing, photon mapping, radiosity, and subsurface scattering, among others
Geometric computing – constructive solid geometry, distance fields, collision detection, transparency computation, and shadow generation
Scientific computing – Monte Carlo simulation of light propagation, weather forecasting, climate research, molecular modeling, quantum mechanical physics, and astrophysics
Bioinformatics
Computational finance
Medical imaging
Clinical decision support systems (CDSS)
Computer vision
Digital signal processing
Control engineering
Operations research – for example, GPU implementations of the tabu search algorithm for the resource-constrained project scheduling problem and of an algorithm for the nurse scheduling problem, both freely available on GitHub
Neural networks
Database operations
Computational fluid dynamics, especially using lattice Boltzmann methods
Cryptography and cryptanalysis – implementations of MD6, the Advanced Encryption Standard (AES), the Data Encryption Standard (DES), RSA, and elliptic curve cryptography (ECC); password cracking; and cryptocurrency transaction processing ("mining", such as Bitcoin mining)
Performance modeling of computationally intensive tasks on the GPU
Electronic design automation
Antivirus software
Intrusion detection
Increasing computing power for distributed computing projects such as SETI@home and Einstein@home
Bioinformatics GPGPU usage in bioinformatics includes molecular dynamics software. Reported speedups are highly dependent on system configuration; GPU performance is compared against a multi-core x86 CPU socket, is benchmarked on GPU-supported features, and may be a kernel-to-kernel comparison, with figures taken from Nvidia in-house testing or ISV documentation. Nvidia recommends specific Quadro and Tesla GPUs for these applications; check with the developer or ISV for certification information.
Technology
Computer science
null
1268958
https://en.wikipedia.org/wiki/Function%20of%20a%20real%20variable
Function of a real variable
In mathematical analysis, and applications in geometry, applied mathematics, engineering, and natural sciences, a function of a real variable is a function whose domain is the real numbers , or a subset of that contains an interval of positive length. Most real functions that are considered and studied are differentiable in some interval. The most widely considered such functions are the real functions, which are the real-valued functions of a real variable, that is, the functions of a real variable whose codomain is the set of real numbers. Nevertheless, the codomain of a function of a real variable may be any set. However, it is often assumed to have a structure of -vector space over the reals. That is, the codomain may be a Euclidean space, a coordinate vector, the set of matrices of real numbers of a given size, or an -algebra, such as the complex numbers or the quaternions. The structure -vector space of the codomain induces a structure of -vector space on the functions. If the codomain has a structure of -algebra, the same is true for the functions. The image of a function of a real variable is a curve in the codomain. In this context, a function that defines curve is called a parametric equation of the curve. When the codomain of a function of a real variable is a finite-dimensional vector space, the function may be viewed as a sequence of real functions. This is often used in applications. Real function A real function is a function from a subset of to where denotes as usual the set of real numbers. That is, the domain of a real function is a subset , and its codomain is It is generally assumed that the domain contains an interval of positive length. Basic examples For many commonly used real functions, the domain is the whole set of real numbers, and the function is continuous and differentiable at every point of the domain. One says that these functions are defined, continuous and differentiable everywhere. This is the case of: All polynomial functions, including constant functions and linear functions Sine and cosine functions Exponential function Some functions are defined everywhere, but not continuous at some points. For example The Heaviside step function is defined everywhere, but not continuous at zero. Some functions are defined and continuous everywhere, but not everywhere differentiable. For example The absolute value is defined and continuous everywhere, and is differentiable everywhere, except for zero. The cubic root is defined and continuous everywhere, and is differentiable everywhere, except for zero. Many common functions are not defined everywhere, but are continuous and differentiable everywhere where they are defined. For example: A rational function is a quotient of two polynomial functions, and is not defined at the zeros of the denominator. The tangent function is not defined for where is any integer. The logarithm function is defined only for positive values of the variable. Some functions are continuous in their whole domain, and not differentiable at some points. This is the case of: The square root is defined only for nonnegative values of the variable, and not differentiable at 0 (it is differentiable for all positive values of the variable). General definition A real-valued function of a real variable is a function that takes as input a real number, commonly represented by the variable x, for producing another real number, the value of the function, commonly denoted f(x). 
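In symbols, the definition just given reads as follows (a standard restatement, using the square-root example that the next passage mentions):

  f : X → ℝ,  X ⊆ ℝ;  for example, f(x) = √x with natural domain X = [0, +∞).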
For simplicity, in this article a real-valued function of a real variable will be simply called a function. To avoid any ambiguity, the other types of functions that may occur will be explicitly specified. Some functions are defined for all real values of the variables (one says that they are everywhere defined), but some other functions are defined only if the value of the variable is taken in a subset X of , the domain of the function, which is always supposed to contain an interval of positive length. In other words, a real-valued function of a real variable is a function such that its domain X is a subset of that contains an interval of positive length. A simple example of a function in one variable could be: which is the square root of x. Image The image of a function is the set of all values of when the variable x runs in the whole domain of . For a continuous (see below for a definition) real-valued function with a connected domain, the image is either an interval or a single value. In the latter case, the function is a constant function. The preimage of a given real number y is the set of the solutions of the equation . Domain The domain of a function of several real variables is a subset of that is sometimes explicitly defined. In fact, if one restricts the domain X of a function f to a subset Y ⊂ X, one gets formally a different function, the restriction of f to Y, which is denoted f|Y. In practice, it is often not harmful to identify f and f|Y, and to omit the subscript |Y. Conversely, it is sometimes possible to enlarge naturally the domain of a given function, for example by continuity or by analytic continuation. This means that it is not worthy to explicitly define the domain of a function of a real variable. Algebraic structure The arithmetic operations may be applied to the functions in the following way: For every real number r, the constant function , is everywhere defined. For every real number r and every function f, the function has the same domain as f (or is everywhere defined if r = 0). If f and g are two functions of respective domains X and Y such that contains an open subset of , then and are functions that have a domain containing . It follows that the functions of n variables that are everywhere defined and the functions of n variables that are defined in some neighbourhood of a given point both form commutative algebras over the reals (-algebras). One may similarly define which is a function only if the set of the points in the domain of f such that contains an open subset of . This constraint implies that the above two algebras are not fields. Continuity and limit Until the second part of 19th century, only continuous functions were considered by mathematicians. At that time, the notion of continuity was elaborated for the functions of one or several real variables a rather long time before the formal definition of a topological space and a continuous map between topological spaces. As continuous functions of a real variable are ubiquitous in mathematics, it is worth defining this notion without reference to the general notion of continuous maps between topological space. 
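Written out in the usual ε–δ notation (a standard statement of the conditions that the following passage develops in prose), continuity of f at an interior point a of its domain X, and the limit of f at a, read:

  f is continuous at a  ⇔  for every ε > 0 there is a δ > 0 such that |f(x) − f(a)| < ε for all x in X with |x − a| < δ;

  f(x) → L as x → a  ⇔  for every ε > 0 there is a δ > 0 such that |f(x) − L| < ε for all x in X with |x − a| < δ.

Note that the limit condition here is not restricted to x ≠ a; with this form, a limit at an interior point of the domain exists exactly when the function is continuous there, as stated below.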
For defining the continuity, it is useful to consider the distance function of , which is an everywhere defined function of 2 real variables: A function f is continuous at a point which is interior to its domain, if, for every positive real number , there is a positive real number such that for all such that In other words, may be chosen small enough for having the image by f of the interval of radius centered at contained in the interval of length centered at A function is continuous if it is continuous at every point of its domain. The limit of a real-valued function of a real variable is as follows. Let a be a point in topological closure of the domain X of the function f. The function, f has a limit L when x tends toward a, denoted if the following condition is satisfied: For every positive real number ε > 0, there is a positive real number δ > 0 such that for all x in the domain such that If the limit exists, it is unique. If a is in the interior of the domain, the limit exists if and only if the function is continuous at a. In this case, we have When a is in the boundary of the domain of f, and if f has a limit at a, the latter formula allows to "extend by continuity" the domain of f to a. Calculus One can collect a number of functions each of a real variable, say into a vector parametrized by x: The derivative of the vector y is the vector derivatives of fi(x) for i = 1, 2, ..., n: One can also perform line integrals along a space curve parametrized by x, with position vector r = r(x), by integrating with respect to the variable x: where · is the dot product, and x = a and x = b are the start and endpoints of the curve. Theorems With the definitions of integration and derivatives, key theorems can be formulated, including the fundamental theorem of calculus, integration by parts, and Taylor's theorem. Evaluating a mixture of integrals and derivatives can be done by using theorem differentiation under the integral sign. Implicit functions A real-valued implicit function of a real variable is not written in the form "y = f(x)". Instead, the mapping is from the space 2 to the zero element in (just the ordinary zero 0): and is an equation in the variables. Implicit functions are a more general way to represent functions, since if: then we can always define: but the converse is not always possible, i.e. not all implicit functions have the form of this equation. One-dimensional space curves in n Formulation Given the functions , , ..., all of a common variable t, so that: or taken together: then the parametrized n-tuple, describes a one-dimensional space curve. Tangent line to curve At a point for some constant t = c, the equations of the one-dimensional tangent line to the curve at that point are given in terms of the ordinary derivatives of r1(t), r2(t), ..., rn(t), and r with respect to t: Normal plane to curve The equation of the n-dimensional hyperplane normal to the tangent line at r = a is: or in terms of the dot product: where are points in the plane, not on the space curve. Relation to kinematics The physical and geometric interpretation of dr(t)/dt is the "velocity" of a point-like particle moving along the path r(t), treating r as the spatial position vector coordinates parametrized by time t, and is a vector tangent to the space curve for all t in the instantaneous direction of motion. At t = c, the space curve has a tangent vector , and the hyperplane normal to the space curve at t = c is also normal to the tangent at t = c. 
Any vector in this plane (p − a) must be normal to the tangent vector dr(c)/dt. Similarly, d²r(t)/dt² is the "acceleration" of the particle, and is a vector normal to the curve directed along the radius of curvature. Matrix valued functions A matrix can also be a function of a single variable. For example, the rotation matrix in 2d, R(θ) = [[cos θ, −sin θ], [sin θ, cos θ]], is a matrix-valued function of the rotation angle θ about the origin. Similarly, in special relativity, the Lorentz transformation matrix for a pure boost (without rotations) along the x-direction, Λ(β) = [[γ, −γβ, 0, 0], [−γβ, γ, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]] with γ = 1/√(1 − β²), is a function of the boost parameter β = v/c, in which v is the relative velocity between the frames of reference (a continuous variable), and c is the speed of light, a constant. Banach and Hilbert spaces and quantum mechanics Generalizing the previous section, the output of a function of a real variable can also lie in a Banach space or a Hilbert space. In these spaces, division, multiplication and limits are all defined, so notions such as derivative and integral still apply. This occurs especially often in quantum mechanics, where one takes the derivative of a ket or an operator. This occurs, for instance, in the general time-dependent Schrödinger equation, iħ (d/dt)|ψ(t)⟩ = Ĥ|ψ(t)⟩, where one takes the derivative of a wave function, which can be an element of several different Hilbert spaces. Complex-valued function of a real variable A complex-valued function of a real variable may be defined by relaxing, in the definition of the real-valued functions, the restriction of the codomain to the real numbers, and allowing complex values. If f(x) is such a complex-valued function, it may be decomposed as f(x) = g(x) + ih(x), where g and h are real-valued functions. In other words, the study of the complex-valued functions reduces easily to the study of pairs of real-valued functions. Cardinality of sets of functions of a real variable The cardinality of the set of real-valued functions of a real variable, card(ℝ^ℝ), is 2^𝔠, which is strictly larger than the cardinality of the continuum (i.e., of the set of all real numbers). This fact is easily verified by cardinal arithmetic: card(ℝ^ℝ) = 𝔠^𝔠 = (2^ℵ₀)^𝔠 = 2^(ℵ₀·𝔠) = 2^𝔠. Furthermore, if X is a set such that 2 ≤ card(X) ≤ 𝔠, then the cardinality of the set X^ℝ is also 2^𝔠, since card(2^ℝ) ≤ card(X^ℝ) ≤ card(ℝ^ℝ) and card(2^ℝ) = 2^𝔠. However, the set of continuous functions C(ℝ) has a strictly smaller cardinality, the cardinality of the continuum, 𝔠. This follows from the fact that a continuous function is completely determined by its values on a dense subset of its domain. Thus, the cardinality of the set of continuous real-valued functions on the reals is no greater than the cardinality of the set of real-valued functions of a rational variable. By cardinal arithmetic: card(C(ℝ)) ≤ card(ℝ^ℚ) = 𝔠^ℵ₀ = (2^ℵ₀)^ℵ₀ = 2^ℵ₀ = 𝔠. On the other hand, since there is a clear bijection between ℝ and the set of constant functions, which forms a subset of C(ℝ), card(C(ℝ)) ≥ 𝔠 must also hold. Hence, card(C(ℝ)) = 𝔠.
Mathematics
Functions: General
null
1269039
https://en.wikipedia.org/wiki/Bioaugmentation
Bioaugmentation
Biological augmentation is the addition of archaea or bacterial cultures required to speed up the rate of degradation of a contaminant. Organisms that originate from contaminated areas may already be able to break down waste, but perhaps inefficiently and slowly. Bioaugmentation is a type of bioremediation in which it requires studying the indigenous varieties present in the location to determine if biostimulation is possible. After discovering the indigenous bacteria found in the location, if the indigenous bacteria can metabolize the contaminants, more of the indigenous bacterial cultures will be implemented into the location to boost the degradation of the contaminants. Bioaugmentation is the introduction of more archaea or bacterial cultures to enhance the contaminant degradation whereas biostimulation is the addition of nutritional supplements for the indigenous bacteria to promote the bacterial metabolism. If the indigenous variety do not have the metabolic capability to perform the remediation process, exogenous varieties with such sophisticated pathways are introduced. The utilization of bioaugmentation provides advancement in the fields of microbial ecology and biology, immobilization, and bioreactor design. Bioaugmentation is commonly used in municipal wastewater treatment to restart activated sludge bioreactors. Most cultures available contain microbial cultures, already containing all necessary microorganisms (B. licheniformis, B. thuringiensis, P. polymyxa, B. stearothermophilus, Penicillium sp., Aspergillus sp., Flavobacterium, Arthrobacter, Pseudomonas, Streptomyces, Saccharomyces, etc.). Activated sludge systems are generally based on microorganisms like bacteria, protozoa, nematodes, rotifers, and fungi, which are capable of degrading biodegradable organic matter. There are many positive outcomes from the use of bioaugmentation, such as the improvement in efficiency and speed of the process of breaking down substances and the reduction of toxic particles in an area. Applications Soil remediation Bioaugmentation is favorable in contaminated soils that have undergone bioremediation, but still pose an environmental risk. This is because microorganisms that were originally in the environment did not accomplish their task during bioremediation when it came to breaking down chemicals in the contaminated soil. The failure of original bacteria can be caused by environmental stresses, as well as changes in the microbial population due to mutation rates. When microorganisms are added, they are potentially more suited to the nature of the new contaminant, meanwhile the older microorganisms are similar to the older pollution and contamination. However, this is merely one of many factors; site size is also a very important determinant. In order to see whether bioaugmentation should be implemented, the overall setting must be considered. Also, some highly specialized microorganisms are not capable of adapting to certain site settings. Availability of certain microorganism types (as used for bioremediation) may also be a problem. Although bioaugmentation may appear to be a perfect solution for contaminated soil, it can have drawbacks. For example, the wrong type of bacteria can result in potentially clogged aquifers, or the remediation result may be incomplete or unsatisfactory. 
Bioaugmentation of chlorinated solvents At sites where soil and groundwater are contaminated with chlorinated ethenes, such as tetrachloroethylene and trichloroethylene, bioaugmentation can be used to ensure that the in situ microorganisms can completely degrade these contaminants to ethylene and chloride, which are non-toxic. Bioaugmentation is typically only applicable to bioremediation of chlorinated ethenes, although there are emerging cultures with the potential to biodegrade other compounds including BTEX, chloroethanes, chloromethanes, and MTBE. The first reported application of bioaugmentation for chlorinated ethenes was at Kelly Air Force Base, TX. Bioaugmentation is typically performed in conjunction with the addition of electron donor (biostimulation) to achieve geochemical conditions in groundwater that favor the growth of the dechlorinating microorganisms in the bioaugmentation culture. Niche fitness Including more microbes into an environment is beneficial to the speed of the cleanup duration. The interaction and competitions of two compounds influence the performance that a microorganism, original or new, could have. This can be tested by placing a soil that favors the new microbes into the area and then looking at the performance. The results will show if the new microorganism can perform well enough in that soil with other microorganisms. This helps to determine the correct amount of microbes and indigenous substances that are needed in order to optimize performance and create a co-metabolism. Coke plant wastewater in China An example of how bioaugmentation has improved an environment, is in the coke plant wastewater in China. Coal in China is used as a main energy source and the contaminated water contains harmful toxic contaminants like ammonia, thiocyanate, phenols and other organic compounds, such as mono- and polycyclic nitrogen-containing aromatics, oxygen and sulfur-containing heterocyclics and polynuclear aromatic hydrocarbons. Previous measures to treat this problem was an aerobic-anoxic-oxic system, solvent extractions, stream stripping, and biological treatment. Bioaugmentation has been reported to remove 3-chlorobenzoate, 4-methyl benzoate, toluene, phenol, and chlorinated solvents. The anaerobic reactor was packed with semi-soft media, which were constructed by plastic ring and synthetic fiber string. The anoxic reactor is a completely mixed reactor while the oxic reactor is a hybrid bioreactor in which polyurethane foam carriers were added. Water from anoxic reactor, odic reactor and sedimentation tank were used and had mix-ins of different amount of old and developed microbes with .75 concentration and 28 degree Celsius. The rate of contaminant degradation depended on the amount of microbe concentration. In the enhanced microbial community indigenous microorganisms broke down the contaminants in the coke plant wastewater, such as pyridines, and phenolic compounds. When indigenous heterotrophic microorganisms were added, they converted many large molecular compounds into smaller and simpler compounds, which could be taken from more biodegradable organic compounds. This proves that bioaugmentation could be used as a tool for the removal of unwanted compounds that are not properly removed by conventional biological treatment system. When bioaugmentation is combined with A1–A2–O system for the treatment of coke plant wastewater it is very powerful. 
Petroleum cleanup In the petroleum industry, there is a large problem with how oilfield drilling pit is disposed of. Many used to simply place dirt over the pit, but it is far more productive and economically beneficial to use bioaugmentation. With the use of advanced microbes, drilling companies can actually treat the problem in the oilfield pit instead of transferring the waste around. Specifically, polycyclic aromatic hydrocarbons can be metabolized by some bacteria, which significantly reduces environmental damage from drilling activities. Given suitable environmental conditions, microbes are placed in the oilpit to break down hydrocarbons and alongside are other nutrients. Before treatment there was a total petroleum hydrocarbon (TPH) level of 44,880 ppm, which within just 47 days the TPH was lowered to a level of 10,000 ppm to 6,486 ppm. Failures and potential solutions There have been many instances where bioaugmentation had deficiencies in its process, including the use of the wrong organism. The implementation of bioaugmentation on the environment can pose problems of predation, nutritional competition between indigenous and inoculated bacteria, insufficient inoculations, and disturbing the ecological balance due to large inoculations. Each problem can be solved using different techniques to limit the possibilities of these problems occurring. Predation can be prevented by high initial doses of the inoculated bacteria or heat treatment prior to inoculation whereas nutritional competition can be settled with biostimulation. Insufficient inoculations can be treated by repeated or continual inoculations and large inoculations are resolved with highly monitored dosages of the bacteria. Examples include the introduced bacteria fail to enhance the degradation within the soil, and the bioaugmentation trials fail on the laboratory scale, but succeed on the large scale. Many of these problems occurred because the microbial ecology issues were not taken into consideration in order to map the performance of the bioaugmentation. It is crucial to consider the microbes' ability to withstand the conditions in the microbial community to be placed in. In many of the cases that have failed, only the microbes' ability to break down compounds was considered and less their fitness in existing communities and the resulting competitive stress. It is better to identify the existing communities before looking at the strains needed to break down pollutants.
Technology
Environmental remediation
null
1270655
https://en.wikipedia.org/wiki/Myspace
Myspace
Myspace (formerly stylized as MySpace; also myspace; and sometimes my␣, with an elongated open box symbol) is a social networking service based in the United States. Launched on August 1, 2003, it was the first social network to reach a global audience and had a significant influence on technology, pop culture and music. It also played a critical role in the early growth of companies like YouTube and created a developer platform that launched companies such as Zynga, RockYou, and Photobucket, among others, to success. From 2005 to 2009, Myspace was the largest social networking site in the world. In July 2005, Myspace was acquired by News Corporation for $580 million; in June 2006, it surpassed Yahoo and Google to become the most visited website in the United States. During the 2008 fiscal year, it generated $800 million in revenue. At its peak in April 2008, Myspace had 115 million monthly visitors; by that time, the recently emergent Facebook had about the same number of visitors, but somewhat more global users than MySpace. In May 2009, Facebook surpassed Myspace in its number of unique U.S. visitors. Since then, the number of Myspace users has declined steadily despite several redesigns. By 2019, the number of monthly visitors to the site had dropped to seven million. In June 2009, Myspace employed approximately 1,600 people. In June 2011, Specific Media Group and Justin Timberlake jointly purchased the company for approximately $35 million. On February 11, 2016, it was announced that Myspace and its parent company had been purchased by Time Inc. for $87 million. On January 31, 2018, Time Inc. was in turn purchased by Meredith Corporation, and later that year, on November 4, 2019, Meredith spun off Myspace and its original holding company (Viant Technology Holding Inc.) and sold it to Viant Technology LLC. History 2003–2005: Beginnings and rise In August 2003, several eUniverse employees with Friendster accounts saw potential in its social networking features. The group decided to mimic the more popular features of the website. Within 10 days, the first version of MySpace was ready for launch, implemented using ColdFusion. A complete infrastructure of finance, human resources, technical expertise, bandwidth, and server capacity was available for the site. The project was overseen by Brad Greenspan (eUniverse's founder, chairman and CEO), who managed Chris DeWolfe (MySpace's starting CEO), Josh Berman, Tom Anderson (MySpace's starting president), and a team of programmers and resources provided by eUniverse. It was during this early period in June 2003, just prior to the birth of MySpace, that Jeffrey Edell was brought on as chairman of parent company Intermix Media. The first MySpace users were eUniverse employees. The company held contests to see who could sign up the most users. eUniverse used its 20 million users and e-mail subscribers to breathe life into MySpace and move it to the head of the pack of social networking websites. A key architect was tech expert Toan Nguyen, who helped stabilize the platform when Greenspan asked him to join the team. Co-founder and CTO Aber Whitcomb played an integral role in software architecture, utilizing the then-superior development speed of ColdFusion over other dynamic database driven server-side languages of the time. Despite having over ten times the number of developers, Friendster, which was developed in JavaServer Pages (jsp), could not keep up with the speed of development of MySpace and cfm. 
For example, users could customize the background, look and feel of pages on MySpace. MySpace originally gained users because of how easy it made to communicate with other users. Before MySpace debuted, many people communicated online through Instant Messaging or IM. However, MySpace got so popular that people started to use MySpace to message people even more than IM. This was especially true in bigger cities that had more people compared to suburbs that still used IM more. The MySpace.com domain was originally owned by YourZ.com, Inc., intended until 2002 for use as an online data storage and sharing site. By late 2003, it was transitioned from a file storage service to a social networking site. A friend who also worked in the data storage business reminded DeWolfe that he had earlier bought the MySpace.com domain. DeWolfe suggested they charge a fee for the basic MySpace service. However, Greenspan nixed the idea, believing that keeping the site free was necessary to make it a successful community. MySpace quickly gained popularity among teenagers and young adults. In February 2005, DeWolfe held talks with Mark Zuckerberg over acquiring Facebook, but rejected Zuckerberg's offer to sell Facebook to him for $75 million. Some employees of MySpace, including DeWolfe and Berman, were able to purchase equity in the property before MySpace and its parent company eUniverse (now renamed Intermix Media) were bought. 2005–2009: Purchase by News Corp. and peak years In July 2005, in one of the company's first major Internet purchases, News Corporation purchased MySpace for US$580 million. At the time of the acquisition, the company was seeing 16 million monthly users and was growing exponentially. News Corporation had beat out Viacom by offering a higher price for the website, and the purchase was seen as a good investment at the time. Within a year, MySpace had tripled in value from its purchase price. News Corporation saw the purchase as a way to capitalize on Internet advertising and drive traffic to other News Corporation properties. After the acquisition, MySpace continued its exponential growth. In January 2006, the site was signing up 200,000 new users a day. A year later, it was registering 320,000 users a day, and had overtaken Yahoo! to become the most visited website in the United States. ComScore said that a key driver of the site's success in the US was high "engagement levels", with the average MySpace user viewing over 660 pages a month. In January 2006, Fox announced plans to launch a UK version of MySpace. During 2006, MySpace launched localized versions in 11 countries across Europe, Asia and the Americas, including MySpace China with Solstice. At the time, Travis Katz, senior vice-president for international operations, reported that 30 million of the site's 90 million users were coming from outside of the United States. The 100 millionth MySpace account was created on August 9, 2006, in the Netherlands. That same month, MySpace signed a landmark advertising deal with Google that guaranteed MySpace $900 million over three years, over 55% more than the price News Corporation had paid to acquire the business. In exchange, Google received exclusive rights to provide Web search results and sponsored links on MySpace. When the deal was signed, Google chairman Eric Schmidt said, "When we looked at what was growing on the Web, all our internal metrics pointed to [MySpace] [...] It's important to move Google to where users are, and that is where user-generated content is." 
By October 2006, MySpace had grown from generating $1 million in revenue per month to $30 million per month, half of which came from the Google deal. The remaining 50% came from display advertising sold by MySpace's in-house sales team. In November 2006, Myspace announced a 50-50 joint venture with Softbank to launch the site in Japan. In mid-2007, MySpace was the largest social-networking site in every European country where it had created a local presence. By July 2007, Nielsen//NetRatings reported the company's "active reach", or the percentage of the population that visited the site, was anywhere from 10 to 15 times higher in Spain, France and Germany than for runner-up Facebook; in the United Kingdom, MySpace led Facebook by two-to-one in terms of reach. MySpace would even land deals with major corporations like Sony. In 2007 MySpace partnered with Sony BMG, a Sony record label, to put music directly on the MySpace platform. Sony became interested in MySpace as they had 110 million users and had a lot of musical artists make their start on the platform. On November 1, 2007, MySpace and Bebo joined the Google-led OpenSocial alliance, which already included Friendster, Hi5, LinkedIn, Plaxo, Ning, and Six Apart. The alliance's goal was to promote a common set of standards for software developers to write programs for social networks. Google had been unsuccessful in building its own social networking site Orkut in the American market, and was using the alliance to present a counterweight to Facebook. By late 2007 and into 2008, MySpace was considered the leading social networking site, and consistently beat out its main competitor Facebook in traffic. Initially, the emergence of Facebook did little to diminish MySpace's popularity; at the time, Facebook was targeted only at college students. At its peak, when News Corporation attempted to merge it with Yahoo! in 2007, Myspace was valued at $12 billion and had more than 300 million registered users. 2009–2016: Decline and sale by News Corporation On April 19, 2008, Facebook overtook MySpace in Alexa rankings. In May 2009, Facebook surpassed MySpace in the number of unique U.S. visitors. From that point, Myspace saw a consistent loss of membership. There are several suggested explanations for its decline, including the fact that it stuck to a "portal strategy" of building an audience around entertainment and music, whereas Facebook and Twitter continually added new features to improve the social networking experience. A former MySpace executive suggested that the $900 million three-year advertisement deal with Google, while being a short-term cash windfall, was a handicap in the long run, as it required MySpace to place even more ads on its already heavily advertised space, which made the site slow, more difficult to use and less flexible. MySpace could not experiment with its own site without forfeiting revenue, while Facebook was rolling out a new, clean site design. MySpace CEO Chris DeWolfe reported that he had to fight Fox Interactive Media's sales team, who monetized the site without regard to user experience. In 2012, Katz described how News Corporation had put significant pressure on MySpace to "focus on near-term monetization, as opposed to thinking about long-term product strategy," while Facebook focused user engagement over revenue. 
Danah Boyd, a senior researcher at Microsoft Research, noted of social networking websites that "companies might serially rise, fall, and disappear, as influential peers pull others in on the climb up—and signal to flee when it's time to get out." The volatility of social networks was exemplified in 2006, when Connecticut Attorney General Richard Blumenthal launched an investigation into children's exposure to pornography on MySpace. The resulting media frenzy and the site's lack of an effective spam filter gave the site a reputation as a "vortex of perversion". Around that time, specialized social media companies such as Twitter formed and began targeting users on MySpace, while Facebook rolled out communication tools that were seen as safe in comparison to MySpace. In addition, MySpace had particular problems with vandalism, phishing, malware, and spam, which it failed to curtail, making the site seem inhospitable. These have been cited as reasons why users, who as teenagers were MySpace's strongest audience in 2006 and 2007, migrated to Facebook, which started strongly with the 18-to-24 group (mostly college students) and has been much more successful than MySpace at attracting older users. News Corporation chairman and CEO Rupert Murdoch was said to be frustrated that MySpace never met expectations as a distribution outlet for Fox studio content and missed the US$1 billion mark in total revenues. This resulted in DeWolfe and Anderson gradually losing their status within Murdoch's inner circle of executives; DeWolfe's mentor Peter Chernin, president and COO of News Corporation, departed the company in June 2009. Former AOL executive Jonathan Miller, who joined News Corporation in charge of the digital media business, had been in the job for only three weeks when he shuffled MySpace's executive team in April 2009. MySpace president Tom Anderson stepped down, while Chris DeWolfe was replaced as CEO by former Facebook COO Owen Van Natta. A meeting at News Corporation over the direction of MySpace in March 2009 was reportedly the catalyst for that management shakeup, with the Google search deal about to expire and the departure of key personnel (Myspace's COO, SVP of engineering, and SVP of strategy) to form a startup. Furthermore, the opening of extravagant new offices around the world was questioned, as Facebook did not have similarly expensive expansion plans but still attracted international users at a rapid rate. The changes to MySpace's executive ranks were followed in June 2009 by a layoff of 37.5% of its workforce (including 30% of its U.S. employees), reducing employees from 1,600 to 1,000. The downfall of MySpace has been attributed to many factors, one of which was its user demographic and how those users reacted to the debut of Facebook. When MySpace launched, many of its users had little prior experience with the internet. Over time, many grew frustrated with MySpace's limited feature set; Facebook launched with many quality-of-life features that MySpace simply did not have, and large numbers of users began migrating from MySpace to Facebook. According to Tim Vanderhook, CEO of MySpace during its ownership by Viant, MySpace was killed by a "calculated takedown by Google over music". Vanderhook alleged that Google leveraged its recent acquisition of YouTube to divert music deals that MySpace would otherwise have secured, by getting artists to put their music on YouTube instead. 
This severely hampered MySpace, which had come to rely on content from musical artists. Vanderhook also alleged that Google used its search engine algorithm to steer users away from MySpace and towards YouTube. In 2009, MySpace implemented site redesigns in an effort to win users back. However, this may have backfired, as users generally disliked the tweaks and changes, much as they did on Facebook. In March 2011, market research figures released by Comscore suggested that Myspace had lost 10 million users between January and February 2011, and had fallen from 95 million to 63 million unique users in the previous 12 months. Myspace registered its sharpest audience declines in February 2011, as traffic fell 44% from a year earlier to 37.7 million U.S. visitors. Advertisers were reported as unwilling to commit to long-term deals with the site. In late February 2011, News Corporation officially put the site up for sale for an estimated $50–200 million. Losses from the last quarter of 2010 were $156 million, over double the previous year, which dragged down the otherwise strong results of News Corporation. The deadline for bids, May 31, 2011, passed without any above the reserve price of $100 million being submitted. It has been said that the decline in users during the preceding quarter deterred several potential suitors. On June 29, 2011, Myspace announced in an email to label partners and press that it had been acquired by Specific Media for an undisclosed sum, which was rumored to be as low as $35 million. CNN reported that the site sold for $35 million, and noted that it was "far less than the $580 million News Corp. paid for Myspace in 2005." Murdoch went on to call the Myspace purchase a "huge mistake", and Time magazine compared it to Time Warner's 2000 purchase of AOL, which likewise saw a conglomerate trying to stay ahead of the competition. Many former executives have gone on to further success after departing Myspace. 2016–2019: Time Inc. and Meredith Corporation ownership On February 11, 2016, it was announced that Myspace and its parent company had been bought by Time Inc. On January 31, 2018, Time Inc. was in turn purchased by Meredith Corporation, which went on to sell a number of Time Inc.'s assets; on November 4, 2019, it announced the sale of its equity in Viant, the parent company of Specific Media, back to Viant Technology Holding Inc. In May 2016, the data for almost 360 million Myspace accounts was offered on TheRealDeal dark market website; it included email addresses, usernames, and weakly protected passwords (unsalted SHA-1 hashes of the first 10 characters of each password, converted to lowercase). The exact data breach date is unknown, but analysis of the data suggests it was exposed around eight years before being made public, around mid-2008 to early 2009. Since 2019: Viant Technology Holding Inc. ownership In March 2019, Myspace lost all content from before 2016 after a faulty server migration. As of October 5, 2024, Myspace remains in an effectively read-only state: no new articles have been published since early 2022, although media uploads appear to work again and MySpace's official account has shown some renewed activity. However, most images on the site remain broken, and existing songs cannot be played. The terms of service of Myspace have not been changed by Viant. The privacy policy was last revised on 24 June 2024. Features From YouTube's founding in 2005, Myspace users could embed YouTube videos in their profiles. 
Considering this a competitive threat to its new Myspace Videos service, the site in late 2005 banned embedded YouTube videos from user profiles; the ban was widely protested by Myspace users, prompting the site to lift it shortly afterward. There were a variety of environments in which users could access Myspace content on their mobile phones. In early 2006, mobile phone provider Helio released a series of mobile phones utilizing a service known as Myspace Mobile to access and edit one's profile and communicate with and view the profiles of other members. Additionally, UIEvolution and Myspace developed a mobile version of Myspace for a wider range of carriers, including AT&T, Vodafone and Rogers Wireless. In August 2006, Myspace began offering classified ads, a service which grew by 33 percent during the following year. It previously had an instant messaging tool called MySpace IM. Myspace used an implementation of Telligent Community for its forum system. Music Shortly after Myspace was sold to News Corporation in 2005, the website launched a record label called MySpace Records, with JD Mangosing as CEO, in an effort to discover unknown talent on Myspace Music, a service onto which artists could upload songs, EPs and full-length albums. As of June 2014, over 53 million songs had been uploaded to the site by 14.2 million artists. Artists including My Chemical Romance, Nicki Minaj, Lily Allen, Taylor Swift, Lady Gaga, and Katy Perry gained fame and recognition through Myspace. Over eight million artists had been discovered by users through the site. In late 2007, the site launched The MySpace Transmissions, a series of live-in-studio recordings by well-known artists. On March 18, 2019, it was revealed that Myspace had lost all of its user content from launch until 2015 in a botched server migration with no backup. Over 50 million songs and 12 years' worth of content were permanently lost. In April 2019, the Internet Archive recovered 490,000 MP3s, obtained "using unknown means by an anonymous academic study conducted between 2008 and 2010". The songs, which were uploaded between 2008 and 2010, are collectively known as the "MySpace Dragon Hoard". Since early 2022, music upload and playback have been disabled on the website. MySpaceTV On May 16, 2007, Myspace partnered with news publications National Geographic, the New York Times and Reuters to provide professional visual content on its social-networking website. On June 27, 2007, Myspace launched MySpaceTV. On August 8, 2007, Myspace partnered with satire publication The Onion to provide audio, video and print content to the site. On October 22, 2007, Myspace launched its first original web series, Roommates, which was intended to give its users a television-like experience with the interactive benefits of the Internet. On February 27, 2008, TMZ launched its web channel on MySpaceTV. On April 21, 2008, Myspace signed a deal with Byron Allen's Entertainment Studios that brought programming such as the syndicated series Comics Unleashed with Byron Allen, Entertainers with Byron Allen, Beautiful Homes and Great Estates, and Designer Fashions & Runways to MySpaceTV. Redesigns On March 10, 2010, Myspace added new features including a recommendation engine for new users that suggests games, music and videos based on their previous search habits. Security on Myspace was also enhanced, amid criticism of Facebook, to position it as a safer site. 
Myspace's security settings enabled users to choose whether their content could be viewed by "friends only", users "18 and older", or "everyone". In October 2010, Myspace introduced a beta version of a new site design on a limited scale, with plans to switch all interested users to the new site in late November. Chief executive Mike Jones said the site was no longer competing with Facebook as a general social networking site; instead, it would be music-oriented and would target younger people. Jones believed most younger users would continue to use the site after the redesign, though older users might not. The goal of the redesign was to increase the number of Myspace users and the time they spent on the site. BTIG analyst Richard Greenfield said, "Most investors have written off MySpace now," and was unsure whether the changes would help the company recover. In November 2010, Myspace changed its logo to coincide with the new site design. The word "my" appears in the Helvetica font, followed by a symbol representing a space. The logo change was announced on October 8, 2010, and appeared on the site on November 11. In the same month, Myspace integrated with Facebook Connect, calling it "Mash Up with Facebook", in an announcement widely seen as a final acknowledgment of Facebook's domination of social networking. In January 2011, it was announced that the Myspace staff would be reduced by 47%. User adoption continued to decrease. In September 2012, a new redesign was announced, with no release date given; it made Myspace more visual and was apparently optimized for tablets. The redesign was publicly released on January 15, 2013; by April 2013 (and presumably before), users were able to transfer to the new Myspace design. In June 2013, the redesign deleted all previous blogs, angering many users and destroying information of lasting historical value. Key executives Corporate information Foreign versions Since early 2006, Myspace has offered the option to access the service in different regional versions. The alternative regional versions present automated content according to locality (e.g., UK users see other UK users as "Cool New People", and UK-oriented events and adverts, etc.), offer local languages other than English, or accommodate the regional differences in spelling and conventions in the English-speaking world (e.g., United States: "favorites", mm/dd/yyyy; the rest of the world: "favourites", dd/mm/yyyy). MySpace Developer Platform (MDP) On February 5, 2008, MySpace set up a developer platform allowing developers to share their ideas and write their own Myspace applications. The opening was inaugurated with a workshop at the MySpace offices in San Francisco two weeks before the official launch. The MDP is based on the OpenSocial API, which Google introduced in November 2007 to help social networks develop social and interactive widgets, and can be seen as an answer to Facebook's developer platform. The first public beta of MySpace Apps was released on March 5, 2008, with around 1,000 applications available. Myspace server infrastructure At QCon London 2008, MySpace Chief Systems Architect Dan Farino indicated that the site was sending 100 gigabits of data per second out to the Internet, 10 gigabits per second of which was HTML content; the remainder was media such as videos and pictures. 
The server infrastructure consists of over 4,500 web servers (running Windows Server 2003, IIS 6.0, ASP.NET and .NET Framework 3.5), over 1,200 cache servers (running 64-bit Windows Server 2003), and over 500 database servers (running 64-bit Windows Server 2003 and SQL Server 2005), as well as a custom distributed file system which runs on Gentoo Linux. In 2009, MySpace began migrating from HDD to SSD technology in some of its servers, resulting in space and power usage savings. Revenue model Myspace operates solely on revenue generated by advertising, as its business model includes no user-paid features. Through its site and affiliated advertising networks, the site collects data about its users and utilizes behavioral targeting to select the ads each visitor sees. On August 8, 2006, search engine Google signed a $900 million deal to provide a search facility and advertising on MySpace. Third-party content Companies such as Slide.com and RockYou launched on Myspace as widgets providing additional functionality to the site. Other sites created layouts to personalize the site and made hundreds of thousands of dollars for their owners, most of whom were in their late teens and early twenties. In November 2008, MySpace announced that user-uploaded content infringing on copyrights held by MTV and its subsidiary networks would be redistributed with advertisements to generate revenue for the companies. Acquisition of Imeem On November 18, 2009, MySpace Music acquired Imeem for less than $1 million. MySpace stated that it would transition Imeem's users and migrate their playlists over to MySpace Music. On January 15, 2010, MySpace began restoring Imeem playlists. Mobile application Along with its website redesign, Myspace also completely redesigned its mobile application. The redesigned app on the Apple App Store was released in June 2013. The app featured a tool for users to create and edit GIF images and post them to their Myspace stream. The app also allowed users to stream available "live streams" of concerts. New users were able to join Myspace from the app by signing in with Facebook or Twitter or by signing up with email. Availability The Myspace mobile app is no longer available on the Google Play Store or the Apple App Store. The mobile web app can be accessed by visiting Myspace.com from a mobile device. Radio The app once allowed users to play Myspace radio channels from the device. Users could select from genre stations, featured stations and user or artist stations. A user could build their own station by connecting and listening to songs on Myspace's desktop website. The user was given six skips per station. As of early 2022, the radio player no longer functions on Myspace.com.
Technology
Social network and blogging
null
3574554
https://en.wikipedia.org/wiki/Mechanical%20resonance
Mechanical resonance
Mechanical resonance is the tendency of a mechanical system to respond at greater amplitude when the frequency of its oscillations matches the system's natural frequency of vibration (its resonance frequency or resonant frequency) more closely than it does other frequencies. It may cause violent swaying motions and potentially catastrophic failure in improperly constructed structures including bridges, buildings and airplanes. This is a phenomenon known as resonance disaster. Avoiding resonance disasters is a major concern in every building, tower and bridge construction project. The Taipei 101 building for instance relies on a 660-ton pendulum—a tuned mass damper—to modify the response at resonance. The structure is also designed to resonate at a frequency which does not typically occur. Buildings in seismic zones are often constructed to take into account the oscillating frequencies of expected ground motion. Engineers designing objects that have engines must ensure that the mechanical resonant frequencies of the component parts do not match driving vibrational frequencies of the motors or other strongly oscillating parts. Many resonant objects have more than one resonance frequency. Such objects will vibrate easily at those frequencies, and less so at other frequencies. Many clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. Description The natural frequency of the very simple mechanical system consisting of a weight suspended by a spring is f = (1/2π)·√(k/m), where m is the mass and k is the spring constant. For a given mass, stiffening the system (increasing k) increases its natural frequency, which is a general characteristic of vibrating mechanical systems. A swing set is another simple example of a resonant system with which most people have practical experience. It is a form of pendulum. If the system is excited (pushed) with a period between pushes equal to the inverse of the pendulum's natural frequency, the swing will swing higher and higher, but if excited at a different frequency, it will be difficult to move. The resonance frequency of a pendulum, the only frequency at which it will vibrate, is given approximately, for small displacements, by the equation f = (1/2π)·√(g/L), where g is the acceleration due to gravity (about 9.8 m/s² near the surface of Earth), and L is the length from the pivot point to the center of mass. (An elliptic integral yields a description for any displacement). Note that, in this approximation, the frequency does not depend on mass. Mechanical resonators work by transferring energy repeatedly from kinetic to potential form and back again. In the pendulum, for example, all the energy is stored as gravitational energy (a form of potential energy) when the bob is instantaneously motionless at the top of its swing. This energy is proportional to both the mass of the bob and its height above the lowest point. As the bob descends and picks up speed, its potential energy is gradually converted to kinetic energy (energy of movement), which is proportional to the bob's mass and to the square of its speed. When the bob is at the bottom of its travel, it has maximum kinetic energy and minimum potential energy. The same process then happens in reverse as the bob climbs towards the top of its swing. A resonant object may have more than one resonance frequency, particularly at harmonics (multiples) of its strongest resonance. It will vibrate easily at those frequencies, and less so at other frequencies. 
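As a concrete illustration of the two formulas above, the short Python sketch below evaluates the natural frequency of a spring-mass system and the small-displacement resonance frequency of a pendulum. The numerical values (a 2 kg mass on an 800 N/m spring, a 0.5 m pendulum) are assumed for illustration only and do not come from this article.

```python
import math

def spring_mass_frequency(k: float, m: float) -> float:
    """Natural frequency f = (1 / 2*pi) * sqrt(k / m) of a mass m on a spring of stiffness k."""
    return math.sqrt(k / m) / (2 * math.pi)

def pendulum_frequency(length: float, g: float = 9.8) -> float:
    """Small-displacement resonance frequency f = (1 / 2*pi) * sqrt(g / L) of a simple pendulum."""
    return math.sqrt(g / length) / (2 * math.pi)

if __name__ == "__main__":
    # Assumed example values, not data from this article.
    print(f"Spring-mass: {spring_mass_frequency(k=800.0, m=2.0):.2f} Hz")  # about 3.18 Hz
    print(f"Pendulum:    {pendulum_frequency(length=0.5):.2f} Hz")         # about 0.70 Hz
```

Doubling the spring constant k raises the spring-mass frequency by a factor of √2, while the pendulum frequency depends only on its length and on g, consistent with the description above.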
It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance. In the example above, the swing cannot easily be excited by harmonic frequencies, but can be excited by subharmonics, such as pushing the swing every second or third oscillation. Examples Various examples of mechanical resonance include: Musical instruments (acoustic resonance). Most clocks keep time by mechanical resonance in a balance wheel, pendulum, or quartz crystal. Tidal resonance of the Bay of Fundy. Orbital resonance, as in some moons of the Solar System's giant planets. The resonance of the basilar membrane in the ear. A wineglass breaking when someone sings a loud note at exactly the right pitch. Resonance may cause violent swaying motions in constructed structures, such as bridges and buildings. The London Millennium Footbridge (nicknamed the Wobbly Bridge) exhibited this problem. A faulty bridge can even be destroyed by its resonance (see Broughton Suspension Bridge and Angers Bridge). Mechanical systems store potential energy in different forms. For example, a spring/mass system stores energy as tension in the spring, which is ultimately stored as the energy of bonds between atoms. Resonance disaster In mechanics and construction a resonance disaster describes the destruction of a building or a technical mechanism by induced vibrations at a system's resonant frequency, which causes it to oscillate. Periodic excitation optimally transfers to the system the energy of the vibration and stores it there. Because of this repeated storage and additional energy input the system swings ever more strongly, until its load limit is exceeded. Tacoma Narrows Bridge The dramatic, rhythmic twisting that resulted in the 1940 collapse of "Galloping Gertie", the original Tacoma Narrows Bridge, is sometimes characterized in physics textbooks as a classic example of resonance. The catastrophic vibrations that destroyed the bridge were due to an oscillation caused by interactions between the bridge and the winds passing through its structure—a phenomenon known as aeroelastic flutter. Robert H. Scanlan, father of the field of bridge aerodynamics, wrote an article about this. Other examples Collapse of Broughton Suspension Bridge (due to soldiers walking in step) Collapse of Angers Bridge Collapse of Königs Wusterhausen Central Tower Resonance of the Millennium Bridge Applications Various methods of inducing mechanical resonance in a medium exist. Mechanical waves can be generated in a medium by subjecting an electromechanical element to an alternating electric field having a frequency which induces mechanical resonance and is below any electrical resonance frequency. Such devices can apply mechanical energy from an external source to an element to mechanically stress the element or apply mechanical energy produced by the element to an external load. The United States Patent Office classifies devices that test mechanical resonance under subclass 579, resonance, frequency, or amplitude study, of Class 73, Measuring and testing. This subclass is itself indented under subclass 570, Vibration. Such devices test an article or mechanism by subjecting it to a vibratory force for determining qualities, characteristics, or conditions thereof, or sensing, studying or making analysis of the vibrations otherwise generated in or existing in the article or mechanism. 
Such devices include means to induce vibration at a natural mechanical resonance and to measure the frequency and/or amplitude of the resulting resonance. Other devices study the amplitude response over a frequency range, including nodal points, wavelengths, and standing-wave characteristics measured under predetermined vibration conditions.
Physical sciences
Classical mechanics
Physics
3574578
https://en.wikipedia.org/wiki/Acoustic%20resonance
Acoustic resonance
Acoustic resonance is a phenomenon in which an acoustic system amplifies sound waves whose frequency matches one of its own natural frequencies of vibration (its resonance frequencies). The term "acoustic resonance" is sometimes used to narrow mechanical resonance to the frequency range of human hearing, but since acoustics is defined in general terms concerning vibrational waves in matter, acoustic resonance can occur at frequencies outside the range of human hearing. An acoustically resonant object usually has more than one resonance frequency, especially at harmonics of the strongest resonance. It will easily vibrate at those frequencies, and vibrate less strongly at other frequencies. It will "pick out" its resonance frequency from a complex excitation, such as an impulse or a wideband noise excitation. In effect, it is filtering out all frequencies other than its resonance. Acoustic resonance is an important consideration for instrument builders, as most acoustic instruments use resonators, such as the strings and body of a violin, the length of tube in a flute, and the shape of a drum membrane. Acoustic resonance is also important for hearing. For example, resonance of a stiff structural element, called the basilar membrane, within the cochlea of the inner ear allows hair cells on the membrane to detect sound. (For mammals the membrane has tapering resonances across its length so that high frequencies are concentrated on one end and low frequencies on the other.) Like mechanical resonance, acoustic resonance can result in catastrophic failure of the vibrator. The classic example of this is breaking a wine glass with sound at the precise resonant frequency of the glass. Vibrating string In musical instruments, strings under tension, as in lutes, harps, guitars, pianos, violins and so forth, have resonant frequencies directly related to the mass, length, and tension of the string. The wavelength that will create the first resonance on the string is equal to twice the length of the string. Higher resonances correspond to wavelengths that are integer divisions of the fundamental wavelength. The corresponding frequencies are related to the speed v of a wave traveling down the string by the equation f = nv/(2L), where L is the length of the string (for a string fixed at both ends) and n = 1, 2, 3... (the same relation gives the harmonics of a pipe open at both ends). The speed of a wave through a string or wire is related to its tension T and the mass per unit length ρ: v = √(T/ρ). So the frequency is related to the properties of the string by the equation f = (n/(2L))·√(T/ρ) = (n/2)·√(T/(mL)), where T is the tension, ρ is the mass per unit length, and m is the total mass. Higher tension and shorter lengths increase the resonant frequencies. When the string is excited with an impulsive function (a finger pluck or a strike by a hammer), the string vibrates at all the frequencies present in the impulse (an impulsive function theoretically contains 'all' frequencies). Those frequencies that are not one of the resonances are quickly filtered out—they are attenuated—and all that is left is the harmonic vibrations that we hear as a musical note. String resonance in musical instruments String resonance occurs on string instruments. Strings or parts of strings may resonate at their fundamental or overtone frequencies when other strings are sounded. For example, an A string at 440 Hz will cause an E string at 330 Hz to resonate, because they share an overtone of 1320 Hz (3rd harmonic of A and 4th harmonic of E). 
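The string formulas above translate directly into a short calculation. The Python sketch below is a minimal illustration under the ideal-string assumption; the string length, tension, and mass per unit length are assumed example values, and the second function simply checks the shared-harmonic example from the text.

```python
import math

def string_frequency(n: int, tension: float, length: float, mass_per_length: float) -> float:
    """n-th resonant frequency of an ideal string fixed at both ends:
    f_n = (n / 2L) * sqrt(T / rho), where the wave speed is v = sqrt(T / rho)."""
    return n * math.sqrt(tension / mass_per_length) / (2 * length)

def shared_harmonics(f1: float, f2: float, n_max: int = 6) -> set[float]:
    """Frequencies at which two strings with fundamentals f1 and f2 share a harmonic,
    i.e. where sympathetic (string) resonance can occur."""
    return {n * f1 for n in range(1, n_max + 1)} & {n * f2 for n in range(1, n_max + 1)}

if __name__ == "__main__":
    # Assumed illustrative string: 0.65 m long, 1.1 g/m, under 74 N of tension.
    print(f"fundamental: {string_frequency(1, 74.0, 0.65, 0.0011):.0f} Hz")  # roughly 200 Hz
    # The article's example: A (440 Hz) and E (330 Hz) strings share a 1320 Hz harmonic.
    print(shared_harmonics(440.0, 330.0))  # {1320.0}
```

Raising the tension or shortening the string raises every resonant frequency in proportion, as the text notes.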
Resonance of a tube of air The resonance of a tube of air is related to the length of the tube, its shape, and whether it has closed or open ends. Many musical instruments resemble tubes that are conical or cylindrical (see bore). A pipe that is closed at one end and open at the other is said to be stopped or closed, while an open pipe is open at both ends. Modern orchestral flutes behave as open cylindrical pipes; clarinets behave as closed cylindrical pipes; and saxophones, oboes, and bassoons as closed conical pipes, while most modern lip-reed instruments (brass instruments) are acoustically similar to closed conical pipes with some deviations (see pedal tones and false tones). Like strings, vibrating air columns in ideal cylindrical or conical pipes also have resonances at harmonics, although there are some differences. Cylinders Any cylinder resonates at multiple frequencies, producing multiple musical pitches. The lowest frequency is called the fundamental frequency or the first harmonic. Cylinders used as musical instruments are generally open, either at both ends, like a flute, or at one end, like some organ pipes. However, a cylinder closed at both ends can also be used to create or visualize sound waves, as in a Rubens Tube. The resonance properties of a cylinder may be understood by considering the behavior of a sound wave in air. Sound travels as a longitudinal compression wave, causing air molecules to move back and forth along the direction of travel. Within a tube, a standing wave is formed, whose wavelength depends on the length of the tube. At the closed end of the tube, air molecules cannot move much, so this end of the tube is a displacement node in the standing wave. At the open end of the tube, air molecules can move freely, producing a displacement antinode. Displacement nodes are pressure antinodes and vice versa. Closed at both ends In a cylinder closed at both ends, the air molecules near the closed ends cannot move, whereas the molecules near the center of the pipe move freely. In the first harmonic, the closed tube contains exactly half of a standing wave (node-antinode-node). Considering the pressure wave in this setup, the two closed ends are the antinodes for the change in pressure Δp; therefore, at both ends, the change in pressure Δp must have the maximal amplitude (or satisfy d(Δp)/dx = 0 in the form of the Sturm–Liouville formulation), which gives the equation for the pressure wave: Δp(x, t) = p_max·cos(kx)·cos(ωt). The intuition for this boundary condition at x = 0 and x = L is that the pressure of the closed ends will follow that of the point next to them. Applying the boundary condition at x = L gives the wavelengths of the standing waves, λ = 2L/n, and the resonant frequencies are f = nv/(2L), with n = 1, 2, 3... Open at both ends In cylinders with both ends open, air molecules near the end move freely in and out of the tube. This movement produces displacement antinodes in the standing wave. Nodes tend to form inside the cylinder, away from the ends. In the first harmonic, the open tube contains exactly half of a standing wave (antinode-node-antinode). Thus the harmonics of the open cylinder are calculated in the same way as the harmonics of a closed/closed cylinder. The physics of a pipe open at both ends is explained in Physics Classroom. Note that the diagrams in this reference show displacement waves, similar to the ones described above. These stand in sharp contrast to the pressure waves discussed near the end of the present article. 
By overblowing an open tube, a note can be obtained that is an octave above the fundamental frequency or note of the tube. For example, if the fundamental note of an open pipe is C1, then overblowing the pipe gives C2, which is an octave above C1. Open cylindrical tubes resonate at the approximate frequencies f = nv/(2L), where n is a positive integer (1, 2, 3...) representing the resonance node, L is the length of the tube and v is the speed of sound in air (which is approximately 343 m/s at 20 °C). This equation comes from the boundary conditions for the pressure wave, which treats the open ends as pressure nodes where the change in pressure Δp must be zero. A more accurate equation adds an end correction, which depends on the radius r of the resonance tube, to the length L. The end correction compensates for the fact that the exact point at which a sound wave is reflected at an open end is not perfectly at the end section of the tube, but a small distance outside the tube. The reflection ratio is slightly less than 1; the open end does not behave like an infinitesimal acoustic impedance; rather, it has a finite value, called radiation impedance, which is dependent on the diameter of the tube, the wavelength, and the type of reflection board possibly present around the opening of the tube. In these equations, v is the speed of sound, L is the length of the resonant tube, r is the radius of the tube, f is the resonant sound frequency, and λ is the resonant wavelength; for n = 1 the tube sounds its fundamental, with λ approximately equal to 2L. Closed at one end When used in an organ, a tube which is closed at one end is called a "stopped pipe". Such cylinders have a fundamental frequency but can be overblown to produce other higher frequencies or notes. These overblown registers can be tuned by using different degrees of conical taper. A closed tube resonates at the same fundamental frequency as an open tube twice its length, with a wavelength equal to four times its length. In a closed tube, a displacement node, or point of no vibration, always appears at the closed end and, if the tube is resonating, it will have a displacement antinode, or point of greatest vibration, at the Phi point (length × 0.618) near the open end. By overblowing a cylindrical closed tube, a note can be obtained that is approximately a twelfth above the fundamental note of the tube, or a fifth above the octave of the fundamental note. For example, if the fundamental note of a closed pipe is C1, then overblowing the pipe gives G2, which is one-twelfth above C1. Alternatively we can say that G2 is one-fifth above C2 — the octave above C1. Adjusting the taper of this cylinder for a decreasing cone can tune the second harmonic or overblown note close to the octave position or 8th. Opening a small "speaker hole" at the Phi point, or shared "wave/node" position, will cancel the fundamental frequency and force the tube to resonate at a 12th above the fundamental. This technique is used in a recorder by pinching open the dorsal thumb hole. Moving this small hole upwards, closer to the voicing, will make it an "Echo Hole" (Dolmetsch Recorder Modification) that will give a precise half note above the fundamental when opened. Note: slight size or diameter adjustment is needed to zero in on the precise half note frequency. A closed tube will have approximate resonances of f = nv/(4L), where "n" here is an odd number (1, 3, 5...). This type of tube produces only odd harmonics and has its fundamental frequency an octave lower than that of an open cylinder (that is, half the frequency). 
This equation comes from the boundary conditions for the pressure wave, which treat the closed end as a pressure antinode where the change in pressure Δp must have the maximal amplitude (or satisfy d(Δp)/dx = 0 in the form of the Sturm–Liouville formulation). The intuition for this boundary condition at the closed end is that the pressure there will follow that of the point next to it. A more accurate equation again includes an end correction, which in this case depends on the diameter d of the tube; here v is the speed of sound, L is the length of the resonant tube, d is the diameter of the tube, f is the resonant sound frequency, and λ is the resonant wavelength. Pressure wave The first three resonances of the pressure wave in a cylindrical tube have antinodes at the closed end of the pipe; the open end of the pipe is a pressure node, while the closed end is a pressure antinode. Cones An open conical tube, that is, one in the shape of a frustum of a cone with both ends open, will have resonant frequencies approximately equal to those of an open cylindrical pipe of the same length. The resonant frequencies of a stopped conical tube — a complete cone or frustum with one end closed — satisfy a more complicated, transcendental condition involving the wavenumber k = 2πf/v and the distance x from the small end of the frustum to the vertex of the cone. When x is small, that is, when the cone is nearly complete, the condition simplifies, leading to resonant frequencies approximately equal to those of an open cylinder whose length equals L + x. In words, a complete conical pipe behaves approximately like an open cylindrical pipe of the same length, and to first order the behavior does not change if the complete cone is replaced by a closed frustum of that cone. Closed rectangular box Examples of sound waves in a rectangular box include loudspeaker enclosures and buildings. Rectangular buildings have resonances described as room modes. For a rectangular box, the resonant frequencies are given by f = (v/2)·√((nx/Lx)² + (ny/Ly)² + (nz/Lz)²), where v is the speed of sound, Lx, Ly and Lz are the dimensions of the box, and nx, ny, and nz are nonnegative integers that cannot all be zero. If the small loudspeaker box is airtight, the frequency is low enough and the compression is high enough, the sound pressure (decibel level) inside the box will be the same anywhere inside the box; this is hydraulic pressure. Resonance of a sphere of air (vented) The resonant frequency of a rigid cavity of static volume V0 with a necked sound hole of area A and length L is given by the Helmholtz resonance formula f = (v/2π)·√(A/(V0·Leq)), where Leq is the equivalent length of the neck, equal to L plus an end correction whose value depends on whether the neck is unflanged or flanged. For a spherical cavity, the resonant frequency formula is obtained by substituting the sphere's volume and the area of the sound hole, where D is the diameter of the sphere and d is the diameter of the sound hole. For a sphere with just a sound hole, L = 0 and the surface of the sphere acts as a flange, so only the end correction contributes to the equivalent neck length; in dry air at 20 °C, this yields a simple numerical formula for f in hertz in terms of d and D in metres. Breaking glass with sound via resonance This is a classic demonstration of resonance. A glass has a natural resonance, a frequency at which the glass will vibrate easily. Therefore the glass needs to be moved by the sound wave at that frequency. If the force from the sound wave making the glass vibrate is big enough, the size of the vibration will become so large that the glass fractures. 
To do it reliably for a science demonstration requires practice and careful choice of the glass and loudspeaker. In musical composition Several composers have begun to make resonance the subject of compositions. Alvin Lucier has used acoustic instruments and sine wave generators to explore the resonance of objects large and small in many of his compositions. The complex inharmonic partials of a swell shaped crescendo and decrescendo on a tamtam or other percussion instrument interact with room resonances in James Tenney's Koan: Having Never Written A Note For Percussion. Pauline Oliveros and Stuart Dempster regularly perform in large reverberant spaces such as the cistern at Fort Worden, WA, which has a reverb with a 45-second decay. Malmö Academy of Music composition professor and composer Kent Olofsson's "Terpsichord, a piece for percussion and pre-recorded sounds, [uses] the resonances from the acoustic instruments [to] form sonic bridges to the pre-recorded electronic sounds, that, in turn, prolong the resonances, re-shaping them into new sonic gestures."
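The idealized tube formulas above (f = nv/2L for a cylinder open at both ends, f = nv/4L with odd n for one closed at one end) and the rectangular room-mode formula lend themselves to a short numerical sketch. The Python below is a minimal illustration that ignores end corrections; the speed of sound, tube length, and room dimensions are assumed example values rather than data from this article.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in dry air at about 20 degrees C (assumed for illustration)

def open_cylinder_resonances(length_m: float, count: int = 4) -> list[float]:
    """Idealized resonances f_n = n*v/(2L) of a cylinder open at both ends (end corrections ignored)."""
    return [n * SPEED_OF_SOUND / (2 * length_m) for n in range(1, count + 1)]

def stopped_cylinder_resonances(length_m: float, count: int = 4) -> list[float]:
    """Idealized resonances f_n = n*v/(4L), n odd, of a cylinder closed at one end."""
    return [n * SPEED_OF_SOUND / (4 * length_m) for n in range(1, 2 * count, 2)]

def room_mode(lx: float, ly: float, lz: float, nx: int, ny: int, nz: int) -> float:
    """Resonant frequency of mode (nx, ny, nz) of a closed rectangular box:
    f = (v/2) * sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2)."""
    return (SPEED_OF_SOUND / 2) * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)

if __name__ == "__main__":
    L = 0.66  # assumed tube length in metres, for illustration only
    print([round(f) for f in open_cylinder_resonances(L)])     # about [260, 520, 780, 1039] Hz
    print([round(f) for f in stopped_cylinder_resonances(L)])  # about [130, 390, 650, 909] Hz
    # Lowest axial mode of an assumed 5 m x 4 m x 2.5 m room:
    print(round(room_mode(5.0, 4.0, 2.5, 1, 0, 0), 1))         # 34.3 Hz
```

Consistent with the text, the stopped tube's fundamental comes out an octave below that of an open tube of the same length, and only odd multiples of it appear.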
Physical sciences
Waves
Physics
3575319
https://en.wikipedia.org/wiki/Aristida
Aristida
Aristida is a very nearly cosmopolitan genus of plants in the grass family. Aristida is distinguished by having three awns (bristles) on each lemma of each floret. The genus includes about 300 species found worldwide, often in arid warm regions. This genus is among those colloquially called three-awns, wiregrasses, speargrasses and needlegrasses. The name Aristida is derived from the Latin "arista", meaning "awn". They are characteristic of semiarid grassland. The Wiregrass Region of North America is named for A. stricta. Other locales where this genus is an important component of the ecosystem include the Carolina Bays, the sandhills of the Carolinas and elsewhere, Mulga scrub in Australia, and the xeric grasslands around Lake Turkana in Africa. Local increases in the abundance of wiregrasses are a good indicator of overgrazing, as livestock avoid them. Description Aristida stems are ascending to erect, with both basal and cauline leaves. The leaves may be flat or inrolled, and the basal leaves may be tufted. The inflorescences may be either panicle-like or raceme-like, with spiky branches. The glumes of a spikelet are narrow lanceolate, usually without any awns, while the lemmas are hard, three-veined, and have the three awns near the tip. The awns may be quite long; in A. purpurea var. longiseta they may be up to 10 cm. Species Selected species include:
Biology and health sciences
Poales
Plants
1892841
https://en.wikipedia.org/wiki/Cleavage%20%28embryo%29
Cleavage (embryo)
In embryology, cleavage is the division of cells in the early development of the embryo, following fertilization. The zygotes of many species undergo rapid cell cycles with no significant overall growth, producing a cluster of cells the same size as the original zygote. The different cells derived from cleavage are called blastomeres and form a compact mass called the morula. Cleavage ends with the formation of the blastula, or of the blastocyst in mammals. Depending mostly on the concentration of yolk in the egg, the cleavage can be holoblastic (total or complete cleavage) or meroblastic (partial or incomplete cleavage). The pole of the egg with the highest concentration of yolk is referred to as the vegetal pole while the opposite is referred to as the animal pole. Cleavage differs from other forms of cell division in that it increases the number of cells and nuclear mass without increasing the cytoplasmic mass. This means that with each successive subdivision, there is roughly half the cytoplasm in each daughter cell than before that division, and thus the ratio of nuclear to cytoplasmic material increases. Mechanism The rapid cell cycles are facilitated by maintaining high levels of proteins that control cell cycle progression such as the cyclins and their associated cyclin-dependent kinases (CDKs). The complex cyclin B/CDK1 also known as MPF (maturation promoting factor) promotes entry into mitosis. The processes of karyokinesis (mitosis) and cytokinesis work together to result in cleavage. The mitotic apparatus is made up of a central spindle and polar asters made up of polymers of tubulin protein called microtubules. The asters are nucleated by centrosomes and the centrosomes are organized by centrioles brought into the egg by the sperm as basal bodies. Cytokinesis is mediated by the contractile ring made up of polymers of actin protein called microfilaments. Karyokinesis and cytokinesis are independent but spatially and temporally coordinated processes. While mitosis can occur in the absence of cytokinesis, cytokinesis requires the mitotic apparatus. The end of cleavage coincides with the beginning of zygotic transcription. This point in non-mammals is referred to as the midblastula transition and appears to be controlled by the nuclear-cytoplasmic ratio (about 1:6). Types of cleavage Determinate Determinate cleavage (also called mosaic cleavage) is in most protostomes. It results in the developmental fate of the cells being set early in the embryo development. Each blastomere produced by early embryonic cleavage does not have the capacity to develop into a complete embryo. Indeterminate A cell can only be indeterminate (also called regulative) if it has a complete set of undisturbed animal/vegetal cytoarchitectural features. It is characteristic of deuterostomes—when the original cell in a deuterostome embryo divides, the two resulting cells can be separated, and each one can individually develop into a whole organism. Holoblastic In holoblastic cleavage, the zygote and blastomeres are completely divided during the cleavage, so the number of blastomeres doubles with each cleavage. In the absence of a large concentration of yolk, four major cleavage types can be observed in isolecithal cells (cells with a small, even distribution of yolk) or in mesolecithal cells or microlecithal cells (moderate concentration of yolk in a gradient)—bilateral holoblastic, radial holoblastic, rotational holoblastic, and spiral holoblastic, cleavage. 
These holoblastic cleavage planes pass all the way through isolecithal zygotes during the process of cytokinesis. Coeloblastula is the next stage of development for eggs that undergo these radial cleavages. In holoblastic eggs, the first cleavage always occurs along the vegetal-animal axis of the egg, the second cleavage is perpendicular to the first. From here, the spatial arrangement of blastomeres can follow various patterns, due to different planes of cleavage, in various organisms. Bilateral The first cleavage results in bisection of the zygote into left and right halves. The following cleavage planes are centered on this axis and result in the two halves being mirror images of one another. In bilateral holoblastic cleavage, the divisions of the blastomeres are complete and separate; compared with bilateral meroblastic cleavage, in which the blastomeres stay partially connected. Radial Radial cleavage is characteristic of the deuterostomes, which include some vertebrates and echinoderms, in which the spindle axes are parallel or at right angles to the polar axis of the oocyte. Rotational Rotational cleavage involves a normal first division along the meridional axis, giving rise to two daughter cells. The way in which this cleavage differs is that one of the daughter cells divides meridionally, whilst the other divides equatorially. The nematode C. elegans, a popular developmental model organism, undergoes holoblastic rotational cell cleavage. Spiral Spiral cleavage is conserved between many members of the lophotrochozoan taxa, referred to as Spiralia. Most spiralians undergo equal spiral cleavage, although some undergo unequal cleavage (see below). This group includes annelids, molluscs, and sipuncula. Spiral cleavage can vary between species, but generally the first two cell divisions result in four macromeres, also called blastomeres, (A, B, C, D) each representing one quadrant of the embryo. These first two cleavages are not oriented in planes that occur at right angles parallel to the animal-vegetal axis of the zygote. At the 4-cell stage, the A and C macromeres meet at the animal pole, creating the animal cross-furrow, while the B and D macromeres meet at the vegetal pole, creating the vegetal cross-furrow. With each successive cleavage cycle, the macromeres give rise to quartets of smaller micromeres at the animal pole. The divisions that produce these quartets occur at an oblique angle, an angle that is not a multiple of 90 degrees, to the animal-vegetal axis. Each quartet of micromeres is rotated relative to their parent macromere, and the chirality of this rotation differs between odd- and even-numbered quartets, meaning that there is alternating symmetry between the odd and even quartets. In other words, the orientation of divisions that produces each quartet alternates between being clockwise and counterclockwise with respect to the animal pole. The alternating cleavage pattern that occurs as the quartets are generated produces quartets of micromeres that reside in the cleavage furrows of the four macromeres. When viewed from the animal pole, this arrangement of cells displays a spiral pattern. Specification of the D macromere and is an important aspect of spiralian development. Although the primary axis, animal-vegetal, is determined during oogenesis, the secondary axis, dorsal-ventral, is determined by the specification of the D quadrant. The D macromere facilitates cell divisions that differ from those produced by the other three macromeres. 
Cells of the D quadrant give rise to dorsal and posterior structures of the spiralian. Two known mechanisms exist to specify the D quadrant. These mechanisms include equal cleavage and unequal cleavage. In equal cleavage, the first two cell divisions produce four macromeres that are indistinguishable from one another. Each macromere has the potential of becoming the D macromere. After the formation of the third quartet, one of the macromeres initiates maximum contact with the overlying micromeres in the animal pole of the embryo. This contact is required to distinguish one macromere as the official D quadrant blastomere. In equally cleaving spiral embryos, the D quadrant is not specified until after the formation of the third quartet, when contact with the micromeres dictates one cell to become the future D blastomere. Once specified, the D blastomere signals to surrounding micromeres to lay out their cell fates. In unequal cleavage, the first two cell divisions are unequal producing four cells in which one cell is bigger than the other three. This larger cell is specified as the D macromere. Unlike equally cleaving spiralians, the D macromere is specified at the four-cell stage during unequal cleavage. Unequal cleavage can occur in two ways. One method involves asymmetric positioning of the cleavage spindle. This occurs when the aster at one pole attaches to the cell membrane, causing it to be much smaller than the aster at the other pole. This results in an unequal cytokinesis, in which both macromeres inherit part of the animal region of the egg, but only the bigger macromere inherits the vegetal region. The second mechanism of unequal cleavage involves the production of an enucleate, membrane bound, cytoplasmic protrusion, called a polar lobe. This polar lobe forms at the vegetal pole during cleavage, and then gets shunted to the D blastomere. The polar lobe contains vegetal cytoplasm, which becomes inherited by the future D macromere. Meroblastic In the presence of a large concentration of yolk in the fertilized egg cell, the cell can undergo partial, or meroblastic, cleavage. Two major types of meroblastic cleavage are discoidal and superficial. Discoidal In discoidal cleavage, the cleavage furrows do not penetrate the yolk. The embryo forms a disc of cells, called a blasto-disc, on top of the yolk. Discoidal cleavage is commonly found in monotremes, birds, reptiles, and fish that have telolecithal egg cells (egg cells with the yolk concentrated at one end). The layer of cells that have incompletely divided and are in contact with the yolk are called the "syncytial layer". Superficial In superficial cleavage, mitosis occurs but not cytokinesis, resulting in a polynuclear cell. With the yolk positioned in the center of the egg cell, the nuclei migrate to the periphery of the egg, and the plasma membrane grows inward, partitioning the nuclei into individual cells. Superficial cleavage occurs in arthropods that have centrolecithal egg cells (egg cells with the yolk located in the center of the cell). This type of cleavage can work to promote synchronicity in developmental timing, such as in Drosophila. Mammals Compared to other fast developing animals, mammals have a slower rate of division that is between 12 and 24 hours. Initially synchronous, these cellular divisions progressively become more and more asynchronous. 
Zygotic transcription starts at the two-, four-, or eight-cell stage depending on the species (for example, mouse zygotic transcription begins towards the end of the zygote stage and becomes significant at the two-cell stage, whereas human embryos begin zygotic transcription at the eight-cell stage). Cleavage is holoblastic and rotational. In human embryonic development at the eight-cell stage, having undergone three cleavages the embryo starts to change shape as it develops into a morula and then a blastocyst. At the eight-cell stage the blastomeres are initially round, and only loosely adhered. With further division in the process of compaction the cells flatten onto one another. At the 16–cell stage the compacted embryo is called a morula. Once the embryo has divided into 16 cells, it begins to resemble a mulberry, hence the name morula (Latin, morus: mulberry). Concomitantly, they develop an inside-out polarity that provides distinct characteristics and functions to their cell-cell and cell-medium interfaces. As surface cells become epithelial, they begin to tightly adhere as gap junctions are formed, and tight junctions are developed with the other blastomeres. With further compaction the individual outer blastomeres, the trophoblasts, become indistinguishable as they become organised into a thin sheet of tightly adhered epithelial cells. They are still enclosed within the zona pellucida. The morula is now watertight, to contain the fluid that the cells will later pump into the embryo to transform it into the blastocyst. In humans, the morula enters the uterus after three or four days, and begins to take in fluid, as sodium-potassium pumps on the trophoblasts pump sodium into the morula, drawing in water by osmosis from the maternal environment to become blastocoelic fluid. As a consequence to increased osmotic pressure, the accumulation of fluid raises the hydrostatic pressure inside the embryo. Hydrostatic pressure breaks open cell-cell contacts within the embryo by hydraulic fracturing. Initially dispersed in hundreds of water pockets throughout the embryo, the fluid collects into a single large cavity, called blastocoel, following a process akin to Ostwald ripening. Embryoblast cells also known as the inner cell mass form a compact mass of cells at the embryonic pole on one side of the cavity that will go on to produce the embryo proper. The embryo is now termed a blastocyst. The trophoblasts will eventually give rise to the embryonic contribution to the placenta called the chorion. A single cell can be removed from a pre-compaction eight-cell embryo and used for genetic screening, and the embryo will recover. Differences exist between cleavage in placental mammals and other mammals.
Biology and health sciences
Animal reproduction
Biology
1894059
https://en.wikipedia.org/wiki/Suicide%20prevention
Suicide prevention
Suicide prevention is a collection of efforts to reduce the risk of suicide. Suicide is often preventable, and the efforts to prevent it may occur at the individual, relationship, community, and society level. Suicide is a serious public health problem that can have long-lasting effects on individuals, families, and communities. Preventing suicide requires strategies at all levels of society. This includes prevention and protective strategies for individuals, families, and communities. Suicide can be prevented by learning the warning signs, promoting prevention and resilience, and committing to social change. Beyond direct interventions to stop an impending suicide, methods may include: treating mental illness improving coping strategies of people who are at risk reducing risk factors for suicide, such as substance misuse, poverty and social vulnerability giving people hope for a better life after current problems are resolved calling a suicide hotline number General efforts include measures within the realms of medicine, mental health, and public health. Because protective factors such as social support and social engagement—as well as environmental risk factors such as access to lethal means— play a role in suicide, suicide is not solely a medical or mental-health issue. Detection and assessment of a risk of suicide Warning signs Warning signs of suicide can allow individuals to direct people who may be considering suicide to get help. Behaviors that may be warning signs include: Talking about wanting to die or wanting to kill themselves Suicidal ideation: thinking, talking, or writing about suicide, planning for suicide Substance abuse Feelings of purposelessness Anxiety, agitation, being unable to sleep, or sleeping all the time Feelings of being trapped Feelings of hopelessness Social withdrawal Displaying extreme mood swings, suddenly changing from sad to very calm or happy Recklessness or impulsiveness, taking risks that could lead to death, such as driving extremely fast Mood changes, including depression Feelings of uselessness Settling outstanding affairs, giving away prized or valuable possessions, or making amends when they are otherwise not expected to die (as an example, this behavior would be typical in a terminal cancer patient but not a healthy young adult) Strong feelings of pain, either emotional or physical Considering oneself burdensome Increased use of drugs, including alcohol Direct talk for assessment An effective way to assess suicidal thoughts is to talk with the person directly, to ask about depression, and assess suicide plans as to how and when it might be attempted. Contrary to popular misconceptions, talking with people about suicide does not plant the idea in their heads. However, such discussions and questions should be asked with care, concern and compassion. The tactic is to reduce sadness and provide assurance that other people care. The WHO advises to not say everything will be all right nor make the problem seem trivial, nor give false assurances about serious issues. The discussions should be gradual and specifically executed when the person is comfortable about discussing their feelings. ICARE (Identify the thought, Connect with it, Assess evidence for it, Restructure the thought in positive light, Express or provide room for expressing feelings from the restructured thought) is a model of approach used here. Risk factors All people can be at risk of suicide. 
Risk factors that contribute to someone feeling suicidal or making a suicide attempt may include: Depression, other mental disorders, or substance abuse disorder Certain medical conditions Chronic pain A prior suicide attempt Childhood trauma Betrayal and abandonment Financial troubles or poverty Family history of a mental disorder or substance abuse Family history of suicide Family violence, including physical or sexual abuse Psychiatric Abuse Benzodiazepines Having guns or other firearms in the home Having recently been released from prison, jail or mental asylum Self-harm Being exposed to others' suicidal behavior, such as that of family members, peers, or celebrities Being male Food insecurity There may be an association between long-term PM2.5 exposure and depression, and a possible association between short-term PM10 exposure and suicide. Strategies for detection and assessment The traditional approach has been to identify the risk factors that increase suicide or self-harm, though meta-analysis studies suggest that suicide risk assessment might not be useful and recommend immediate hospitalization of the person with suicidal feelings as the healthy choice. In 2001, the U.S. Department of Health and Human Services, published the National Strategy for Suicide Prevention, establishing a framework for suicide prevention in the U.S. The document, and its 2012 revision, calls for a public health approach to suicide prevention, focusing on identifying patterns of suicide and suicidal ideation throughout a group or population (as opposed to exploring the history and health conditions that could lead to suicide in a single individual). The ability to recognize warning signs of suicide allows individuals who may be concerned about someone they know to direct them to help. Suicide gesture and suicidal desire (a vague wish for death without any actual intent to kill oneself) are potentially self-injurious behaviors that a person may use to attain some other ends, like to seek help, punish others, or to receive attention. This behavior has the potential to aid an individual's capability for suicide and can be considered as a suicide warning, when the person shows intent through verbal and behavioral signs. Screening The U.S. Surgeon General has suggested that screening to detect those at risk of suicide may be one of the most effective means of preventing suicide in children and adolescents. There are various screening tools in the form of self-report questionnaires to help identify those at risk such as the Beck Hopelessness Scale and Is Path Warm?. A number of these self-report questionnaires have been tested and found to be effective for use among adolescents and young adults. There is however a high rate of false-positive identification and those deemed to be at risk should ideally have a follow-up clinical interview. The predictive quality of these screening questionnaires has not been conclusively validated so it is not possible to determine if those identified at risk of suicide will actually die by suicide. Asking about or screening for suicide does not create or increase the risk. In approximately 75 percent of suicides, the individuals had seen a physician within the year before their death, including 45 to 66 percent within the prior month. Approximately 33 to 41 percent of those who died by suicide had contact with mental health services in the prior year, including 20 percent within the prior month. These studies suggest an increased need for effective screening. 
Many suicide risk assessment measures are not sufficiently validated, and do not include all three core suicidality attributes (i.e., suicidal affect, behavior, and cognition). A study published by the University of New South Wales has concluded that asking about suicidal thoughts cannot be used as a reliable predictor of suicide risk. Underlying condition The conservative estimate is that 10% of individuals with psychiatric disorders may have an undiagnosed medical condition causing their symptoms, with some estimates stating that upwards of 50% may have an undiagnosed medical condition which, if not causing, is exacerbating their psychiatric symptoms. Illegal drugs and prescribed medications may also produce psychiatric symptoms. Effective diagnosis and, if necessary, medical testing, which may include neuroimaging to diagnose and treat any such medical conditions or medication side effects, may reduce the risk of suicidal ideation resulting from psychiatric symptoms, most often depression, which is present in up to 90–95% of cases. The classification of a case as psychiatric frequently implies more rigid treatments. Methods of intervention Restriction of lethal means Restriction of dangerous means — reducing the odds that a person attempting suicide will use highly lethal means — is an important component of suicide prevention. This practice is also called "means restriction". It has been demonstrated that restricting lethal means can help reduce suicide rates, because it delays action until the desire to die has passed. In general, strong evidence supports the effectiveness of means restriction in preventing suicides. There is also strong evidence that restricted access at so-called suicide hotspots, such as bridges and cliffs, reduces suicides, whereas other interventions such as placing signs or increasing surveillance at these sites appear less effective. One of the most famous historical examples of means reduction is that of coal gas in the United Kingdom. Until the 1950s, the most common means of suicide in the UK was poisoning by gas inhalation. In 1958, natural gas (virtually free of carbon monoxide) was introduced, and over the next decade it came to comprise over 50% of the gas used. As carbon monoxide in gas decreased, suicides also decreased. The decrease was driven entirely by dramatic decreases in the number of suicides by carbon monoxide poisoning. A 2020 Cochrane review on means restrictions for jumping found tentative evidence of reductions in frequency. In the United States, firearm access is associated with increased suicide completion. About 85% of suicide attempts with a gun result in death, while most other widely used suicide attempt methods result in death less than 5% of the time. Matthew Miller, M.D., Sc.D. conducted research comparing the number of suicides in states with the highest rates of gun ownership to the number of suicides in states with the lowest rates of gun ownership. He found that men living in states with high rates of gun ownership were 3.7 times more likely to die by firearm suicide, and women were 7.9 times more likely. There was no difference in non-firearm suicides. Although restrictions on access to firearms have reduced firearm suicide rates in other countries, such restrictions are difficult in the United States because the Second Amendment to the United States Constitution limits restrictions on weapons. 
For those who decide to end their lives impulsively, a 24-hour waiting period for firearm access could substantially reduce the rate of completed suicides. Contrary to the popular notion that suicidal people will simply find another way to kill themselves, many people who survive suicide attempts go on to lead long lives. In the United States, more than 42,967 people died from gun-related injuries in 2023, and over half of those deaths were suicides. Spiritual counseling The majority of known religions consider suicide a sin (or an equivalent fault). Their clergy are available to offer guidance on this problem and its circumstances. Psychological counseling There are multiple talking therapies that reduce suicidal thoughts and behaviors. In group therapies, people with suicidal thoughts can participate alongside other patients, who may share the same psychological problem or have a different one, and with whom they can usually talk without major difficulty; a psychologist directs the discussion. Psychotherapies that have been shown most successful, or evidence based, are dialectical behavior therapy (DBT), which has been shown to be helpful in reducing suicide attempts and reducing hospitalizations for suicidal ideation, and cognitive behavioral therapy for suicide prevention (CBT-SP), a form of DBT that is adapted for adolescents at high risk for repeated suicide attempts, and which has been shown to improve problem-solving and coping abilities. The brief intervention and contact technique developed by the World Health Organization has also shown benefit. Crisis hotlines and associations that provide help Crisis hotlines connect a person in distress to either a volunteer or staff member of an association that provides comfort and help. This may occur via telephone, online chat, or in person. Even though crisis hotlines are common, they have not been well studied. One study found a decrease in psychological pain, hopelessness, and desire to die from the beginning of the call through the next few weeks; however, the desire to die did not decrease long term. Direct conversation for intervention The value of a trusted person talking directly with the person experiencing suicidal thoughts should not be underestimated. Guides on how to talk with suicidal patients have been distributed among people who are likely to encounter that situation. Caring letters The "Caring Letters" model of suicide prevention involved mailing short letters that expressed the researchers' interest in the recipients without pressuring them to take any action. The intervention reduced deaths by suicide, as demonstrated in a randomized controlled trial. The technique involves letters sent from a researcher who had spoken at length with the recipient during a suicidal crisis. The typewritten form letters were brief – sometimes as short as two sentences – personally signed by the researcher, and expressed interest in the recipient without making any demands. They were initially sent monthly, eventually decreasing in frequency to quarterly letters; if the recipient wrote back, then an additional personal letter was mailed. The approach was partly inspired by Jerome Motto's experience of receiving letters during World War II from a young woman he had met before being deployed. Motto was the psychiatrist who first devised the experiment. Although the exact mechanisms have been debated, researchers generally think that the letters communicate a genuine interest and social connection that the recipients find helpful. 
Caring letters are inexpensive and are either the only approach, or one of very few approaches, to suicide prevention that has been scientifically shown to work during the first years after a suicide attempt that resulted in hospitalization. Coping planning Coping planning is an intervention based on the patient's strengths for solving problems, or at least for reducing and dampening their impact. It aims to meet the needs of people who ask for help, including those experiencing suicidal ideation. By addressing why someone asks for help, risk assessment and management stay focused on what the person needs, and the needs assessment focuses on the individual needs of each person. The coping planning approach to suicide prevention draws on the health-focused theory of coping. Coping is normalized as a normal and universal human response to unpleasant emotions, and interventions are considered along a continuum from low-intensity support (e.g., self-soothing) to high-intensity support (e.g., professional help). Planning for coping supports people who are distressed and provides a sense of belongingness and resilience in the treatment of illness. The proactive coping planning approach overcomes implications of ironic process theory. The biopsychosocial strategy of training people in healthy coping improves emotional regulation and decreases memories of unpleasant emotions. A good coping plan strategically reduces a person's inattentional blindness while developing resilience and regulation strengths. Improvement of physical condition According to research, a proper diet, adequate sleep and physical exercise have a positive influence on a person's mood and activity. Diet About 50% of people who die of suicide have a mood disorder such as major depression. Sleep and diet may play a role in depression (major depressive disorder), and interventions in these areas may be an effective add-on to conventional methods. According to Healthdirect, the national health advice service in Australia, risk of depression may be reduced with a healthy diet "high in fruits, vegetables, nuts, and legumes; moderate amounts of poultry, eggs, and dairy products; and only occasional red meat". Consuming oily fish (e.g., salmon, perch, tuna, mackerel, sardines and herring) may also help, as they contain omega-3 fats. Consuming too many refined carbohydrates (e.g., snack foods) may increase the risk of depression symptoms. The mechanism by which diet improves or worsens mental health is still not fully understood; alterations in blood glucose levels, inflammation, or effects on the gut microbiome have been suggested. More information about food (e.g. oily fish with omega-3 fats, a class of PUFA), drink (e.g. water), a healthy, balanced diet and mental health can be found on Healthdirect's website. Vitamin B2, B6 and B12 deficiency may cause depression in females. Vitamin B12, for humans, is the only vitamin that must be sourced from animal-derived foods or from supplements; only some archaea and bacteria can synthesize it. Foods containing vitamin B12 include meat, clams, liver, fish, poultry, eggs, and dairy products. Many breakfast cereals are fortified with the vitamin. Access to health professionals Contact with health professionals is important in the fight against suicide because it makes it possible to detect suicidal intentions and attempts. 
Medication Common treatments may include antidepressants (including SSRIs), anti-anxiety medications, antipsychotics, stimulants, and mood stabilizers. Alongside medications, a health team often includes therapy and other beneficial resources to support good outcomes for individuals and their communities. The medication lithium may be useful in certain situations to reduce the risk of suicide. Specifically, it is effective at lowering the risk of suicide in those with bipolar disorder and major depressive disorder. Some antidepressant medications may increase suicidal ideation in some patients under certain conditions. Medical professionals advise supervision and communication during the use of these medications. If a psychiatrist prescribes any of these medications, the problem moves into the field of psychiatry, with its own contexts and planning, which are usually more rigid than those of other fields. It is also important to note that, in a proportion of cases in which drugs are used to prevent suicide, a "paradoxical reaction" can occur, consisting of an increase in suicidal intention, mainly on the following occasions: when the medication is first taken, when the dose is adjusted, and when the medication is discontinued. Therefore, greater caution is recommended at those times. Barriers and physical protections Physical protection systems, such as barriers and anti-suicide nets, are sometimes installed on bridges, buildings and other dangerous points to prevent suicides there. The decision can be influenced by how frequently those locations are used for suicide attempts, and by the possibility of injuring someone else in such attempts (a real risk when jumping from skyscrapers and similar structures). Sometimes the problem is not the possible use of those points for suicide, but simply a lack of safety measures that leaves people involuntarily exposed to the danger of accidents. Preventive programs to reduce the causes Some programs try to prevent suicide by addressing underlying problems that could lead to it, such as violence in a relationship or in the family, school bullying, and workplace mobbing. The World Health Organization recommends "specific skills should be available in the education system to prevent bullying and violence in and around the school". Information campaigns Suicide prevention also involves informing the general public, or a specific sector of it, about the signs of suicide, so that they can be detected, and about the existing means of help. Information campaigns must be properly designed to work as planned. In a review of communication campaigns against suicide, only two of three studies found the effect of those campaigns to be positive; inappropriate mentions of suicide could increase its incidence. Media guidelines Recommendations around media reporting of suicide include not sensationalizing the event or attributing it to a single cause. It is also recommended that media messages include suicide prevention messages such as stories of hope and links to further resources. Particular care is recommended when the person who died is famous. Including specific details of the method or the location is not recommended. There is little evidence, however, regarding the benefit of providing resources for those looking for help, and the evidence for media guidelines generally is mixed at best. 
TV shows and news media may also be able to help prevent suicide by linking suicide with negative outcomes such as pain for the person who has attempted suicide and their survivors, conveying that the majority of people choose something other than suicide to solve their problems, avoiding mentioning suicide epidemics, and avoiding presenting authorities or sympathetic, ordinary people as spokespersons for the reasonableness of suicide. General strategies for society In the United States, the 2012 National Strategy for Suicide Prevention promotes various specific suicide prevention efforts, including: Developing groups led by professionally trained individuals for broad-based support for suicide prevention. Promoting community-based suicide prevention programs. Screening and reducing at-risk behavior through psychological resilience programs that promote optimism and connectedness. Education about suicide, including risk factors, warning signs, stigma-related issues and the availability of help, through social campaigns. Increasing the proficiency of health and welfare services at responding to people in need, e.g., sponsored training for helping professionals, increased access to community linkages, employing crisis counseling organizations. Reducing domestic violence and substance abuse through legal and empowerment means; these are long-term strategies. Reducing access to convenient means of suicide and methods of self-harm, e.g., toxic substances, poisons, handguns. Reducing the quantity of dosages supplied in packages of non-prescription medicines, e.g., aspirin. School-based competency-promoting and skill-enhancing programs. Interventions and usage of ethical surveillance systems targeted at high-risk groups. Improving reporting and portrayals of negative behavior, suicidal behavior, mental illness and substance abuse in the entertainment and news media. Research on protective factors and development of effective clinical and professional practices. Specific strategies in society Suicide prevention strategies focus on reducing the risk factors and intervening strategically to reduce the level of risk. Risk and protective factors unique to the individual can be assessed by a qualified mental health professional. Some of the specific strategies used to address suicide risk are: Crisis intervention. Structured counseling and psychotherapy. Hospitalization for those with low adherence to collaboration for help and those who require monitoring and secondary symptom treatment. Supportive therapy like substance abuse treatment, psychotropic medication, family psychoeducation and access to emergency phone call care with emergency rooms, suicide prevention hotlines, etc. Restricting access to lethal means of suicide through policies and laws. Creating and using crisis cards, easy-to-read, uncluttered cards that describe a list of activities one should follow in crisis until positive behavior responses settle in the personality. Person-centered life skills training, e.g., problem solving. Registering with support groups like Alcoholics Anonymous, Suicide Bereavement Support Group, a religious group with flow rituals, etc. Therapeutic recreational therapy that improves mood. Motivating self-care activities like physical exercise and meditative relaxation. After a suicide Postvention is for people affected by an individual's suicide. This intervention facilitates grieving, guides the reduction of guilt, anxiety and depression, and helps to decrease the effects of trauma. 
Bereavement is not ruled out; rather, it is promoted for catharsis and to support adaptive capacities before intervening in depression or any psychiatric disorders. Postvention is also provided to minimize the risk of imitative or copycat suicides, but there is a lack of an evidence-based standard protocol. The general goal of the mental health practitioner is to decrease the likelihood of others identifying with the suicidal behavior of the deceased as a coping strategy in dealing with adversity. Legislation Support organizations Many non-profit organizations exist, such as the American Foundation for Suicide Prevention in the United States, which serve as crisis hotlines; it has benefited from at least one crowd-sourced campaign. The first documented program aimed at preventing suicide was initiated in 1906 in both New York, the National Save-A-Life League, and in London, the Suicide Prevention Department of the Salvation Army. Suicide prevention interventions fall into two broad categories: prevention targeted at the level of the individual and prevention targeted at the level of the population. To identify, review, and disseminate information about best practices addressing specific objectives of the National Strategy, the Best Practices Registry (BPR) was initiated. The Best Practices Registry of the Suicide Prevention Resource Center is a registry of various suicide intervention programs maintained by the American Association of Suicide Prevention. The programs are divided, with those in Section I listing evidence-based programs: interventions which have been subjected to in-depth review and for which evidence has demonstrated positive outcomes. Section III programs have been subjected to review. Examples of support organizations American Foundation for Suicide Prevention Befrienders Worldwide Campaign Against Living Miserably Crisis Text Line International Association for Suicide Prevention The Jed Foundation National Suicide Prevention Lifeline Samaritans Suicide Prevention Action Network USA Trans Lifeline The Trevor Project Economics In the United States it is estimated that a suicide results in costs of about $1.3 million. The loss of productivity from the deceased individual accounts for 97 percent of these costs; the remaining 3 percent are medical expenses. Money spent on intervention programs is estimated to result in a decrease in economic losses that is 2.5-fold greater than the amount spent.
Biology and health sciences
Mental disorders
Health
1894416
https://en.wikipedia.org/wiki/Jackson%27s%20chameleon
Jackson's chameleon
Jackson's chameleon (Trioceros jacksonii), also known commonly as Jackson's horned chameleon, the three-horned chameleon, and the Kikuyu three-horned chameleon, is a species of chameleon, a lizard in the family Chamaeleonidae. The species is native to East Africa, and introduced to Hawaii, Florida, and California. There are three recognized subspecies. Taxonomy Jackson's chameleon was described by Belgian-British zoologist George Albert Boulenger in 1896. Etymology The generic name, Trioceros, is derived from the Greek τρί- (tri-) meaning "three" and κέρας (kéras) meaning "horns". This is in reference to the three horns found on the heads of males. The specific name, jacksonii, is a Latinized form of the last name of English explorer and ornithologist Frederick John Jackson, who was serving as the first Governor of Kenya at the time of Boulenger's description. The English word chameleon (also chamaeleon) derives from Latin chamaeleō, a borrowing of the Ancient Greek χαμαιλέων (khamailéōn), a compound of χαμαί (khamaí) "low to the ground" and λέων (léōn) "lion". The Greek word is a calque translating the Akkadian nēš qaqqari, "ground lion". Subspecies The following three subspecies are recognized as being valid, including the nominate subspecies. T. j. jacksonii – Jackson's chameleon T. j. merumontanus – dwarf Jackson's chameleon T. j. xantholophus – yellow-crested Jackson's chameleon Nota bene: A trinomial authority in parentheses indicates that the subspecies was originally described in a genus other thanTrioceros. Habitat and geographic range Jackson's chameleon is native to woodlands and forests at altitudes of in south-central Kenya and northern Tanzania. In these areas, the rainfall is seasonal but exceeds per year. Day temperatures are typically , and night temperatures are typically . In Tanzania, it is known only from Mount Meru in the Arusha Region, which is the home of the relatively small endemic subspecies T. j. merumontanus. Jackson's chameleon is more widespread in Kenya, where it is even found in wooded areas of some Nairobi suburbs. The subspecies T. j. xantholophus (native to the Mount Kenya region) was introduced to Hawaii in 1972 and has since established populations on all main islands and has become an invasive species there. This subspecies has also been introduced to Florida. In Hawaii, it is found mainly at altitudes of in wet, shady places. Historically this population was the primary source of Jackson's chameleons for the exotic pet trade in the United States, but exports from Hawaii are now illegal. This has been done to prevent opportunists from willfully establishing further feral animal populations to capture and sell them. Description Jackson's chameleon is sometimes called the three-horned chameleon because males possess three brown horns: one on the nose (the rostral horn) and one above each superior orbital ridge above the eyes (preocular horns), somewhat reminiscent of the ceratopsid dinosaur genus Triceratops. The females generally have no horns, or instead have traces of the rostral horn (in the subspecies T. j. jacksonii and T. j. merumontanus). The coloring is usually bright green, with some individual animals having traces of blue and yellow, but like all chameleons, it changes color quickly depending on mood, health, and temperature. Adult males reach a total length (including tail) of up to and females up to , but more typical lengths are . It has a saw-tooth shaped dorsal ridge and no gular crest. It attains sexual maturity after five months. 
The lifespan is variable, with males generally living longer than females. The largest subspecies of Jackson's chameleon is T. j. xantholophus, which has been captively bred since the 1980s. Ecology Feeding habits Jackson's chameleon lives primarily on a diet of small insects. In its native habitat it also preys on centipedes, isopods, millipedes, spiders, lizards, small birds, and snails. Invasive species Introduced, invasive Jackson's chameleons pose a threat of devastating impact to native ecosystems in Hawaii. They were found with mainly insects in their stomachs: the planthoppers Oliarus, grasshoppers Banza, casebearing caterpillars Hyposmocoma, beetles Oodemas, dragonflies Pantala and others. Holland et al. (2010) demonstrated that they also prey on snails in Hawaii. Their prey includes the land snails Achatinella, Auriculella, Lamellidea, Philonesia, and Oxychilus alliarius. They swallow snails whole, including the shells. Jackson's chameleons introduced to Hawaii are a substantial threat to native invertebrate biodiversity and a serious threat especially to endemic species, such as the critically endangered O'ahu tree snails (genus Achatinella). Territoriality T. jacksonii is less territorial than most species of chameleons. Males will generally assert dominance over each other through color displays and posturing in an attempt to secure mating rights, but usually not to the point of physical fights. Reproduction Most chameleons are oviparous, but Jackson's chameleon and several other highland species in the genus Trioceros are ovoviviparous, giving birth to offspring soon after they are ready to hatch from their egg sac; eight to thirty live young are born after a five- to six-month gestation. The subspecies T. j. merumontanus gives birth to five to ten live young. In captivity In captivity, Jackson's chameleon requires high humidity and generally needs cooler temperatures during the night. Too much heat, or excessive humidity, can cause eye infections and upper respiratory infections in this species. In captivity, Jackson's chameleon can be expected to live between five and ten years.
Biology and health sciences
Iguania
Animals
1894582
https://en.wikipedia.org/wiki/Dielectric%20spectroscopy
Dielectric spectroscopy
Dielectric spectroscopy (which falls in a subcategory of the impedance spectroscopy) measures the dielectric properties of a medium as a function of frequency. It is based on the interaction of an external field with the electric dipole moment of the sample, often expressed by permittivity. It is also an experimental method of characterizing electrochemical systems. This technique measures the impedance of a system over a range of frequencies, and therefore the frequency response of the system, including the energy storage and dissipation properties, is revealed. Often, data obtained by electrochemical impedance spectroscopy (EIS) is expressed graphically in a Bode plot or a Nyquist plot. Impedance is the opposition to the flow of alternating current (AC) in a complex system. A passive complex electrical system comprises both energy dissipater (resistor) and energy storage (capacitor) elements. If the system is purely resistive, then the opposition to AC or direct current (DC) is simply resistance. Materials or systems exhibiting multiple phases (such as composites or heterogeneous materials) commonly show a universal dielectric response, whereby dielectric spectroscopy reveals a power law relationship between the impedance (or the inverse term, admittance) and the frequency, ω, of the applied AC field. Almost any physico-chemical system, such as electrochemical cells, mass-beam oscillators, and even biological tissue possesses energy storage and dissipation properties. EIS examines them. This technique has grown tremendously in stature over the past few years and is now being widely employed in a wide variety of scientific fields such as fuel cell testing, biomolecular interaction, and microstructural characterization. Often, EIS reveals information about the reaction mechanism of an electrochemical process: different reaction steps will dominate at certain frequencies, and the frequency response shown by EIS can help identify the rate limiting step. Dielectric mechanisms There are a number of different dielectric mechanisms, connected to the way a studied medium reacts to the applied field (see the figure illustration). Each dielectric mechanism is centered around its characteristic frequency, which is the reciprocal of the characteristic time of the process. In general, dielectric mechanisms can be divided into relaxation and resonance processes. The most common, starting from high frequencies, are: Electronic polarization This resonant process occurs in a neutral atom when the electric field displaces the electron density relative to the nucleus it surrounds. This displacement occurs due to the equilibrium between restoration and electric forces. Electronic polarization may be understood by assuming an atom as a point nucleus surrounded by spherical electron cloud of uniform charge density. Atomic polarization Atomic polarization is observed when the nucleus of the atom reorients in response to the electric field. This is a resonant process. Atomic polarization is intrinsic to the nature of the atom and is a consequence of an applied field. Electronic polarization refers to the electron density and is a consequence of an applied field. Atomic polarization is usually small compared to electronic polarization. Dipole relaxation This originates from permanent and induced dipoles aligning to an electric field. 
Their orientation polarisation is disturbed by thermal noise (which mis-aligns the dipole vectors from the direction of the field), and the time needed for dipoles to relax is determined by the local viscosity. These two facts make dipole relaxation heavily dependent on temperature, pressure, and chemical surroundings. Ionic relaxation Ionic relaxation comprises ionic conductivity and interfacial and space charge relaxation. Ionic conductivity predominates at low frequencies and introduces only losses to the system. Interfacial relaxation occurs when charge carriers are trapped at interfaces of heterogeneous systems. A related effect is Maxwell-Wagner-Sillars polarization, where charge carriers blocked at inner dielectric boundary layers (on the mesoscopic scale) or external electrodes (on a macroscopic scale) lead to a separation of charges. The charges may be separated by a considerable distance and therefore make contributions to the dielectric loss that are orders of magnitude larger than the response due to molecular fluctuations. Dielectric relaxation Dielectric relaxation as a whole is the result of the movement of dipoles (dipole relaxation) and electric charges (ionic relaxation) due to an applied alternating field, and is usually observed in the frequency range 10^2–10^10 Hz. Relaxation mechanisms are relatively slow compared to resonant electronic transitions or molecular vibrations, which usually have frequencies above 10^12 Hz. Principles Steady-state For a redox reaction R ⇌ O + e⁻, without mass-transfer limitation, the relationship between the current density j and the electrode overpotential η is given by the Butler–Volmer equation: j = j0 [exp(αa f η) − exp(−αc f η)], with f = F/(RT), where j0 is the exchange current density and αa and αc are the symmetry factors. The curve of j vs. η is not a straight line (Fig. 1), therefore a redox reaction is not a linear system. Dynamic behavior Faradaic impedance In an electrochemical cell the faradaic impedance of an electrolyte-electrode interface is the joint electrical resistance and capacitance at that interface. Let us suppose that the Butler-Volmer relationship correctly describes the dynamic behavior of the redox reaction. The dynamic behavior of the redox reaction is then characterized by the so-called charge transfer resistance, defined by Rct = (∂j/∂η)⁻¹. The value of the charge transfer resistance changes with the overpotential. For this simplest example the faradaic impedance is reduced to a resistance. It is worthwhile to notice that Rct = 1/(j0 f (αa + αc)) for η = 0. Double-layer capacitance An electrode–electrolyte interface behaves like a capacitance called the electrochemical double-layer capacitance Cdl. The equivalent circuit for the redox reaction in Fig. 2 includes the double-layer capacitance Cdl as well as the charge transfer resistance Rct. Another analog circuit commonly used to model the electrochemical double-layer is called a constant phase element. The electrical impedance of this circuit is easily obtained by remembering that the impedance of a capacitance is given by Z_C(ω) = 1/(iωC), where ω is the angular frequency of a sinusoidal signal (rad/s) and i² = −1. The result is Z(ω) = Rct / (1 + Rct Cdl i ω). The Nyquist diagram of the impedance of this circuit is a semicircle with a diameter Rct and an angular frequency at the apex equal to 1/(Rct Cdl) (Fig. 3). Other representations, such as Bode plots or Black plots, can be used. Ohmic resistance The ohmic resistance RΩ appears in series with the electrode impedance of the reaction, and the Nyquist diagram is translated to the right. 
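To make the equivalent-circuit relations above concrete, the following Python sketch (illustrative only; the component values are hypothetical and not taken from the article) evaluates the impedance of an ohmic resistance in series with a charge-transfer resistance in parallel with a double-layer capacitance, and checks the two Nyquist-plot properties just described: the semicircle has diameter Rct, is translated to the right by RΩ, and has its apex at the angular frequency 1/(Rct·Cdl).

```python
import numpy as np

def interface_impedance(omega, r_ohm, r_ct, c_dl):
    """Impedance of an ohmic resistance in series with a charge-transfer
    resistance in parallel with a double-layer capacitance:
        Z(omega) = R_ohm + R_ct / (1 + 1j * omega * R_ct * C_dl)
    """
    return r_ohm + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

# Hypothetical circuit values, for illustration only
r_ohm = 20.0     # ohmic (electrolyte) resistance, ohms
r_ct = 250.0     # charge-transfer resistance, ohms
c_dl = 40e-6     # double-layer capacitance, farads

omega = np.logspace(-1, 6, 500)          # angular frequencies, rad/s
z = interface_impedance(omega, r_ohm, r_ct, c_dl)

# Nyquist convention: plot -Im(Z) versus Re(Z); the arc is a semicircle
# of diameter R_ct translated to the right by R_ohm.
print("low-frequency intercept :", round(z[0].real, 1))    # ~ R_ohm + R_ct
print("high-frequency intercept:", round(z[-1].real, 1))   # ~ R_ohm
apex = omega[np.argmax(-z.imag)]
print("apex angular frequency  :", apex, "expected ~", 1.0 / (r_ct * c_dl))
```

In practice the same expression is fitted to a measured spectrum rather than evaluated from known component values.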
Universal dielectric response Under AC conditions with varying frequency ω, heterogeneous systems and composite materials exhibit a universal dielectric response, in which the overall admittance exhibits a region of power-law scaling with frequency, Y ∝ ω^n. Measurement of the impedance parameters Plotting the Nyquist diagram with a potentiostat and an impedance analyzer, most often included in modern potentiostats, allows the user to determine the charge transfer resistance, double-layer capacitance and ohmic resistance. The exchange current density j0 can easily be determined by measuring the impedance of a redox reaction at η = 0. Nyquist diagrams are made of several arcs for reactions more complex than redox reactions and with mass-transfer limitations. Applications Electrochemical impedance spectroscopy is used in a wide range of applications. In the paint and coatings industry, it is a useful tool to investigate the quality of coatings and to detect the presence of corrosion. It is used in many biosensor systems as a label-free technique to measure bacterial concentration and to detect dangerous pathogens such as Escherichia coli O157:H7 and Salmonella, as well as yeast cells. Electrochemical impedance spectroscopy is also used to analyze and characterize different food products. Some examples are the assessment of food–package interactions, the analysis of milk composition, the characterization and the determination of the freezing end-point of ice-cream mixes, the measurement of meat ageing, the investigation of ripeness and quality in fruits and the determination of free acidity in olive oil. In the field of human health monitoring, it is better known as bioelectrical impedance analysis (BIA) and is used to estimate body composition as well as different parameters such as total body water and fat-free mass. Electrochemical impedance spectroscopy can be used to obtain the frequency response of batteries and electrocatalytic systems at relatively high temperatures. Biomedical sensors working in the microwave range rely on dielectric spectroscopy to detect changes in the dielectric properties over a frequency range, such as non-invasive continuous blood glucose monitoring. The IFAC database can be used as a resource to get the dielectric properties of human body tissues. For heterogeneous mixtures such as suspensions, impedance spectroscopy can be used to monitor the particle sedimentation process.
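As a rough sketch of how the fitted parameters can be used, the snippet below (hypothetical numbers; it assumes the linearized Butler–Volmer relation given earlier, a single semicircular Nyquist arc, and symmetry factors of 0.5) reads the ohmic and charge-transfer resistances off the high- and low-frequency intercepts of a simulated spectrum at zero overpotential, then estimates the exchange current density from Rct = RT/(F(αa + αc)j0).

```python
import numpy as np

R, F, T = 8.314, 96485.0, 298.15      # gas constant, Faraday constant, temperature (K)
alpha_a, alpha_c = 0.5, 0.5           # symmetry factors (assumed values)

def circuit_impedance(omega, r_ohm, r_ct, c_dl):
    # Same series/parallel circuit as in the previous sketch (area-specific units).
    return r_ohm + r_ct / (1.0 + 1j * omega * r_ct * c_dl)

# Hypothetical "measured" spectrum at eta = 0 (ohm*cm^2, F/cm^2)
omega = np.logspace(-1, 6, 400)
z = circuit_impedance(omega, r_ohm=2.0, r_ct=12.5, c_dl=20e-6)

# High-frequency intercept ~ R_ohm, low-frequency intercept ~ R_ohm + R_ct
r_ohm_est = z.real.min()
r_ct_est = z.real.max() - r_ohm_est

# Exchange current density from the linearized Butler-Volmer relation at eta = 0:
# R_ct = R*T / (F * (alpha_a + alpha_c) * j0)  =>  j0 = R*T / (F * (alpha_a + alpha_c) * R_ct)
j0 = R * T / (F * (alpha_a + alpha_c) * r_ct_est)
print(f"R_ohm ~ {r_ohm_est:.2f} ohm*cm^2, R_ct ~ {r_ct_est:.2f} ohm*cm^2, j0 ~ {j0*1e3:.2f} mA/cm^2")
```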
Physical sciences
Electrical methods
Chemistry
1895094
https://en.wikipedia.org/wiki/Opportunistic%20infection
Opportunistic infection
An opportunistic infection is an infection caused by pathogens (bacteria, fungi, parasites or viruses) that take advantage of an opportunity not normally available. These opportunities can stem from a variety of sources, such as a weakened immune system (as can occur in acquired immunodeficiency syndrome or when being treated with immunosuppressive drugs, as in cancer treatment), an altered microbiome (such as a disruption in gut microbiota), or breached integumentary barriers (as in penetrating trauma). Many of these pathogens do not necessarily cause disease in a healthy host that has a non-compromised immune system, and can, in some cases, act as commensals until the balance of the immune system is disrupted. Opportunistic infections can also be attributed to pathogens which cause mild illness in healthy individuals but lead to more serious illness when given the opportunity to take advantage of an immunocompromised host. Types of opportunistic infections A wide variety of pathogens are involved in opportunistic infection and can cause a similarly wide range in pathologies. A partial list of opportunistic pathogens and their associated presentations includes: Bacteria Clostridioides difficile (formerly known as Clostridium difficile) is a species of bacteria that is known to cause gastrointestinal infection and is typically associated with the hospital setting. Legionella pneumophila is a bacterium that causes Legionnaire's disease, a respiratory infection. Mycobacterium avium complex (MAC) is a group of two bacteria, M. avium and M. intracellulare, that typically co-infect, leading to a lung infection called mycobacterium avium-intracellulare infection. Mycobacterium tuberculosis is a species of bacteria that causes tuberculosis, a respiratory infection. Pseudomonas aeruginosa is a bacterium that can cause respiratory infections. It is frequently associated with cystic fibrosis and hospital-acquired infections. Salmonella is a genus of bacteria, known to cause gastrointestinal infections. Staphylococcus aureus is a bacterium known to cause skin infections and sepsis, among other pathologies. Notably, S. aureus has evolved several drug-resistant strains, including MRSA. Streptococcus pneumoniae is a bacterium that causes respiratory infections. Streptococcus pyogenes (also known as group A Streptococcus) is a bacterium that can cause a variety of pathologies, including impetigo and strep throat, as well as other, more serious, illnesses. Fungi Aspergillus is a fungus, commonly associated with respiratory infection. Candida albicans is a species of fungus that is associated with oral thrush and gastrointestinal infection. Coccidioides immitis is a fungus known for causing coccidioidomycosis, more commonly known as Valley Fever. Cryptococcus neoformans is a fungus that causes cryptococcosis, which can lead to pulmonary infection as well as nervous system infections, like meningitis. Histoplasma capsulatum is a species of fungus known to cause histoplasmosis, which can present with an array of symptoms, but often involves respiratory infection. Pseudogymnoascus destructans (formerly known as Geomyces destructans) is a fungus that causes white-nose syndrome in bats. Microsporidia is a group of fungi that infect species across the animal kingdom, one species of which can cause microsporidiosis in immunocompromised human hosts. Pneumocystis jirovecii (formerly known as Pneumocystis carinii) is a fungus that causes pneumocystis pneumonia, a respiratory infection. 
Parasites Cryptosporidium is a protozoan that infects the gastrointestinal tract. Toxoplasma gondii is a protozoan, known for causing toxoplasmosis. Viruses Cytomegalovirus is a family of opportunistic viruses, most frequently associated with respiratory infection. Human polyomavirus 2 (also known as JC virus) is known to cause progressive multifocal leukoencephalopathy (PML). Human herpesvirus 8 (also known as Kaposi sarcoma-associated herpesvirus) is a virus associated with Kaposi sarcoma, a type of cancer. Causes Immunodeficiency or immunosuppression are characterized by the absence of or disruption in components of the immune system, leading to lower-than-normal levels of immune function and immunity against pathogens. They can be caused by a variety of factors, including: Malnutrition Fatigue Recurrent infections Immunosuppressing agents for organ transplant recipients Advanced HIV infection Chemotherapy for cancer Genetic predisposition Skin damage Antibiotic treatment leading to disruption of the physiological microbiome, thus allowing some microorganisms to outcompete others and become pathogenic (e.g. disruption of intestinal microbiota may lead to Clostridium difficile infection) Medical procedures Pregnancy Aging Leukopenia (i.e. neutropenia and lymphocytopenia) Burns The lack of or the disruption of normal vaginal microbiota allows the proliferation of opportunistic microorganisms and will cause the opportunistic infection bacterial vaginosis. Opportunistic Infection and HIV/AIDS HIV is a virus that targets T cells of the immune system and, as a result, HIV infection can lead to progressively worsening immunodeficiency, a condition ideal for the development of opportunistic infection. Because of this, respiratory and central nervous system opportunistic infections, including tuberculosis and meningitis, respectively, are associated with later-stage HIV infection, as are numerous other infectious pathologies. Kaposi's sarcoma, a virally-associated cancer, has higher incidence rates in HIV-positive patients than in the general population. As immune function declines and HIV-infection progresses to AIDS, individuals are at an increased risk of opportunistic infections that their immune systems are no longer capable of responding properly to. Because of this, opportunistic infections are a leading cause of HIV/AIDS-related deaths. Prevention Since opportunistic infections can cause severe disease, much emphasis is placed on measures to prevent infection. Such a strategy usually includes restoration of the immune system as soon as possible, avoiding exposures to infectious agents, and using antimicrobial medications ("prophylactic medications") directed against specific infections. Restoration of immune system In patients with HIV, starting antiretroviral therapy is especially important for restoration of the immune system and reducing the incidence rate of opportunistic infections In patients undergoing chemotherapy, completion of and recovery from treatment is the primary method for immune system restoration. In a select subset of high risk patients, granulocyte colony stimulating factors (G-CSF) can be used to aid immune system recovery. Avoidance of infectious exposure The following may be avoided as a preventative measure to reduce the risk of infection: Eating undercooked meat or eggs, unpasteurized dairy products or juices. Potential sources of tuberculosis (high-risk healthcare facilities, regions with high rates of tuberculosis, patients with known tuberculosis). 
Any oral exposure to feces. Contact with farm animals, especially those with diarrhea: source of Toxoplasma gondii, Cryptosporidium parvum. Cat feces (e.g. cat litter): source of Toxoplasma gondii, Bartonella spp. Soil/dust in areas where there is known histoplasmosis, coccidioidomycosis. Reptiles, chicks, and ducklings are a common source of Salmonella. Unprotected sexual intercourse with individuals with known sexually transmitted infections. Prophylactic medications Individuals at higher risk are often prescribed prophylactic medication to prevent an infection from occurring. A person's risk level for developing an opportunistic infection is approximated using the person's CD4 T-cell count and other indications. The table below provides information regarding the treatment management of common opportunistic infections. Alternative agents can be used instead of the preferred agents. These alternative agents may be used due to allergies, availability, or clinical presentation. The alternative agents are listed in the table below. Treatment Treatment depends on the type of opportunistic infection, but usually involves different antibiotics. Veterinary treatment Opportunistic infections caused by feline leukemia virus and feline immunodeficiency virus retroviral infections can be treated with lymphocyte T-cell immunomodulator.
Biology and health sciences
Concepts
Health
1895477
https://en.wikipedia.org/wiki/Allanite
Allanite
Allanite (also called orthite) is a sorosilicate group of minerals within the broader epidote group that contain a significant amount of rare-earth elements. The mineral occurs mainly in metamorphosed clay-rich sediments and felsic igneous rocks. It has the general formula A2M3Si3O12[OH], where the A sites can contain large cations such as Ca2+, Sr2+, and rare-earth elements, and the M sites admit Al3+, Fe3+, Mn3+, Fe2+, or Mg2+ among others. However, a large amount of additional elements, including Th, U, Be, Zr, P, Ba, Cr and others may be present in the mineral. The International Mineralogical Association lists four minerals in the allanite group, each recognized as a unique mineral: allanite-(Ce), allanite-(La), allanite-(Nd), and allanite-(Y), depending on the dominant rare earth present: cerium, lanthanum, neodymium or yttrium. Allanite contains up to 20% rare-earth elements and is a valuable source of them. The inclusion of thorium and other radioactive elements in allanite results in some interesting phenomena. Allanite often has a pleochroic halo of radiation damage in the minerals immediately adjacent. Also highly radioactive grains of allanite often have their structure disrupted or are metamict. The age of allanite grains that have not been destroyed by radiation can be determined using different techniques. Allanite is usually black in color, but can be brown or brown-violet. It is often coated with a yellow-brown alteration product, likely limonite. It crystallizes in the monoclinic system and forms prismatic crystals. It has a Mohs hardness of 5.5–6 and a specific gravity of 3.5–4.2. It is also pyrognomic, meaning that it becomes incandescent at a relatively low temperature of about 95 °C. It was discovered in 1810 and named for the Scottish mineralogist Thomas Allan (1777–1833). The type locality is Aluk Island, Greenland, where it was first discovered by Karl Ludwig Giesecke.
Physical sciences
Silicate minerals
Earth science
8286675
https://en.wikipedia.org/wiki/System
System
A system is a group of interacting or interrelated elements that act according to a set of rules to form a unified whole. A system, surrounded and influenced by its environment, is described by its boundaries, structure and purpose and is expressed in its functioning. Systems are the subjects of study of systems theory and other systems sciences. Systems have several common properties and characteristics, including structure, function(s), behavior and interconnectivity. Etymology The term system comes from the Latin word systēma, in turn from Greek systēma: "whole concept made of several parts or members, system", literary "composition". History In the 19th century, the French physicist Nicolas Léonard Sadi Carnot, who studied thermodynamics, pioneered the development of the concept of a system in the natural sciences. In 1824, he studied the system which he called the working substance (typically a body of water vapor) in steam engines, in regard to the system's ability to do work when heat is applied to it. The working substance could be put in contact with either a boiler, a cold reservoir (a stream of cold water), or a piston (on which the working body could do work by pushing on it). In 1850, the German physicist Rudolf Clausius generalized this picture to include the concept of the surroundings and began to use the term working body when referring to the system. The biologist Ludwig von Bertalanffy became one of the pioneers of the general systems theory. In 1945 he introduced models, principles, and laws that apply to generalized systems or their subclasses, irrespective of their particular kind, the nature of their component elements, and the relation or 'forces' between them. In the late 1940s and mid-50s, Norbert Wiener and Ross Ashby pioneered the use of mathematics to study systems of control and communication, calling it cybernetics. In the 1960s, Marshall McLuhan applied general systems theory in an approach that he called a field approach and figure/ground analysis, to the study of media theory. In the 1980s, John Henry Holland, Murray Gell-Mann and others coined the term complex adaptive system at the interdisciplinary Santa Fe Institute. Concepts Environment and boundaries Systems theory views the world as a complex system of interconnected parts. One scopes a system by defining its boundary; this means choosing which entities are inside the system and which are outside—part of the environment. One can make simplified representations (models) of the system in order to understand it and to predict or impact its future behavior. These models may define the structure and behavior of the system. Natural and human-made systems There are natural and human-made (designed) systems. Natural systems may not have an apparent objective but their behavior can be interpreted as purposeful by an observer. Human-made systems are made with various purposes that are achieved by some action performed by or with the system. The parts of a system must be related; they must be "designed to work as a coherent entity"—otherwise they would be two or more distinct systems. Theoretical framework Most systems are open systems, exchanging matter and energy with their respective surroundings; like a car, a coffeemaker, or Earth. A closed system exchanges energy, but not matter, with its environment; like a computer or the project Biosphere 2. An isolated system exchanges neither matter nor energy with its environment. A theoretical example of such a system is the Universe. 
Process and transformation process An open system can also be viewed as a bounded transformation process, that is, a black box that is a process or collection of processes that transform inputs into outputs. Inputs are consumed; outputs are produced. The concept of input and output here is very broad. For example, an output of a passenger ship is the movement of people from departure to destination. System model A system comprises multiple views. Human-made systems may have such views as concept, analysis, design, implementation, deployment, structure, behavior, input data, and output data views. A system model is required to describe and represent all these views. Systems architecture A systems architecture, using one single integrated model for the description of multiple views, is a kind of system model. Subsystem A subsystem is a set of elements, which is a system itself, and a component of a larger system. The IBM Mainframe Job Entry Subsystem family (JES1, JES2, JES3, and their HASP/ASP predecessors) are examples. The main elements they have in common are the components that handle input, scheduling, spooling and output; they also have the ability to interact with local and remote operators. A subsystem description is a system object that contains information defining the characteristics of an operating environment controlled by the system. The data tests are performed to verify the correctness of the individual subsystem configuration data (e.g. MA Length, Static Speed Profile, …) and they are related to a single subsystem in order to test its Specific Application (SA). Analysis There are many kinds of systems that can be analyzed both quantitatively and qualitatively. For example, in an analysis of urban systems dynamics, A . W. Steiss defined five intersecting systems, including the physical subsystem and behavioral system. For sociological models influenced by systems theory, Kenneth D. Bailey defined systems in terms of conceptual, concrete, and abstract systems, either isolated, closed, or open. Walter F. Buckley defined systems in sociology in terms of mechanical, organic, and process models. Bela H. Banathy cautioned that for any inquiry into a system understanding its kind is crucial, and defined natural and designed, i. e. artificial, systems. For example, natural systems include subatomic systems, living systems, the Solar System, galaxies, and the Universe, while artificial systems include man-made physical structures, hybrids of natural and artificial systems, and conceptual knowledge. The human elements of organization and functions are emphasized with their relevant abstract systems and representations. Artificial systems inherently have a major defect: they must be premised on one or more fundamental assumptions upon which additional knowledge is built. This is in strict alignment with Gödel's incompleteness theorems. The Artificial system can be defined as a "consistent formalized system which contains elementary arithmetic". These fundamental assumptions are not inherently deleterious, but they must by definition be assumed as true, and if they are actually false then the system is not as structurally integral as is assumed (i.e. it is evident that if the initial expression is false, then the artificial system is not a "consistent formalized system"). For example, in geometry this is very evident in the postulation of theorems and extrapolation of proofs from them. George J. 
Klir maintained that no "classification is complete and perfect for all purposes", and defined systems as abstract, real, and conceptual physical systems, bounded and unbounded systems, discrete to continuous, pulse to hybrid systems, etc. The interactions between systems and their environments are categorized as relatively closed and open systems. Important distinctions have also been made between hard systems—–technical in nature and amenable to methods such as systems engineering, operations research, and quantitative systems analysis—and soft systems that involve people and organizations, commonly associated with concepts developed by Peter Checkland and Brian Wilson through soft systems methodology (SSM) involving methods such as action research and emphasis of participatory designs. Where hard systems might be identified as more scientific, the distinction between them is often elusive. Economic system An economic system is a social institution which deals with the production, distribution and consumption of goods and services in a particular society. The economic system is composed of people, institutions and their relationships to resources, such as the convention of property. It addresses the problems of economics, like the allocation and scarcity of resources. The international sphere of interacting states is described and analyzed in systems terms by several international relations scholars, most notably in the neorealist school. This systems mode of international analysis has however been challenged by other schools of international relations thought, most notably the constructivist school, which argues that an over-large focus on systems and structures can obscure the role of individual agency in social interactions. Systems-based models of international relations also underlie the vision of the international sphere held by the liberal institutionalist school of thought, which places more emphasis on systems generated by rules and interaction governance, particularly economic governance. Information and computer science In computer science and information science, an information system is a hardware system, software system, or combination, which has components as its structure and observable inter-process communications as its behavior. There are systems of counting, as with Roman numerals, and various systems for filing papers, or catalogs, and various library systems, of which the Dewey Decimal Classification is an example. This still fits with the definition of components that are connected together (in this case to facilitate the flow of information). System can also refer to a framework, aka platform, be it software or hardware, designed to allow software programs to run. A flaw in a component or system can cause the component itself or an entire system to fail to perform its required function, e.g., an incorrect statement or data definition. Engineering and physics In engineering and physics, a physical system is the portion of the universe that is being studied (of which a thermodynamic system is one major example). Engineering also has the concept of a system referring to all of the parts and interactions between parts of a complex project. Systems engineering is the branch of engineering that studies how this type of system should be planned, designed, implemented, built, and maintained. Sociology, cognitive science and management research Social and cognitive sciences recognize systems in models of individual humans and in human societies. 
They include human brain functions and mental processes as well as normative ethics systems and social and cultural behavioral patterns. In management science, operations research and organizational development, human organizations are viewed as management systems of interacting components such as subsystems or system aggregates, which are carriers of numerous complex business processes (organizational behaviors) and organizational structures. Organizational development theorist Peter Senge developed the notion of organizations as systems in his book The Fifth Discipline. Organizational theorists such as Margaret Wheatley have also described the workings of organizational systems in new metaphoric contexts, such as quantum physics, chaos theory, and the self-organization of systems. Pure logic There is also such a thing as a logical system. An obvious example is the calculus developed independently by Leibniz and Isaac Newton. Another example is George Boole's Boolean operators. Other examples relate specifically to philosophy, biology, or cognitive science. Maslow's hierarchy of needs applies psychology to biology by using pure logic. Numerous psychologists, including Carl Jung and Sigmund Freud, developed systems that logically organize psychological domains, such as personalities, motivations, or intellect and desire. Strategic thinking In 1988, military strategist John A. Warden III introduced the Five Ring System model in his book, The Air Campaign, contending that any complex system could be broken down into five concentric rings. Each ring—leadership, processes, infrastructure, population and action units—could be used to isolate key elements of any system that needed change. The model was used effectively by Air Force planners in the First Gulf War. In the late 1990s, Warden applied his model to business strategy.
https://en.wikipedia.org/wiki/Helicopter
Helicopter
A helicopter is a type of rotorcraft in which lift and thrust are supplied by horizontally spinning rotors. This allows the helicopter to take off and land vertically, to hover, and to fly forward, backward and laterally. These attributes allow helicopters to be used in congested or isolated areas where fixed-wing aircraft and many forms of short take-off and landing (STOL) or short take-off and vertical landing (STOVL) aircraft cannot perform without a runway. In 1942, the Sikorsky R-4 became the first helicopter to reach full-scale production. Although most earlier designs used more than one main rotor, the configuration of a single main rotor accompanied by a vertical anti-torque tail rotor (i.e. unicopter, not to be confused with the single-blade monocopter) has become the most common helicopter configuration. However, twin-rotor helicopters (bicopters), in either tandem or transverse rotors configurations, are sometimes in use due to their greater payload capacity than the monorotor design, and coaxial-rotor, tiltrotor and compound helicopters are also all flying today. Four-rotor helicopters (quadcopters) were pioneered as early as 1907 in France, and along with other types of multicopters, have been developed mainly for specialized applications such as commercial unmanned aerial vehicles (drones) due to the rapid expansion of drone racing and aerial photography markets in the early 21st century, as well as recently weaponized utilities such as artillery spotting, aerial bombing and suicide attacks. Etymology The English word helicopter is adapted from the French word , coined by Gustave Ponton d'Amécourt in 1861, which originates from the Greek (), genitive helikos (ἕλῐκος), "helix, spiral, whirl, convolution" and () "wing". In a process of rebracketing, the word is often (erroneously, from an etymological point of view) perceived by English speakers as consisting of heli- and -copter, leading to words like helipad and quadcopter. English language nicknames for "helicopter" include "chopper", "copter", "heli", and "whirlybird". In the United States military, the common slang is "helo" pronounced /ˈhiː.loʊ/. Design A helicopter is a type of rotorcraft in which lift and thrust are supplied by one or more horizontally-spinning rotors. By contrast the autogyro (or gyroplane) and gyrodyne have a free-spinning rotor for all or part of the flight envelope, relying on a separate thrust system to propel the craft forwards, so that the airflow sets the rotor spinning to provide lift. The compound helicopter also has a separate thrust system, but continues to supply power to the rotor throughout normal flight. US Federal regulations state that "helicopter" means a rotorcraft that, for its horizontal motion, depends principally on its engine-driven rotors. Rotor system The rotor system, or more simply rotor, is the rotating part of a helicopter that generates lift. A rotor system may be mounted horizontally, as main rotors are, providing lift vertically, or it may be mounted vertically, such as a tail rotor, to provide horizontal thrust to counteract torque from the main rotors. The rotor consists of a mast, hub and rotor blades. The mast is a cylindrical metal shaft that extends upwards from the transmission. At the top of the mast is the attachment point for the rotor blades called the hub. Main rotor systems are classified according to how the rotor blades are attached and move relative to the hub. 
There are three basic types: hingeless, fully articulated, and teetering; although some modern rotor systems use a combination of these. Anti-torque Most helicopters have a single main rotor, but torque created by its aerodynamic drag must be countered by an opposed torque. The design that Igor Sikorsky settled on for his VS-300 was a smaller tail rotor. The tail rotor pushes or pulls against the tail to counter the torque effect, and this has become the most common configuration for helicopter design, usually at the end of a tail boom. Some helicopters use other anti-torque controls instead of the tail rotor, such as the ducted fan (called Fenestron or FANTAIL) and NOTAR. NOTAR provides anti-torque similar to the way a wing develops lift through the use of the Coandă effect on the tail boom. The use of two or more horizontal rotors turning in opposite directions is another configuration used to counteract the effects of torque on the aircraft without relying on an anti-torque tail rotor. This allows the power normally required to be diverted for the tail rotor to be applied fully to the main rotors, increasing the aircraft's power efficiency and lifting capacity. There are several common configurations that use the counter-rotating effect to benefit the rotorcraft: Tandem rotors are two counter-rotating rotors with one mounted behind the other. Transverse rotors are pair of counter-rotating rotors transversely mounted at the ends of fixed wings or outrigger structures. Now used on tiltrotors, some early model helicopters had used them. Coaxial rotors are two counter-rotating rotors mounted one above the other with the same axis. Intermeshing rotors are two counter-rotating rotors mounted close to each other at a sufficient angle to let the rotors intermesh over the top of the aircraft without colliding. An aircraft utilizing this is known as a synchropter. Multirotors make use of three or more rotors. Specific terms are also used depending on the exact amount of rotors, such as tricopter, quadcopter, hexacopter and octocopter for three rotors, four rotors, six rotors and eight rotors respectively, of which quadcopter is the most common. Multirotors are primarily used on drones and use on aircraft with a human pilot is rare. Tip jet designs let the rotor push itself through the air and avoid generating torque. Engines The number, size and type of engine(s) used on a helicopter determines the size, function and capability of that helicopter design. The earliest helicopter engines were simple mechanical devices, such as rubber bands or spindles, which relegated the size of helicopters to toys and small models. For a half century before the first airplane flight, steam engines were used to forward the development of the understanding of helicopter aerodynamics, but the limited power did not allow for manned flight. The introduction of the internal combustion engine at the end of the 19th century became the watershed for helicopter development as engines began to be developed and produced that were powerful enough to allow for helicopters able to lift humans. Early helicopter designs utilized custom-built engines or rotary engines designed for airplanes, but these were soon replaced by more powerful automobile engines and radial engines. The single, most-limiting factor of helicopter development during the first half of the 20th century was that the amount of power produced by an engine was not able to overcome the engine's weight in vertical flight. 
This was overcome in early successful helicopters by using the smallest engines available. When the compact, flat engine was developed, the helicopter industry found a lighter-weight powerplant easily adapted to small helicopters, although radial engines continued to be used for larger helicopters. Turbine engines revolutionized the aviation industry; the turboshaft engine for helicopter use, pioneered in December 1951 by the Kaman K-225, finally gave helicopters an engine with a large amount of power and a low weight penalty. Turboshafts are also more reliable than piston engines, especially when producing the sustained high levels of power required by a helicopter. The turboshaft engine was able to be scaled to the size of the helicopter being designed, so that all but the lightest of helicopter models are powered by turbine engines today. Special jet engines developed to drive the rotor from the rotor tips are referred to as tip jets. Tip jets powered by a remote compressor are referred to as cold tip jets, while those powered by combustion exhaust are referred to as hot tip jets. An example of a cold tip jet helicopter is the Sud-Ouest Djinn, and an example of a hot tip jet helicopter is the YH-32 Hornet. Some radio-controlled helicopters and smaller, helicopter-type unmanned aerial vehicles use electric motors or motorcycle engines. Radio-controlled helicopters may also have piston engines that use fuels other than gasoline, such as nitromethane. Some turbine engines commonly used in helicopters can also use biodiesel instead of jet fuel. There are also human-powered helicopters. Transmission The transmission is a mechanical system that transmits power from the engine(s) to the rotors. The transmission is a system of gears, bearings, clutches and shafts that performs several functions: (1) translating the alignment of the drive shaft to match the alignment of the rotor shafts; (2) reducing the RPM of the drive shaft to the lower RPMs of the rotors; and (3) enabling the engine to engage or disengage from the rotors. For helicopters with tail rotors, the transmission drivetrain forks into two paths: one leading to the main rotor, and one leading to the tail rotor (Bailey, Norman (2014). Helicopter Pilot's Manual. Crowood. ISBN 9781847979230). The drive shafts of helicopter engines are typically not aligned with the rotor shafts, so the transmission must translate the alignment of the drive shaft to match the shafts of the rotors. Many engine drive shafts are aligned horizontally, yet the main rotor shaft ("mast") is usually vertical, and the tail rotor shaft is often perpendicular to the engine's drive shaft. The transmission contains a series of gears, usually bevel gears, that translate the alignment of the drive shaft to the alignment of the rotor shafts. The transmission also reduces the RPMs of the engine to the lower RPMs required by the rotors. The output drive shaft of the engine, before any gearing is applied, is typically between 3,000 and 50,000 RPM (turbine engines typically have higher RPM than piston engines). The main rotor typically rotates between 300 and 600 RPM. The tail rotor, if present, usually rotates between 1,000 and 5,000 RPM. (The RPMs of a given model of helicopter are usually fixed; the RPM ranges listed above represent a variety of helicopter models.) The transmission contains a series of reduction gears to reduce the engine RPM to the rotor RPMs.
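The arithmetic behind these ratios is simple. The following is a minimal sketch using hypothetical figures picked from the RPM ranges quoted above; the values do not describe any particular helicopter, only how an overall reduction ratio for the main and tail rotor drives is obtained.

```python
# Illustrative helicopter transmission reduction ratios.
# The RPM values are assumed examples taken from the ranges in the text
# (engine 3,000-50,000 RPM, main rotor 300-600 RPM, tail rotor 1,000-5,000 RPM).

def reduction_ratio(input_rpm: float, output_rpm: float) -> float:
    """Overall gear reduction needed to turn input_rpm into output_rpm."""
    return input_rpm / output_rpm

engine_rpm = 6_000      # assumed output-shaft speed of a small turboshaft
main_rotor_rpm = 400    # assumed main rotor speed
tail_rotor_rpm = 2_000  # assumed tail rotor speed

print(f"Main rotor drive: {reduction_ratio(engine_rpm, main_rotor_rpm):.1f}:1 reduction")
print(f"Tail rotor drive: {reduction_ratio(engine_rpm, tail_rotor_rpm):.1f}:1 reduction")
```

In a real gearbox this overall ratio is split across several stages, as described next.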
Several types of reduction gears may be used, including bevel gears, planetary gears, helical gears, and spur gears. Most transmissions contain several reduction gears: the engine itself may contain reduction gears (often spur gears) between the engine's internal shaft and the output drive shaft; the main rotor may have a reduction gear at its base (typically a planetary gear); and there may be reduction gears at the tail rotor, and on the shaft leading to the tail rotor. The transmission often includes one or more clutches, which permit the rotors to engage or disengage from the engine. A clutch is required so the engine can start up and gain speed before taking the load of the rotors. A clutch is also required in the case of engine failure: in that situation, the rotors must disengage from the engine so that the rotors can continue spinning and perform autorotation. Helicopter clutches are usually freewheel clutches relying on centrifugal forces (sprag clutches are commonly used), but belt drive clutches are also used. Flight controls A helicopter has four flight control inputs. These are the cyclic, the collective, the anti-torque pedals, and the throttle. The cyclic control is usually located between the pilot's legs and is commonly called the cyclic stick or just cyclic. On most helicopters, the cyclic is similar to a joystick. However, the Robinson R22 and Robinson R44 have a unique teetering bar cyclic control system, and a few helicopters have a cyclic control that descends into the cockpit from overhead. The control is called the cyclic because it changes the cyclic pitch of the main blades. The result is to tilt the rotor disk in a particular direction, resulting in the helicopter moving in that direction. If the pilot pushes the cyclic forward, the rotor disk tilts forward, and the rotor produces a thrust in the forward direction. If the pilot pushes the cyclic to the side, the rotor disk tilts to that side and produces thrust in that direction, causing the helicopter to hover sideways. The collective pitch control or collective is located on the left side of the pilot's seat with a settable friction control to prevent inadvertent movement. The collective changes the pitch angle of all the main rotor blades collectively (i.e. all at the same time) and independently of their position. Therefore, if a collective input is made, all the blades change equally, and the result is the helicopter increasing or decreasing in altitude. A swashplate controls the collective and cyclic pitch of the main blades. The swashplate moves up and down, along the main shaft, to change the pitch of all the blades together. This causes the helicopter to push air downward or upward, depending on the angle of attack. The swashplate can also change its angle to tilt the blades' angle forwards or backwards, or left and right, to make the helicopter move in those directions. The anti-torque pedals are located in the same position as the rudder pedals in a fixed-wing aircraft, and serve a similar purpose, namely to control the direction in which the nose of the aircraft is pointed. Application of the pedal in a given direction changes the pitch of the tail rotor blades, increasing or reducing the thrust produced by the tail rotor and causing the nose to yaw in the direction of the applied pedal. The pedals mechanically change the pitch of the tail rotor, altering the amount of thrust produced. Helicopter rotors are designed to operate in a narrow range of RPM (Johnson, Pam. "Delta D2". Pacific Wings, p. 44).
The UH-60, for example, permits 95–101% rotor RPM (UH-60 limits, US Army Aviation). The throttle controls the power produced by the engine, which is connected to the rotor by a fixed ratio transmission. The purpose of the throttle is to maintain enough engine power to keep the rotor RPM within allowable limits so that the rotor produces enough lift for flight. In single-engine helicopters, the throttle control is a motorcycle-style twist grip mounted on the collective control, while dual-engine helicopters have a power lever for each engine. Compound helicopter A compound helicopter has an additional system for thrust and, typically, small stub fixed wings. This offloads the rotor in cruise, which allows its rotation to be slowed down, thus increasing the maximum speed of the aircraft. The Lockheed AH-56A Cheyenne diverted up to 90% of its engine power to a pusher propeller during forward flight. Flight There are three basic flight conditions for a helicopter: hover, forward flight and the transition between the two. Hover Hovering is the most challenging part of flying a helicopter. This is because a helicopter generates its own gusty air while in a hover, which acts against the fuselage and flight control surfaces. The result is constant control inputs and corrections by the pilot to keep the helicopter where it is required to be. Despite the complexity of the task, the control inputs in a hover are simple. The cyclic is used to eliminate drift in the horizontal plane, that is, to control forward and back, right and left. The collective is used to maintain altitude. The pedals are used to control nose direction or heading. It is the interaction of these controls that makes hovering so difficult, since an adjustment in any one control requires an adjustment of the other two, creating a cycle of constant correction. Transition from hover to forward flight As a helicopter moves from hover to forward flight it enters a state called translational lift which provides extra lift without increasing power. This state most typically occurs at a low forward airspeed, and may be necessary for a helicopter to obtain flight. Forward flight In forward flight a helicopter's flight controls behave more like those of a fixed-wing aircraft. Applying forward pressure on the cyclic will cause the nose to pitch down, with a resultant increase in airspeed and loss of altitude. Aft cyclic will cause the nose to pitch up, slowing the helicopter and causing it to climb. Increasing collective (power) while maintaining a constant airspeed will induce a climb, while decreasing collective will cause a descent. Coordinating these two inputs, down collective plus aft cyclic or up collective plus forward cyclic, will result in airspeed changes while maintaining a constant altitude. The pedals serve the same function in both a helicopter and a fixed-wing aircraft, to maintain balanced flight. This is done by applying a pedal input in whichever direction is necessary to center the ball in the turn and bank indicator. Uses Due to the operating characteristics of the helicopter—its ability to take off and land vertically, and to hover for extended periods of time, as well as the aircraft's handling properties under low airspeed conditions—it has proved advantageous to conduct tasks that were previously not possible with other aircraft, or were time- or work-intensive to accomplish on the ground.
Today, helicopter uses include transportation of people and cargo, military uses, construction, firefighting, search and rescue, tourism, medical transport, law enforcement, agriculture, news and media, and aerial observation, among others. A helicopter used to carry loads connected to long cables or slings is called an aerial crane. Aerial cranes are used to place heavy equipment, like radio transmission towers and large air conditioning units, on the tops of tall buildings, or when an item must be raised up in a remote area, such as a radio tower raised on the top of a hill or mountain. Helicopters are used as aerial cranes in the logging industry to lift trees out of terrain where vehicles cannot travel and where environmental concerns prohibit the building of roads. These operations are referred to as longline because of the long, single sling line used to carry the load. In military service helicopters are often useful for delivery of outsized slung loads that would not fit inside ordinary cargo aircraft: artillery pieces, large machinery (field radars, communications gear, electrical generators), or pallets of bulk cargo. In military operations these payloads are often delivered to remote locations made inaccessible by mountainous or riverine terrain, or naval vessels at sea. In electronic news gathering, helicopters have provided aerial views of some major news stories, and have been doing so, from the late 1960s. Helicopters have also been used in films, both in front and behind the camera. The largest single non-combat helicopter operation in history was the disaster management operation following the 1986 Chernobyl nuclear disaster. Hundreds of pilots were involved in airdrop and observation missions, making dozens of sorties a day for several months. "Helitack" is the use of helicopters to combat wildland fires. The helicopters are used for aerial firefighting (water bombing) and may be fitted with tanks or carry helibuckets. Helibuckets, such as the Bambi bucket, are usually filled by submerging the bucket into lakes, rivers, reservoirs, or portable tanks. Tanks fitted onto helicopters are filled from a hose while the helicopter is on the ground or water is siphoned from lakes or reservoirs through a hanging snorkel as the helicopter hovers over the water source. Helitack helicopters are also used to deliver firefighters, who rappel down to inaccessible areas, and to resupply firefighters. Common firefighting helicopters include variants of the Bell 205 and the Erickson S-64 Aircrane helitanker. Helicopters are used as air ambulances for emergency medical assistance in situations when an ambulance cannot easily or quickly reach the scene, or cannot transport the patient to a medical facility in time. Helicopters are also used when patients need to be transported between medical facilities and air transportation is the most practical method. An air ambulance helicopter is equipped to stabilize and provide limited medical treatment to a patient while in flight. The use of helicopters as air ambulances is often referred to as "MEDEVAC", and patients are referred to as being "airlifted", or "medevaced". This use was pioneered in the Korean War, when time to reach a medical facility was reduced to three hours from the eight hours needed in World War II, and further reduced to two hours by the Vietnam War. In naval service a prime function of rescue helicopters is to promptly retrieve downed aircrew involved in crashes occurring upon launch or recovery aboard aircraft carriers. 
In past years this function was performed by destroyers escorting the carrier, but helicopters have since proved vastly more effective. Police departments and other law enforcement agencies use helicopters to pursue suspects and patrol the skies. Since helicopters can achieve a unique aerial view, they are often used in conjunction with police on the ground to report on suspects' locations and movements. They are often mounted with lighting and heat-sensing equipment for night pursuits. Military forces use attack helicopters to conduct aerial attacks on ground targets. Such helicopters are mounted with missile launchers and miniguns. Transport helicopters are used to ferry troops and supplies where the lack of an airstrip would make transport via fixed-wing aircraft impossible. The use of transport helicopters to deliver troops as an attack force on an objective is referred to as "air assault". Unmanned aerial system (UAS) helicopters of varying sizes are developed by companies for military reconnaissance and surveillance duties. Naval forces also use helicopters equipped with dipping sonar for anti-submarine warfare, since they can operate from small ships. Oil companies charter helicopters to move workers and parts quickly to remote drilling sites located at sea or in remote locations. The speed advantage over boats makes the high operating cost of helicopters cost-effective in ensuring that oil platforms continue to operate. Various companies specialize in this type of operation. NASA developed Ingenuity, a helicopter used to survey Mars (along with a rover). It began service in February 2021 and was retired due to sustained rotor blade damage in January 2024 after 73 sorties. As the Martian atmosphere is 100 times thinner than Earth's, its blades spin at close to 3,000 revolutions per minute, approximately 10 times faster than those of a terrestrial helicopter. Market In 2017, 926 civil helicopters were shipped for $3.68 billion, led by Airbus Helicopters with $1.87 billion for 369 rotorcraft, Leonardo Helicopters with $806 million for 102 (first three quarters only), Bell Helicopter with $696 million for 132, then Robinson Helicopter with $161 million for 305. By October 2018, the in-service and stored helicopter fleet of 38,570 with civil or government operators was led by Robinson Helicopter with 24.7%, followed by Airbus Helicopters with 24.4%, then Bell with 20.5%, Leonardo with 8.4%, Russian Helicopters with 7.7%, Sikorsky Aircraft with 7.2%, MD Helicopters with 3.4% and others with 2.2%. The most widespread model is the piston-engined Robinson R44 with 5,600 units, then the H125/AS350 with 3,600 units, followed by the Bell 206 with 3,400. Most were in North America with 34.3%, then in Europe with 28.0%, followed by Asia-Pacific with 18.6%, Latin America with 11.6%, Africa with 5.3% and the Middle East with 1.7%. History Early design The earliest references for vertical flight came from China. Since around 400 BC, Chinese children have played with bamboo flying toys (or Chinese tops). This bamboo-copter is spun by rolling a stick attached to a rotor. The spinning creates lift, and the toy flies when released. The 4th-century AD Daoist book Baopuzi by Ge Hong ("Master who Embraces Simplicity") reportedly describes some of the ideas inherent to rotary wing aircraft. Designs similar to the Chinese helicopter toy appeared in some Renaissance paintings and other works. In the 18th and early 19th centuries Western scientists developed flying machines based on the Chinese toy.
It was not until the early 1480s, when Italian polymath Leonardo da Vinci created a design for a machine that could be described as an "aerial screw", that any recorded advancement was made towards vertical flight. His notes suggested that he built small flying models, but there were no indications for any provision to stop the rotor from making the craft rotate (Pilotfriend.com, "Leonardo da Vinci's Helical Air Screw"). As scientific knowledge increased and became more accepted, people continued to pursue the idea of vertical flight. In July 1754, the Russian Mikhail Lomonosov developed a small coaxial rotor modeled after the Chinese top but powered by a wound-up spring device, and demonstrated it to the Russian Academy of Sciences. It was suggested as a method to lift meteorological instruments. In 1783, Christian de Launoy and his mechanic, Bienvenu, used a coaxial version of the Chinese top in a model consisting of contrarotating turkey flight feathers as rotor blades, and in 1784 demonstrated it to the French Academy of Sciences. Sir George Cayley, influenced by a childhood fascination with the Chinese flying top, developed a model of feathers, similar to that of Launoy and Bienvenu, but powered by rubber bands. By the end of the century, he had progressed to using sheets of tin for rotor blades and springs for power. His writings on his experiments and models would become influential on future aviation pioneers. Alphonse Pénaud would later develop coaxial rotor model helicopter toys in 1870, also powered by rubber bands. One of these toys, given as a gift by their father, would inspire the Wright brothers to pursue the dream of flight. In 1861, the word "helicopter" was coined by Gustave de Ponton d'Amécourt, a French inventor who demonstrated a small steam-powered model. While celebrated as an innovative use of a new metal, aluminum, the model never lifted off the ground. D'Amécourt's linguistic contribution would survive to eventually describe the vertical flight he had envisioned. Steam power was popular with other inventors as well. In 1877, the Italian engineer, inventor and aeronautical pioneer Enrico Forlanini developed an unmanned helicopter powered by a steam engine. It rose to a height of , where it remained for 20 seconds, after a vertical take-off from a park in Milan. Milan has dedicated its city airport, also named Linate Airport, to Enrico Forlanini, as well as the nearby park, the Parco Forlanini. Emmanuel Dieuaide's steam-powered design featured counter-rotating rotors powered through a hose from a boiler on the ground. In 1887, the Parisian inventor Gustave Trouvé built and flew a tethered electric model helicopter. In July 1901, the maiden flight of Hermann Ganswindt's helicopter took place in Berlin-Schöneberg; this was probably the first heavier-than-air motor-driven flight carrying humans. A movie covering the event was taken by Max Skladanowsky, but it remains lost. In 1885, Thomas Edison was given US$1,000 by James Gordon Bennett, Jr., to conduct experiments towards developing flight. Edison built a helicopter and used the paper for a stock ticker to create guncotton, with which he attempted to power an internal combustion engine. The helicopter was damaged by explosions and one of his workers was badly burned. Edison reported that it would take a motor with a ratio of three to four pounds per horsepower produced to be successful, based on his experiments.
Ján Bahýľ, a Slovak inventor, adapted the internal combustion engine to power his helicopter model that reached a height of in 1901. On 5 May 1905, his helicopter reached in altitude and flew for over . In 1908, Edison patented his own design for a helicopter powered by a gasoline engine with box kites attached to a mast by cables for a rotor, but it never flew. First flights In 1906, two French brothers, Jacques and Louis Breguet, began experimenting with airfoils for helicopters. In 1907, those experiments resulted in the Gyroplane No.1, possibly as the earliest known example of a quadcopter. Although there is some uncertainty about the date, sometime between 14 August and 29 September 1907, the Gyroplane No. 1 lifted its pilot into the air about for a minute. The Gyroplane No.1 proved to be extremely unsteady and required a man at each corner of the airframe to hold it steady. For this reason, the flights of the Gyroplane No.1 are considered to be the first manned flight of a helicopter, but not a free or untethered flight. That same year, fellow French inventor Paul Cornu designed and built the Cornu helicopter which used two counter-rotating rotors driven by a Antoinette engine. On 13 November 1907, it lifted its inventor to and remained aloft for 20 seconds. Even though this flight did not surpass the flight of the Gyroplane No. 1, it was reported to be the first truly free flight with a pilot. Cornu's helicopter completed a few more flights and achieved a height of nearly , but it proved to be unstable and was abandoned. In 1909, J. Newton Williams of Derby, Connecticut, and Emile Berliner of Washington, D.C., flew a helicopter "on three occasions" at Berliner's lab in Washington's Brightwood neighborhood. In 1911, Slovenian philosopher and economist Ivan Slokar patented a helicopter configuration. The Danish inventor Jacob Ellehammer built the Ellehammer helicopter in 1912. It consisted of a frame equipped with two counter-rotating discs, each of which was fitted with six vanes around its circumference. After indoor tests, the aircraft was demonstrated outdoors and made several free take-offs. Experiments with the helicopter continued until September 1916, when it tipped over during take-off, destroying its rotors. During World War I, Austria-Hungary developed the PKZ, an experimental helicopter prototype, with two aircraft built. Early development In the early 1920s, Argentine Raúl Pateras-Pescara de Castelluccio, while working in Europe, demonstrated one of the first successful applications of cyclic pitch. Coaxial, contra-rotating, biplane rotors could be warped to cyclically increase and decrease the lift they produced. The rotor hub could also be tilted forward a few degrees, allowing the aircraft to move forward without a separate propeller to push or pull it. Pateras-Pescara was also able to demonstrate the principle of autorotation. By January 1924, Pescara's helicopter No.1 was tested but was found to be underpowered and could not lift its own weight. His 2F fared better and set a record. The British government funded further research by Pescara which resulted in helicopter No. 3, powered by a radial engine which could fly for up to ten minutes. In March 1923 Time magazine reported Thomas Edison sent George de Bothezat a congratulations for a successful helicopter test flight. Edison wrote, "So far as I know, you have produced the first successful helicopter." 
The helicopter was tested at McCook Field and remained airborne for 2 minutes and 45 seconds at a height of 15 feet. On 14 April 1924, Frenchman Étienne Oehmichen set the first helicopter world record recognized by the Fédération Aéronautique Internationale (FAI), flying his quadrotor helicopter. On 18 April 1924, Pescara beat Oehmichen's record, flying for a distance of in 4 minutes and 11 seconds, maintaining a height of . On 4 May, Oehmichen completed the first closed-circuit helicopter flight in 7 minutes 40 seconds with his No. 2 machine (Rainer K. L. Trummer, The JAviator Quadrotor, University of Salzburg, Austria, 2010, p. 21). In the US, George de Bothezat built the quadrotor de Bothezat helicopter for the United States Army Air Service, but the Army cancelled the program in 1924, and the aircraft was scrapped. Albert Gillis von Baumhauer, a Dutch aeronautical engineer, began studying rotorcraft design in 1923. His first prototype "flew" ("hopped" and hovered in reality) on 24 September 1925, with Dutch Army-Air arm Captain Floris Albert van Heijst at the controls. The controls that van Heijst used were von Baumhauer's inventions, the cyclic and collective (Alex de Voogt, "The Transmission of Helicopter Technology, 1920–1939: Exchanges with von Baumhauer", Int. J. for the History of Eng. & Tech., Vol. 83, No. 1, January 2013, pp. 119–40). Patents were granted to von Baumhauer for his cyclic and collective controls by the British ministry of aviation on 31 January 1927, under patent number 265,272. In 1927, Engelbert Zaschka from Germany built a helicopter equipped with two rotors, in which a gyroscope was used to increase stability and to serve as an energy accumulator for a gliding flight to make a landing. Zaschka's aircraft, reportedly the first helicopter to work so successfully in miniature, could not only rise and descend vertically but also remain stationary at any height. In 1928, Hungarian aviation engineer Oszkár Asbóth constructed a helicopter prototype that took off and landed at least 182 times, with a maximum single flight duration of 53 minutes. In 1930, the Italian engineer Corradino D'Ascanio built his D'AT3, a coaxial helicopter. His relatively large machine had two two-bladed, counter-rotating rotors. Control was achieved by using auxiliary wings or servo-tabs on the trailing edges of the blades, a concept that was later adopted by other helicopter designers, including Bleeker and Kaman. Three small propellers mounted to the airframe were used for additional pitch, roll, and yaw control. The D'AT3 held modest FAI speed and altitude records for the time, including altitude (18 m or 59 ft), duration (8 minutes 45 seconds) and distance flown (1,078 m or 3,540 ft) ("FAI Record ID #13086 – Straight distance. Class E former G (Helicopters), piston", Fédération Aéronautique Internationale). First practical rotorcraft Spanish aeronautical engineer and pilot Juan de la Cierva invented the autogyro in the early 1920s, which became the first practical rotorcraft. In 1928, de la Cierva successfully flew an autogyro across the English Channel, from London to Paris. In 1934, an autogyro became the first rotorcraft to successfully take off and land on the deck of a ship. That same year, the autogyro was employed by the Spanish military during the Asturias revolt, becoming the first military deployment of a rotorcraft.
Autogyros were also employed in New Jersey and Pennsylvania for delivering mail and newspapers prior to the invention of the helicopter. Though the autogyro lacks true vertical flight capability, work on it formed the basis for helicopter analysis. Single lift-rotor success In the Soviet Union, Boris N. Yuriev and Alexei M. Cheremukhin, two aeronautical engineers working at the Tsentralniy Aerogidrodinamicheskiy Institut (TsAGI, the Central Aerohydrodynamic Institute), constructed and flew the TsAGI 1-EA single lift-rotor helicopter, which used an open tubing framework, a four-blade main lift rotor, and twin sets of two-bladed anti-torque rotors: one set of two at the nose and one set of two at the tail. Powered by two M-2 powerplants, up-rated copies of the Gnome Monosoupape 9 Type B-2 100 CV output rotary engine of World War I, the TsAGI 1-EA made several low-altitude flights. By 14 August 1932, Cheremukhin managed to get the 1-EA up to an unofficial altitude of , shattering d'Ascanio's earlier achievement. As the Soviet Union was not yet a member of the FAI, however, Cheremukhin's record remained unrecognized. Nicolas Florine, a Russian engineer, built the first twin tandem rotor machine to perform a free flight. It flew in Sint-Genesius-Rode, at the Laboratoire Aérotechnique de Belgique (now the von Karman Institute), in April 1933, and attained an altitude of and an endurance of eight minutes. Florine chose a co-rotating configuration because the gyroscopic stability of the rotors would not cancel. Therefore, the rotors had to be tilted slightly in opposite directions to counter torque. Using hingeless rotors and co-rotation also minimised the stress on the hull. At the time, it was one of the most stable helicopters in existence. The Bréguet-Dorand Gyroplane Laboratoire was built in 1933. It was a coaxial, contra-rotating helicopter. After many ground tests and an accident, it first took flight on 26 June 1935. Within a short time, the aircraft was setting records with pilot Maurice Claisse at the controls. On 14 December 1935, he set a record for closed-circuit flight. The next year, on 26 September 1936, Claisse set a height record. And, finally, on 24 November 1936, he set a flight duration record of one hour, two minutes and 50 seconds over a closed circuit at 44.7 kilometres per hour (27.8 mph). The aircraft was destroyed in 1943 by an Allied airstrike at Villacoublay airport. American single-rotor beginnings American inventor Arthur M. Young started work on model helicopters in 1928, using converted electric hover motors to drive the rotor head. Young invented the stabilizer bar and patented it shortly after. A mutual friend introduced Young to Lawrence Dale, who, upon seeing his work, asked him to join the Bell Aircraft company. When Young arrived at Bell in 1941, he signed his patent over and began work on the helicopter. His budget was US$250,000 to build two working helicopters. In just six months they completed the first Bell Model 1, which spawned the Bell Model 30, later succeeded by the Bell 47. Birth of an industry Heinrich Focke at Focke-Wulf had purchased a license from the Cierva Autogiro Company, which, according to Frank Kingston Smith Sr., included "the fully controllable cyclic/collective pitch hub system". In return, Cierva Autogiro received a cross-license to build the Focke-Achgelis helicopters.
Focke designed the world's first practical helicopter, the transverse twin-rotor Focke-Wulf Fw 61, which first flew in June 1936. It was demonstrated by Hanna Reitsch in February 1938 inside the Deutschlandhalle in Berlin. The Fw 61 set a number of FAI records from 1937 to 1939, including records for maximum altitude, maximum distance, and maximum speed. Autogiro development was now being bypassed by a focus on helicopters. During World War II, Nazi Germany used helicopters in small numbers for observation, transport, and medical evacuation. The Flettner Fl 282 Kolibri synchropter—using the same basic configuration as Anton Flettner's own pioneering Fl 265—was used in the Baltic, Mediterranean, and Aegean Seas. The Focke-Achgelis Fa 223 Drache, like the Fw 61, used two transverse rotors, and was the largest rotorcraft of the war. Extensive bombing by the Allied forces prevented Germany from producing helicopters in large quantities during the war. In the United States, Russian-born engineer Igor Sikorsky and Wynn Laurence LePage competed to produce the U.S. military's first helicopter. LePage received the patent rights to develop helicopters patterned after the Fw 61, and built the XR-1 in 1941. Meanwhile, Sikorsky settled on a simpler, single-rotor design, the VS-300 of 1939, which turned out to be the first practical single lifting-rotor helicopter design. After experimenting with configurations to counteract the torque produced by the single main rotor, Sikorsky settled on a single, smaller rotor mounted on the tail boom. Developed from the VS-300, Sikorsky's R-4 of 1942 was the first large-scale mass-produced helicopter, with a production order for 100 aircraft. The R-4 was the only Allied helicopter to serve in World War II, used primarily for search and rescue (by the USAAF 1st Air Commando Group) in the Burma campaign; in Alaska; and in other areas with harsh terrain. Total production reached 131 helicopters before the R-4 was replaced by other Sikorsky helicopters such as the R-5 and the R-6. In all, Sikorsky produced over 400 helicopters before the end of World War II. While LePage and Sikorsky built their helicopters for the military, Bell Aircraft hired Arthur Young to help build a helicopter using Young's two-blade teetering rotor design, which used a weighted stabilizer bar placed at a 90° angle to the rotor blades. The subsequent Model 30 helicopter of 1943 showed the design's simplicity and ease of use. The Model 30 was developed into the Bell 47 of 1945, which became the first helicopter certified for civilian use in the United States (March 1946). Produced in several countries, the Bell 47 was the most popular helicopter model for nearly 30 years. Turbine age In 1951, at the urging of his contacts at the Department of the Navy, Charles Kaman modified his K-225 synchropter—a design for a twin-rotor helicopter concept first pioneered by Anton Flettner in 1939, with the aforementioned Fl 265 piston-engined design in Germany—with a new kind of engine, the turboshaft engine. This adaptation of the turbine engine provided a large amount of power to Kaman's helicopter with a lower weight penalty than piston engines, with their heavy engine blocks and auxiliary components. On 11 December 1951, the Kaman K-225 became the first turbine-powered helicopter in the world. Two years later, on 26 March 1954, a modified Navy HTK-1, another Kaman helicopter, became the first twin-turbine helicopter to fly.
However, it was the Sud Aviation Alouette II that would become the first helicopter to be produced with a turbine-engine. Reliable helicopters capable of stable hover flight were developed decades after fixed-wing aircraft. This is largely due to higher engine power density requirements than fixed-wing aircraft. Improvements in fuels and engines during the first half of the 20th century were a critical factor in helicopter development. The availability of lightweight turboshaft engines in the second half of the 20th century led to the development of larger, faster, and higher-performance helicopters. While smaller and less expensive helicopters still use piston engines, turboshaft engines are the preferred powerplant for helicopters today. Safety Maximum speed limit There are several reasons a helicopter cannot fly as fast as a fixed-wing aircraft. When the helicopter is hovering, the outer tips of the rotor travel at a speed determined by the length of the blade and the rotational speed. In a moving helicopter, however, the speed of the blades relative to the air depends on the speed of the helicopter as well as on their rotational speed. The airspeed of the advancing rotor blade is much higher than that of the helicopter itself. It is possible for this blade to exceed the speed of sound, and thus produce vastly increased drag and vibration. At the same time, the advancing blade creates more lift traveling forward, the retreating blade produces less lift. If the aircraft were to accelerate to the air speed that the blade tips are spinning, the retreating blade passes through air moving at the same speed of the blade and produces no lift at all, resulting in very high torque stresses on the central shaft that can tip down the retreating-blade side of the vehicle, and cause a loss of control. Dual counter-rotating blades prevent this situation due to having two advancing and two retreating blades with balanced forces. Because the advancing blade has higher airspeed than the retreating blade and generates a dissymmetry of lift, rotor blades are designed to "flap" – lift and twist in such a way that the advancing blade flaps up and develops a smaller angle of attack. Conversely, the retreating blade flaps down, develops a higher angle of attack, and generates more lift. At high speeds, the force on the rotors is such that they "flap" excessively, and the retreating blade can reach too high an angle and stall. For this reason, the maximum safe forward airspeed of a helicopter is given a design rating called VNE, velocity, never exceed. In addition, it is possible for the helicopter to fly at an airspeed where an excessive amount of the retreating blade stalls, which results in high vibration, pitch-up, and roll into the retreating blade. Noise At the end of the 20th century, designers began working on helicopter noise reduction. Urban communities have often expressed great dislike of noisy aviation or noisy aircraft, and police and passenger helicopters can be unpopular because of the sound. The redesigns followed the closure of some city heliports and government action to constrain flight paths in national parks and other places of natural beauty. Vibration To reduce vibration, all helicopters have rotor adjustments for height and weight. A maladjusted helicopter can easily vibrate so much that it will shake itself apart. Blade height is adjusted by changing the pitch of the blade. Weight is adjusted by adding or removing weights on the rotor head and/or at the blade end caps. 
Most also have vibration dampers for height and pitch. Some also use mechanical feedback systems to sense and counter vibration. Usually the feedback system uses a mass as a "stable reference" and a linkage from the mass operates a flap to adjust the rotor's angle of attack to counter the vibration. Adjustment can be difficult in part because measurement of the vibration is hard, usually requiring sophisticated accelerometers mounted throughout the airframe and gearboxes. The most common blade vibration adjustment measurement system is to use a stroboscopic flash lamp, and observe painted markings or coloured reflectors on the underside of the rotor blades. The traditional low-tech system is to mount coloured chalk on the rotor tips, and see how they mark a linen sheet. Health and Usage Monitoring Systems (HUMS) provide vibration monitoring and rotor track and balance solutions to limit vibration. Gearbox vibration most often requires a gearbox overhaul or replacement. Gearbox or drive train vibrations can be extremely harmful to a pilot. The most severe effects are pain, numbness, and loss of tactile discrimination or dexterity. Loss of tail-rotor effectiveness For a standard helicopter with a single main rotor, the tips of the main rotor blades produce a vortex ring in the air, which is a spiraling and circularly rotating airflow. As the craft moves forward, these vortices trail off behind the craft. When hovering with a forward diagonal crosswind, or moving in a forward diagonal direction, the spinning vortices trailing off the main rotor blades will align with the rotation of the tail rotor and cause an instability in flight control. When the trailing vortices colliding with the tail rotor are rotating in the same direction, this causes a loss of thrust from the tail rotor. When the trailing vortices rotate in the opposite direction of the tail rotor, thrust is increased. Use of the foot pedals is required to adjust the tail rotor's angle of attack, to compensate for these instabilities. These issues are due to the exposed tail rotor cutting through open air around the rear of the vehicle. This issue disappears when the tail is instead ducted, using an internal impeller enclosed in the tail and a jet of high pressure air sideways out of the tail, as the main rotor vortices can not impact the operation of an internal impeller. Critical wind azimuth For a standard helicopter with a single main rotor, maintaining steady flight with a crosswind presents an additional flight control problem, where strong crosswinds from certain angles will increase or decrease lift from the main rotors. This effect is also triggered in a no-wind condition when moving the craft diagonally in various directions, depending on the direction of main rotor rotation. This can lead to a loss of control and a crash or hard landing when operating at low altitudes, due to the sudden unexpected loss of lift, and insufficient time and distance available to recover. Transmission Conventional rotary-wing aircraft use a set of complex mechanical gearboxes to convert the high rotation speed of gas turbines into the low speed required to drive main and tail rotors. Unlike powerplants, mechanical gearboxes cannot be duplicated (for redundancy) and have always been a major weak point in helicopter reliability. In-flight catastrophic gear failures often result in gearbox jamming and subsequent fatalities, whereas loss of lubrication can trigger onboard fire. 
Another weakness of mechanical gearboxes is their transient power limitation, due to structural fatigue limits. Recent EASA studies point to engines and transmissions as the leading cause of crashes after pilot error. By contrast, electromagnetic transmissions do not use any parts in contact; hence lubrication can be drastically simplified, or eliminated. Their inherent redundancy offers good resilience to single points of failure. The absence of gears enables high power transients without impact on service life. The concept of electric propulsion applied to helicopters, together with electromagnetic drive, was brought to reality by Pascal Chretien, who designed, built and flew the world's first man-carrying, free-flying electric helicopter. The concept was taken from the conceptual computer-aided design model on 10 September 2010 to the first testing at 30% power on 1 March 2011 – less than six months. The aircraft first flew on 12 August 2011. All development was conducted in Venelles, France. Hazards As with any moving vehicle, unsafe operation could result in loss of control, structural damage, or loss of life. The following is a list of some of the potential hazards for helicopters: Settling with power is when the aircraft has insufficient power to arrest its descent. This hazard can develop into vortex ring state if not corrected early. Vortex ring state is a hazard induced by a combination of low airspeed, high power setting, and high descent rate. Rotor-tip vortices circulate from the high pressure air below the rotor disk to the low pressure air above the disk, so that the helicopter settles into its own descending airflow. Adding more power increases the rate of air circulation and aggravates the situation. It is sometimes confused with settling with power, but they are aerodynamically different. Retreating blade stall is experienced during high speed flight and is the most common limiting factor of a helicopter's forward speed (a numeric sketch of the blade airspeeds involved follows this list). Ground resonance is a self-reinforcing vibration that occurs when the lead/lag spacing of the blades of an articulated rotor system becomes irregular. Low-G condition is an abrupt change from a positive G-force state to a negative G-force state that results in loss of lift (unloaded disc) and subsequent roll over. If aft cyclic is applied while the disc is unloaded, the main rotor could strike the tail, causing catastrophic failure. Dynamic rollover is when the helicopter pivots around one of the skids and 'pulls' itself onto its side (almost like a fixed-wing aircraft ground loop). Powertrain failures, especially those that occur within the shaded area of the height-velocity diagram. Tail rotor failures occur from either a mechanical malfunction of the tail rotor control system or a loss of tail rotor thrust authority, called "loss of tail-rotor effectiveness" (LTE). Brownout in dusty conditions or whiteout in snowy conditions. Low rotor RPM is when the engine cannot drive the blades at sufficient RPM to maintain flight. Rotor overspeed, which can over-stress the rotor hub pitch bearings (brinelling) and, if severe enough, cause blade separation from the aircraft. Wire and tree strikes due to low altitude operations and take-offs and landings in remote locations. Controlled flight into terrain, in which the aircraft is flown into the ground unintentionally due to a lack of situational awareness. Mast bumping in some helicopters.
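The tip-speed arithmetic behind the maximum speed limit and the retreating blade stall hazard can be illustrated with a short sketch. The rotor radius, rotor speed, and forward speed below are assumed, illustrative values drawn from the typical ranges quoted earlier in the article, not figures for any particular helicopter.

```python
# Illustration of dissymmetry of lift: the airspeed seen by the advancing
# and retreating blade tips of a single main rotor in forward flight.
# All input values are hypothetical, for illustration only.
import math

rotor_radius_m = 5.0      # assumed blade length (m)
rotor_rpm = 400.0         # assumed rotor speed, within the 300-600 RPM range
forward_speed_ms = 70.0   # assumed forward flight speed (about 136 knots)

omega = rotor_rpm * 2.0 * math.pi / 60.0  # rotor angular speed, rad/s
tip_speed = omega * rotor_radius_m        # blade tip speed in hover, m/s

advancing_tip = tip_speed + forward_speed_ms   # tip moving into the oncoming air
retreating_tip = tip_speed - forward_speed_ms  # tip moving away from the oncoming air

print(f"Hover tip speed:      {tip_speed:6.1f} m/s")
print(f"Advancing blade tip:  {advancing_tip:6.1f} m/s")
print(f"Retreating blade tip: {retreating_tip:6.1f} m/s")
# The advancing tip approaches the speed of sound (~340 m/s at sea level) long
# before the retreating tip, whose falling airspeed drives retreating blade stall.
```

With these assumed numbers the advancing tip meets roughly 279 m/s of airflow while the retreating tip meets only about 139 m/s, which is why a never-exceed speed (VNE) is published for the airframe rather than being set by engine power alone.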
https://en.wikipedia.org/wiki/Methanobacterium
Methanobacterium
Methanobacterium is a genus of the class Methanobacteria in the domain Archaea whose members produce methane as a metabolic byproduct. Despite the name, this genus belongs not to the bacterial domain but to the archaeal domain (for instance, its members lack peptidoglycan in their cell walls). Methanobacterium are nonmotile and live without oxygen, which is toxic to them; they only inhabit anoxic environments. A trait shared by all methanogens is their ability to recycle products: they can use the products of metabolic activities occurring during methanogenesis as substrates for the formation of methane. Methanobacterium species typically thrive in environments with optimal growth temperatures ranging from 28 to 40 °C, and across versatile ecological ranges. They remain relatively little studied, but methanogens are thought to be some of Earth's earliest life forms. They do not create endospores when nutrients are limited. They are ubiquitous in some hot, low-oxygen environments, such as anaerobic digesters, wastewater, and hot springs. Discovery In 1776, Alessandro Volta discovered that gas bubbles coming from a freshwater swamp were flammable. This finding led him to believe that methane gas could be produced by living organisms; however, he thought that this methane was coming from decomposing organic matter. In 1933, methanogens were first cultured, revealing that this methane was coming from living organisms. Diversity and taxonomy Methanobacterium is one genus among the methanogens. The evolutionary history of Methanobacterium is still relatively unknown, but methanogens are thought to be some of Earth's earliest life forms, with origins dating back over 3.4 billion years. Methanogens, including Methanobacterium species, belong to the archaeal domain, characterized by unique features such as unconventional 16S rRNA sequences, distinct lipid structures, and novel cell wall compositions. These organisms are prevalent in extreme environments but are also found in more moderate habitats, exhibiting a wide range of growth temperatures from psychrotrophic to hyperthermophilic, and varying salinity preferences from freshwater to saturated brine. Despite their taxonomic placement within archaea, methanogens display diverse cellular envelopes, which can consist of protein surface layers (S-layers), glycosylated S-layer proteins, additional polymers like methanochondroitin, or pseudomurein in Gram-positive staining species. Methanogens are unique among archaea in their adaptability to a broad spectrum of environmental conditions, with a preference for neutral to moderately alkaline pH values. Taxonomically, methanogens are classified into 25 genera, distributed across 12 families and five orders, highlighting the substantial phenotypic and genotypic diversity within this group. This taxonomic diversity suggests that methanogenesis, the metabolic pathway through which methanogens produce methane, is an ancient and widespread trait. The monophyletic nature of modern methanogens indicates that methanogenesis likely evolved only once, with all contemporary methanogens sharing a common ancestor. Recent taxonomic schemes reflect the rich diversity and evolutionary history of methanogens, underscoring their importance in anaerobic microbial ecosystems and their intriguing adaptation to diverse environmental niches. Each species of Methanobacterium is capable of the syntrophic process of methane production, with a majority of the species being hydrogenotrophic.
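As background for the substrate discussion that follows, the hydrogenotrophic pathway reduces carbon dioxide with hydrogen; a textbook summary of the overall stoichiometry (not specific to any one Methanobacterium species) is:

$$\mathrm{CO_2} + 4\,\mathrm{H_2} \longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}$$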
The species differ in the substrates they can use for methane production; the corresponding pathways can be hydrogenotrophic, methylotrophic, or acetoclastic. Species There are many different species of Methanobacterium with officially recognized names. A few are listed and described below: Methanobacterium formicicum is an archaeon found in the rumen of cattle, buffalo, sheep, goats and other animals. Microbes in the gut degrade nutrients from feed (polysaccharides, proteins, and fats) into organic molecules, which are later turned into methane by Methanobacterium such as Methanobacterium formicicum. Methanobacterium formicicum can be found in the human gut as well as in animals and can cause gastrointestinal and metabolic disorders in both humans and animals. Methanobacterium oryzae was isolated from rice field soil in the Philippines. Methanobacterium species that thrive in rice fields, such as Methanobacterium oryzae, often use hydrogen and acetate as their main energy sources. This species, as well as other Methanobacterium species found in rice field soils around the world, is a major source of methane, a dominant greenhouse gas. Methanobacterium palustre thrives in marshland areas and was first found in a peat bog. Methanobacterium arcticum was isolated from permafrost sediments in the Russian Arctic. This species of Methanobacterium uses only hydrogen, carbon dioxide, and formate as fuel. Unlike some other Methanobacteria, it does not use acetate to grow. Methanobacterium thermoautotrophicum Marburg can undergo natural genetic transformation, the transfer of DNA from one cell to another. Genetic transformation in archaeal species generally appears to be an adaptation for repairing DNA damage in a cell by utilizing intact DNA information derived from another cell. Methanobacterium thermaggregans was identified from fed-batch fermentation. M. thermaggregans is alkaliphilic and thermophilic. This characterization was based on findings that M. thermaggregans can tolerate increased agitation speeds, which are used to increase methane formation. Genome The genomes of seven different Methanobacterium and Methanobrevibacter strains have been sequenced. One Methanobacterium strain, BRM9, has a genome comprising approximately 1,350 sequences, of which about 190 are specific to BRM9 and are correlated with proteins or prophage. The sequenced strains include mesophilic methanogens from various anaerobic environments, but only a small number of the genes are characteristic of methanogens in the rumen. The genes used for central metabolism and for the pseudomurein cell wall suggest that the species could be inhibited by small-molecule inhibitors or vaccines, approaches being explored as methane mitigation tools targeting the genes found in rumen methanogens. Methanobacterium plays a role in both waste and wastewater treatment processes due to its ability to degrade organic substances. Methanobacterium are normally isolated from natural oxygen-deficient environments such as freshwater, marine sediments, wet soils, the rumen, and the intestines of animals, humans, and insects. Molecular analyses of the 16S rRNA and mcrA genes, the latter encoding the alpha subunit of methyl-coenzyme M reductase, show that additional unidentified methanogens exist in other ecosystems. Morphology Methanobacterium are generally bacillus-shaped microbes. 
Because there are many different species in the Methanobacterium genus, there is a variety of shapes, sizes, and arrangements these microbes can possess. These rod-shaped microbes can be curved, straight, or crooked. They can also range in size, from short to long, and can be found individually, in pairs, or in chains. Some Methanobacterium species can even be found in large clusters or aggregates which consist of long intertwined chains of individual microbes. Many strains of Methanobacterium have been isolated and studied in depth. One particular strain that has been isolated and studied is Methanobacterium thermoautotrophicum. This work revealed the presence of intracytoplasmic membranes, an internal membrane system consisting of three membranes stacked on top of each other without cytoplasm separating them. Methanobacterium palustre is another strain that further confirms a defining characteristic of Methanobacterium: a Gram-positive-staining cell wall lacking a peptidoglycan layer outside of its cytoplasmic membrane. The cell wall of the family Methanobacteriaceae consists of pseudomurein, a polymer with a carbohydrate backbone cross-linked by peptides, which differs from peptidoglycan in the nature of its peptide bonds and sugar types. Physiology Methanobacterium are strict anaerobes, meaning they cannot survive in the presence of oxygen. Most species belonging to this genus are also autotrophs, which create organic compounds from inorganic materials such as carbon dioxide. Methanobacterium can be classified as hydrogenotrophic methanogens. Hydrogenotrophic methanogens use hydrogen, carbon dioxide, formate, and alcohols to synthesize methane; the overall reactions for the most common substrates are sketched after this section. These substrates are also important for the growth and maintenance of Methanobacterium. Methanogenesis is a vital part of the carbon cycle as it performs the conversion of organic carbon into methane gas. This part of the carbon cycle is referred to as the methanogenesis cycle. It is a process involving three different kinds of carbon dioxide reduction, which ultimately lead to the production of methane. However, within each separate pathway, there are intermediary products that are used as substrates in some other part of the cycle. This interconnectedness of products and substrates is described by the term syntrophic. The cycling substrates can be arranged into three groups based on whether the autotrophic carbon dioxide (CO2) reduction proceeds with hydrogen gas (H2), formate (CH2O2), or secondary alcohols. Some members of this genus can use formate to produce methane; others live exclusively through the reduction of carbon dioxide with hydrogen. Optimal growth temperature Methanobacterium species typically thrive in environments with optimal growth temperatures ranging from 28 to 40 °C. Methanobacteria are widely distributed in geothermal settings like hot springs and hydrothermal vents. This mesophilic temperature range indicates that Methanobacterium organisms are adapted to moderate environmental conditions, neither extremely hot nor cold. This temperature preference allows them to inhabit a variety of anaerobic environments, including soil, sediments, and animal digestive tracts, where conditions often fall within this mesophilic range. Within these habitats, Methanobacterium species contribute to methane production through their hydrogenotrophic metabolism, utilizing hydrogen and carbon dioxide as metabolic substrates. 
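As a point of reference for the hydrogenotrophic metabolism described above, the commonly cited overall stoichiometries for methane formation from carbon dioxide plus hydrogen and from formate can be written as follows. This is a standard textbook summary of the net reactions, not a pathway-level description of any particular Methanobacterium species.

```latex
% Reduction of carbon dioxide with hydrogen (hydrogenotrophic methanogenesis)
\mathrm{CO_2 + 4\,H_2 \;\longrightarrow\; CH_4 + 2\,H_2O}

% Methanogenesis from formate
\mathrm{4\,HCOOH \;\longrightarrow\; CH_4 + 3\,CO_2 + 2\,H_2O}
```

Both equations balance and are consistent with the statement above that hydrogen, carbon dioxide, and formate serve as methanogenic substrates: four molecules of hydrogen supply the eight electrons needed to reduce one molecule of carbon dioxide to methane.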
Habitat Methanobacterium species inhabit various anaerobic environments, showcasing a versatile ecological range. They can be found in diverse habitats such as soil, wetlands, sediment layers, sewage treatment plants, and the gastrointestinal tracts of animals. Within these environments, Methanobacterium species play crucial roles in anaerobic microbial ecosystems, contributing to processes like organic matter decomposition via methane production through the methanogenesis pathway. In the human gut Methanobacterium is found in the human colon. It is involved in managing the amount of calories that are consumed, by influencing the process of bacterial breakdown. Two specific groups have been isolated and cultured from the human intestines. Methanogens have also been discovered in colostrum and breast milk from mothers who are healthy and lactating; this was established using quantitative polymerase chain reaction (qPCR), culture, and amplicon sequencing. A species called M. smithii is found in the human intestines. M. smithii is able to incorporate glycans present in the intestines, which it uses in regulating protein expression. An increase in methane concentration in human residue is correlated with BMI. Methanogens remove hydrogen that remains in the gut; hydrogen accumulation in the intestines can otherwise reduce the productivity of microbial activities. Methanogens can also be used as probiotics. This is possible since methanogens are capable of using trimethylamine as a substrate for methanogenesis. Trimethylamine is produced in the human intestines by intestinal bacteria, and an increase of trimethylamine may cause cardiovascular disease. These methanogens are able to utilize hydrogen to decrease trimethylamine while growing in the intestines. Phylogeny The currently accepted taxonomy is based on the List of Prokaryotic names with Standing in Nomenclature and the National Center for Biotechnology Information. Unassigned species: "M. cahuitense" Dengler et al. 2023 "M. curvum" Sun, Zhou & Dong 2001 "M. propionicum" Stadtman & Barker 1951 "M. soehngenii" Barker 1936 "M. suboxydans" Stadtman & Barker 1951 M. thermaggregans M. uliginosum König 1985
Biology and health sciences
Archaea
Plants
12941686
https://en.wikipedia.org/wiki/Medical%20laboratory
Medical laboratory
A medical laboratory or clinical laboratory is a laboratory where tests are conducted out on clinical specimens to obtain information about the health of a patient to aid in diagnosis, treatment, and prevention of disease. Clinical medical laboratories are an example of applied science, as opposed to research laboratories that focus on basic science, such as found in some academic institutions. Medical laboratories vary in size and complexity and so offer a variety of testing services. More comprehensive services can be found in acute-care hospitals and medical centers, where 70% of clinical decisions are based on laboratory testing. Doctors offices and clinics, as well as skilled nursing and long-term care facilities, may have laboratories that provide more basic testing services. Commercial medical laboratories operate as independent businesses and provide testing that is otherwise not provided in other settings due to low test volume or complexity. Departments In hospitals and other patient-care settings, laboratory medicine is provided by the Department of Pathology and Medical Laboratory, and generally divided into two sections, each of which will be subdivided into multiple specialty areas. The two sections are: Anatomic pathology: areas included here are histopathology, cytopathology, electron microscopy, and gross pathology. Medical Laboratory, which typically includes the following areas: Clinical microbiology: This encompasses several different sciences, including bacteriology, virology, parasitology, immunology, and mycology. Clinical chemistry: This area typically includes automated analysis of blood specimens, including tests related to enzymology, toxicology and endocrinology. Hematology: This area includes automated and manual analysis of blood cells. It also often includes coagulation. Blood bank involves the testing of blood specimens in order to provide blood transfusion and related services. Molecular diagnostics DNA testing may be done here, along with a subspecialty known as cytogenetics. Reproductive biology testing is available in some laboratories, including Semen analysis, Sperm bank and assisted reproductive technology. Layouts of clinical laboratories in health institutions vary greatly from one facility to another. For instance, some health facilities have a single laboratory for the microbiology section, while others have a separate lab for each specialty area. The following is an example of a typical breakdown of the responsibilities of each area: Microbiology includes culturing of the bacteria in clinical specimens, such as feces, urine, blood, sputum, cerebrospinal fluid, and synovial fluid, as well as possible infected tissue. The work here is mainly concerned with cultures, to look for suspected pathogens which, if found, are further identified based on biochemical tests. Also, sensitivity testing is carried out to determine whether the pathogen is sensitive or resistant to a suggested medicine. Results are reported with the identified organism(s) and the type and amount of drug(s) that should be prescribed for the patient. Parasitology is where specimens are examined for parasites. For example, fecal samples may be examined for evidence of intestinal parasites such as tapeworms or hookworms. Virology is concerned with identification of viruses in specimens such as blood, urine, and cerebrospinal fluid. Hematology analyzes whole blood specimens to perform full blood counts, and includes the examination of blood films. 
Other specialized tests include cell counts on various bodily fluids. Coagulation testing determines various blood clotting times, coagulation factors, and platelet function. Clinical biochemistry commonly performs dozens of different tests on serum or plasma. These tests, mostly automated, includes quantitative testing for a wide array of substances, such as lipids, blood sugar, enzymes, and hormones. Toxicology is mainly focused on testing for pharmaceutical and recreational drugs. Urine and blood samples are the common specimens. Immunology/Serology uses the process of antigen-antibody interaction as a diagnostic tool. Compatibility of transplanted organs may also be determined with these methods. Immunohematology, or blood bank determines blood groups, and performs compatibility testing on donor blood and recipients. It also prepares blood components, derivatives, and products for transfusion. This area determines a patient's blood type and Rh status, checks for antibodies to common antigens found on red blood cells, and cross matches units that are negative for the antigen. Urinalysis tests urine for many analytes, including microscopically. If more precise quantification of urine chemicals is required, the specimen is processed in the clinical biochemistry lab. Histopathology processes solid tissue removed from the body (biopsies) for evaluation at the microscopic level. Cytopathology examines smears of cells from all over the body (such as from the cervix) for evidence of inflammation, cancer, and other conditions. Molecular diagnostics includes specialized tests involving DNA and RNA analysis. Cytogenetics involves using blood and other cells to produce a DNA karyotype. This can be helpful in cases of prenatal diagnosis (e.g. Down's syndrome) as well as in some cancers which can be identified by the presence of abnormal chromosomes. Surgical pathology examines organs, limbs, tumors, fetuses, and other tissues biopsied in surgery such as breast mastectomies. Medical laboratory staff The staff of clinical laboratories may include: Pathologist Clinical biochemist Laboratory assistant (LA) Laboratory manager Biomedical scientist (BMS) in the UK, Medical laboratory scientist (MT, MLS or CLS) in the US or Medical laboratory technologist in Canada Medical laboratory technician/clinical laboratory technician (MLT or CLT in US) Medical laboratory assistant (MLA) Phlebotomist (PBT) Histology technician Labor shortages The United States has a documented shortage of working laboratory professionals. For example, vacancy rates for Medical Laboratory Scientists ranged from 5% to 9% for various departments. The decline is primarily due to retirements, and to at-capacity educational programs that cannot expand which limits the number of new graduates. Professional organizations and some state educational systems are responding by developing ways to promote the lab professions in an effort to combat this shortage. In addition, the vacancy rates for the MLS were tested again in 2018. The percentage range for the various departments has developed a broader range of 4% to as high as 13%. The higher numbers were seen in the Phlebotomy and Immunology. Microbiology was another department that has had a struggle with vacancies. Their average in the 2018 survey was around 10-11% vacancy rate across the United States. Recruitment campaigns, funding for college programs, and better salaries for the laboratory workers are a few ways they are focusing to decrease the vacancy rate. 
The National Center For Workforce Analysis has estimated that by 2025 there will be a 24% increase in demand for lab professionals. Highlighted by the COVID-19 pandemic, work is being done to address this shortage including bringing pathology and laboratory medicine into the conversation surrounding access to healthcare. COVID-19 brought the laboratory to the attention of the government and the media, thus giving opportunity for the staffing shortages as well as the resource challenges to be heard and dealt with. Types of laboratory In most developed countries, there are two main types of lab processing the majority of medical specimens. Hospital laboratories are attached to a hospital, and perform tests on their patients. Private (or community) laboratories receive samples from general practitioners, insurance companies, clinical research sites and other health clinics for analysis. For extremely specialised tests, samples may go to a research laboratory. Some tests involve specimens sent between different labs for uncommon tests. For example, in some cases it may be more cost effective if a particular laboratory specializes in a less common tests, receiving specimens (and payment) from other labs, while sending other specimens to other labs for those tests they do not perform. In many countries there are specialized types of medical laboratories according to the types of investigations carried out. Organisations that provide blood products for transfusion to hospitals, such as the Red Cross, will provide access to their reference laboratory for their customers. Some laboratories specialize in Molecular diagnostic and cytogenetic testing, in order to provide information regarding diagnosis and treatment of genetic or cancer-related disorders. Specimen processing and work flow In a hospital setting, sample processing will usually start with a set of samples arriving with a test request, either on a form or electronically via the laboratory information system (LIS). Inpatient specimens will already be labeled with patient and testing information provided by the LIS. Entry of test requests onto the LIS system involves typing (or scanning where barcodes are used) in the laboratory number, and entering the patient identification, as well as any tests requested. This allows laboratory analyzers, computers and staff to recognize what tests are pending, and also gives a location (such as a hospital department, doctor or other customer) for results reporting. Once the specimens are assigned a laboratory number by the LIS, a sticker is typically printed that can be placed on the tubes or specimen containers. This label has a barcode that can be scanned by automated analyzers and test requests uploaded to the analyzer from the LIS. Specimens are prepared for analysis in various ways. For example, chemistry samples are usually centrifuged and the serum or plasma is separated and tested. If the specimen needs to go on more than one analyzer, it can be divided into separate tubes. Many specimens end up in one or more sophisticated automated analysers, that process a fraction of the sample to return one or more test results. Some laboratories use robotic sample handlers (Laboratory automation) to optimize the workflow and reduce the risk of contamination from sample handling by the staff. The work flow in a hospital laboratory is usually heaviest from 2:00 am to 10:00 am. 
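To make the specimen-processing data flow just described more concrete, the sketch below models a specimen record moving through a hypothetical LIS: a test request is registered, a laboratory number and barcode label are assigned, and pending tests are tracked until results are reported. All class, field, and function names here are illustrative assumptions, not the schema or API of any real LIS product.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, List

@dataclass
class Specimen:
    """Minimal illustrative record for one specimen tracked by a hypothetical LIS."""
    lab_number: str                      # assigned by the LIS at accessioning
    patient_id: str                      # patient identification from the request
    ordering_location: str               # hospital department, doctor, or other customer
    pending_tests: Dict[str, Optional[str]] = field(default_factory=dict)  # test code -> result

    def barcode_label(self) -> str:
        # The printed label carries the laboratory number so analyzers can
        # scan it and pull the pending test requests for this specimen.
        return f"*{self.lab_number}*"

    def report_result(self, test_code: str, value: str) -> None:
        # Results are attached to the specimen and become available for reporting.
        if test_code not in self.pending_tests:
            raise KeyError(f"{test_code} was never requested for {self.lab_number}")
        self.pending_tests[test_code] = value

    def outstanding(self) -> List[str]:
        # Tests that are still pending on one or more analyzers.
        return [t for t, r in self.pending_tests.items() if r is None]

# Accessioning: register the request, print the barcode label, then report results.
spec = Specimen("LAB-000123", "patient-42", "ward 5",
                pending_tests={"CBC": None, "CHEM7": None})
print(spec.barcode_label())        # label placed on the tube
spec.report_result("CBC", "normal")
print(spec.outstanding())          # ['CHEM7'] still pending
```

This mirrors the flow described above: the LIS assigns the laboratory number, the barcode ties the physical tube back to the electronic record, and the ordering location tells the system where results should be reported.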
Nurses and doctors generally have their patients tested at least once a day with common tests such as complete blood counts and chemistry profiles. These orders are typically drawn by phlebotomists during a morning run so that results are available in patients' charts for the attending physicians to consult during their morning rounds. Another busy time for the lab is after 3:00 pm, when private practice physician offices are closing. Couriers will pick up specimens that have been drawn throughout the day and deliver them to the lab. Couriers will also stop at outpatient drawing centers and pick up specimens. These specimens are processed in the evening and overnight to ensure results will be available the following day. Laboratory informatics The large amount of information processed in laboratories is managed by a system of software programs, computers, and terminology standards that exchange data about patients, test requests, and test results, known as a laboratory information system or LIS. The LIS is often interfaced with the hospital information system, EHR and/or laboratory instruments. Formats and terminologies for test processing and reporting are being standardized with systems such as Logical Observation Identifiers Names and Codes (LOINC) and Nomenclature for Properties and Units terminology (NPU terminology). These systems enable hospitals and labs to order the correct test requests for each patient, keep track of individual patient and specimen histories, and help guarantee a better quality of results. Results are made available to care providers electronically or by printed hard copies for patient charts. Result analysis, validation and interpretation According to various regulations, such as the international ISO 15189 norm, all pathological laboratory results must be verified by a competent professional. In some countries, staff composed of clinical scientists do the majority of this work inside the laboratory, with certain abnormal results referred to the relevant pathologist. In many countries, doctoral clinical laboratory scientists have responsibility for limited interpretation of testing results in their discipline. Interpretation of results can be assisted by software in order to validate normal or non-modified results. In other testing areas, only professional medical staff (a pathologist or clinical laboratory scientist) are involved with interpretation and consulting. Medical staff are sometimes also needed to explain pathology results to physicians. For a simple result given by phone, or to explain a technical problem, a medical technologist or medical lab scientist can often provide additional information. Medical laboratory departments in some countries are exclusively directed by a specialized doctor of laboratory science. In others, a consultant, medical or non-medical, may head the department. In Europe and some other countries, clinical scientists with a master's-level education may be qualified to head the department. Others may have a PhD and can have an exit qualification equivalent to medical staff (e.g., FRCPath in the UK). In France, only medical staff (Pharm.D. and M.D. specialized in anatomical pathology or clinical laboratory science) can discuss laboratory results. Medical laboratory accreditation Credibility of medical laboratories is paramount to the health and safety of the patients relying on the testing services provided by these labs. Credentialing agencies vary by country. 
The international standard in use today for the accreditation of medical laboratories is ISO 15189 - Medical laboratories - Requirements for quality and competence. In the United States, billions of dollars is spent on unaccredited lab tests, such as Laboratory developed tests which do not require accreditation or FDA approval; about a billion USD a year is spent on US autoimmune LDTs alone. Accreditation is performed by the Joint Commission, College of American Pathologists, AAB (American Association of Bioanalysts), and other state and federal agencies. Legislative guidelines are provided under CLIA 88 (Clinical Laboratory Improvement Amendments) which regulates Medical Laboratory testing and personnel. The accrediting body in Australia is NATA, where all laboratories must be NATA accredited to receive payment from Medicare. In France the accrediting body is the Comité français d'accréditation (COFRAC). In 2010, modification of legislation established ISO 15189 accreditation as an obligation for all clinical laboratories. In the United Arab Emirates, the Dubai Accreditation Department (DAC) is the accreditation body that is internationally recognised by the International Laboratory Accreditation Cooperation (ILAC) for many facilities and groups, including Medical Laboratories, Testing and Calibration Laboratories, and Inspection Bodies. In Hong Kong, the accrediting body is Hong Kong Accreditation Service (HKAS). On 16 February 2004, HKAS launched its medical testing accreditation programme. In Canada, laboratory accreditation is not mandatory, but is becoming more and more popular. Accreditation Canada (AC) is the national reference. Different provincial oversight bodies mandate laboratories in EQA participations like LSPQ (Quebec), IQMH (Ontario) for example. Industry The laboratory industry is a part of the broader healthcare and health technology industry. Companies exist at various levels, including clinical laboratory services, suppliers of instrumentation equipment and consumable materials, and suppliers and developers of diagnostic tests themselves (often by biotechnology companies). Clinical laboratory services includes large multinational corporations such LabCorp, Quest Diagnostics, and Sonic Healthcare but a significant portion of revenue, estimated at 60% in the United States, is generated by hospital labs. In 2018, the total global revenue for these companies was estimated to reach $146 billion by 2024. Another estimate places the market size at $205 billion, reaching $333 billion by 2023. The American Association for Clinical Chemistry (AACC) represents professionals in the field. Clinical laboratories are supplied by other multinational companies which focus on materials and equipment, which can be used for both scientific research and medical testing. The largest of these is Thermo Fisher Scientific. In 2016, global life sciences instrumentation sales were around $47 billion, not including consumables, software, and services. In general, laboratory equipment includes lab centrifuges, transfection solutions, water purification systems, extraction techniques, gas generators, concentrators and evaporators, fume hoods, incubators, biological safety cabinets, bioreactors and fermenters, microwave-assisted chemistry, lab washers, and shakers and stirrers. United States In the United States, estimated total revenue as of 2016 was $75 billion, about 2% of total healthcare spending. 
In 2016, an estimated 60% of revenue was generated by hospital labs, with 25% generated by two independent companies (LabCorp and Quest). Hospital labs may also outsource testing, known as outreach, to run tests; however, health insurers may pay the hospitals more than they would pay a laboratory company for the same test, and as of 2016 these markups were being questioned by insurers. Rural hospitals, in particular, can bill for lab outreach under Medicare's 70/30 shell rule. Laboratory developed tests are designed and developed inside a specific laboratory and do not require FDA approval; due to technological innovations, they have become more common and were estimated at a total value of $11 billion in 2016. Due to the rise of high-deductible health plans, laboratories have sometimes struggled to collect when billing patients; consequently, some laboratories have shifted to become more "consumer-focused".
Physical sciences
Research methods
Basics and measurement
20903424
https://en.wikipedia.org/wiki/Breathing
Breathing
Breathing (respiration or ventilation) is the rhythmical process of moving air into (inhalation) and out of (exhalation) the lungs to facilitate gas exchange with the internal environment, mostly to flush out carbon dioxide and bring in oxygen. All aerobic creatures need oxygen for cellular respiration, which extracts energy from the reaction of oxygen with molecules derived from food and produces carbon dioxide as a waste product. Breathing, or external respiration, brings air into the lungs where gas exchange takes place in the alveoli through diffusion. The body's circulatory system transports these gases to and from the cells, where cellular respiration takes place. The breathing of all vertebrates with lungs consists of repetitive cycles of inhalation and exhalation through a highly branched system of tubes or airways which lead from the nose to the alveoli. The number of respiratory cycles per minute is the breathing or respiratory rate, and is one of the four primary vital signs of life. Under normal conditions the breathing depth and rate are automatically, and unconsciously, controlled by several homeostatic mechanisms which keep the partial pressures of carbon dioxide and oxygen in the arterial blood constant. Keeping the partial pressure of carbon dioxide in the arterial blood unchanged under a wide variety of physiological circumstances contributes significantly to tight control of the pH of the extracellular fluids (ECF). Over-breathing (hyperventilation) lowers the arterial partial pressure of carbon dioxide, causing a rise in the pH of the ECF. Under-breathing (hypoventilation), on the other hand, raises the arterial partial pressure of carbon dioxide and lowers the pH of the ECF. Both cause distressing symptoms. Breathing has other important functions. It provides a mechanism for speech, laughter and similar expressions of the emotions. It is also used for reflexes such as yawning, coughing and sneezing. Animals that cannot thermoregulate by perspiration, because they lack sufficient sweat glands, may lose heat by evaporation through panting. Mechanics The lungs are not capable of inflating themselves, and will expand only when there is an increase in the volume of the thoracic cavity. In humans, as in other mammals, this is achieved primarily through the contraction of the diaphragm, but also by the contraction of the intercostal muscles, which pull the rib cage upwards and outwards. During forceful inhalation the accessory muscles of inhalation, which connect the ribs and sternum to the cervical vertebrae and base of the skull, in many cases through an intermediary attachment to the clavicles, exaggerate the pump handle and bucket handle movements, bringing about a greater change in the volume of the chest cavity. During exhalation (breathing out), at rest, all the muscles of inhalation relax, returning the chest and abdomen to a position called the "resting position", which is determined by their anatomical elasticity. At this point the lungs contain the functional residual capacity of air, which, in the adult human, has a volume of about 2.5–3.0 liters. 
During heavy breathing (hyperpnea), as for instance during exercise, exhalation is brought about by relaxation of all the muscles of inhalation (in the same way as at rest), but, in addition, the abdominal muscles, instead of being passive, now contract strongly, causing the rib cage to be pulled downwards (front and sides). This not only decreases the size of the rib cage but also pushes the abdominal organs upwards against the diaphragm, which consequently bulges deeply into the thorax. The end-exhalatory lung volume now contains less air than the resting "functional residual capacity". However, in a normal mammal, the lungs cannot be emptied completely. In an adult human, there is always still at least one liter of residual air left in the lungs after maximum exhalation. Diaphragmatic breathing causes the abdomen to rhythmically bulge out and fall back. It is, therefore, often referred to as "abdominal breathing". These terms are often used interchangeably because they describe the same action. When the accessory muscles of inhalation are activated, especially during labored breathing, the clavicles are pulled upwards, as explained above. This external manifestation of the use of the accessory muscles of inhalation is sometimes referred to as clavicular breathing, seen especially during asthma attacks and in people with chronic obstructive pulmonary disease. Passage of air Upper airways Ideally, air is breathed in and out through the nose. The nasal cavities (between the nostrils and the pharynx) are quite narrow, firstly by being divided in two by the nasal septum, and secondly by lateral walls that have several longitudinal folds, or shelves, called nasal conchae, thus exposing a large area of nasal mucous membrane to the air as it is inhaled (and exhaled). This causes the inhaled air to take up moisture from the wet mucus, and warmth from the underlying blood vessels, so that the air is very nearly saturated with water vapor and is at almost body temperature by the time it reaches the larynx. Part of this moisture and heat is recaptured as the exhaled air moves out over the partially dried-out, cooled mucus in the nasal passages during exhalation. The sticky mucus also traps much of the particulate matter that is breathed in, preventing it from reaching the lungs. Lower airways The anatomy of a typical mammalian respiratory system, below the structures normally listed among the "upper airways" (the nasal cavities, the pharynx, and larynx), is often described as a respiratory tree or tracheobronchial tree. Larger airways give rise to branches that are slightly narrower, but more numerous than, the "trunk" airway that gives rise to them. The human respiratory tree may consist of, on average, 23 such branchings into progressively smaller airways, while the respiratory tree of the mouse has up to 13 such branchings. Proximal divisions (those closest to the top of the tree, such as the trachea and bronchi) function mainly to transmit air to the lower airways. Later divisions, such as the respiratory bronchioles, alveolar ducts and alveoli, are specialized for gas exchange. The trachea and the first portions of the main bronchi are outside the lungs. The rest of the "tree" branches within the lungs, and ultimately extends to every part of the lungs. The alveoli are the blind-ended terminals of the "tree", meaning that any air that enters them has to exit the same way it came. 
A system such as this creates dead space, a term for the volume of air that fills the airways at the end of inhalation, and is breathed out, unchanged, during the next exhalation, never having reached the alveoli. Similarly, the dead space is filled with alveolar air at the end of exhalation, which is the first air to be breathed back into the alveoli during inhalation, before any fresh air which follows after it. The dead space volume of a typical adult human is about 150 ml. Gas exchange The primary purpose of breathing is to refresh air in the alveoli so that gas exchange can take place in the blood. The equilibration of the partial pressures of the gases in the alveolar blood and the alveolar air occurs by diffusion. After exhaling, adult human lungs still contain 2.5–3 L of air, their functional residual capacity or FRC. On inhalation, only about 350 mL of new, warm, moistened atmospheric air is brought in and is well mixed with the FRC. Consequently, the gas composition of the FRC changes very little during the breathing cycle. This means that the pulmonary capillary blood always equilibrates with a relatively constant air composition in the lungs and the diffusion rate with arterial blood gases remains equally constant with each breath. Body tissues are therefore not exposed to large swings in oxygen and carbon dioxide tensions in the blood caused by the breathing cycle, and the peripheral and central chemoreceptors measure only gradual changes in dissolved gases. Thus the homeostatic control of the breathing rate depends only on the partial pressures of oxygen and carbon dioxide in the arterial blood, which then also maintains a constant pH of the blood. Control The rate and depth of breathing is automatically controlled by the respiratory centers that receive information from the peripheral and central chemoreceptors. These chemoreceptors continuously monitor the partial pressures of carbon dioxide and oxygen in the arterial blood. The first of these sensors are the central chemoreceptors on the surface of the medulla oblongata of the brain stem which are particularly sensitive to pH as well as the partial pressure of carbon dioxide in the blood and cerebrospinal fluid. The second group of sensors measure the partial pressure of oxygen in the arterial blood. Together the latter are known as the peripheral chemoreceptors, and are situated in the aortic and carotid bodies. Information from all of these chemoreceptors is conveyed to the respiratory centers in the pons and medulla oblongata, which responds to fluctuations in the partial pressures of carbon dioxide and oxygen in the arterial blood by adjusting the rate and depth of breathing, in such a way as to restore the partial pressure of carbon dioxide to 5.3 kPa (40 mm Hg), the pH to 7.4 and, to a lesser extent, the partial pressure of oxygen to 13 kPa (100 mm Hg). For example, exercise increases the production of carbon dioxide by the active muscles. This carbon dioxide diffuses into the venous blood and ultimately raises the partial pressure of carbon dioxide in the arterial blood. This is immediately sensed by the carbon dioxide chemoreceptors on the brain stem. The respiratory centers respond to this information by causing the rate and depth of breathing to increase to such an extent that the partial pressures of carbon dioxide and oxygen in the arterial blood return almost immediately to the same levels as at rest. 
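Returning to the gas-exchange figures above (a functional residual capacity of 2.5–3.0 L and roughly 350 mL of fresh air reaching the alveoli per breath), a rough back-of-the-envelope ratio shows why the alveolar gas composition changes so little over a breathing cycle; the numbers are taken directly from the text, and the ratio is simply their quotient.

```latex
\frac{V_{\text{fresh}}}{V_{\text{FRC}}}
\;\approx\; \frac{0.35\ \mathrm{L}}{2.5\text{--}3.0\ \mathrm{L}}
\;\approx\; 0.12\text{--}0.14
```

Only about one eighth of the alveolar gas is replaced with each breath, so the pulmonary capillary blood equilibrates with an almost constant gas mixture, as described above.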
The respiratory centers communicate with the muscles of breathing via motor nerves, of which the phrenic nerves, which innervate the diaphragm, are probably the most important. Automatic breathing can be overridden to a limited extent by simple choice, or to facilitate swimming, speech, singing or other vocal training. It is impossible to suppress the urge to breathe to the point of hypoxia, but training can increase the ability to hold one's breath. Conscious breathing practices have been shown to promote relaxation and stress relief but have not been proven to have any other health benefits. Other automatic breathing control reflexes also exist. Submersion, particularly of the face, in cold water triggers a response called the diving reflex. This has the initial result of shutting down the airways against the influx of water. The metabolic rate slows down. This is coupled with intense vasoconstriction of the arteries to the limbs and abdominal viscera, reserving the oxygen that is in blood and lungs at the beginning of the dive almost exclusively for the heart and the brain. The diving reflex is an often-used response in animals that routinely need to dive, such as penguins, seals and whales. It is also more effective in very young infants and children than in adults. Composition Inhaled air is by volume 78% nitrogen, 20.95% oxygen and small amounts of other gases including argon, carbon dioxide, neon, helium, and hydrogen. The gas exhaled is 4% to 5% carbon dioxide by volume, about a hundredfold increase over the inhaled amount. The volume of oxygen is reduced by about a quarter of the inhaled amount, equivalent to 4% to 5% of total air volume. The typical composition of exhaled air is: 5.0–6.3% water vapor, 79% nitrogen, 13.6–16.0% oxygen, 4.0–5.3% carbon dioxide, 1% argon, parts-per-million (ppm) quantities of hydrogen (from the metabolic activity of microorganisms in the large intestine), ppm quantities of carbon monoxide (from degradation of heme proteins), 4.5 ppm of methanol, 1 ppm of ammonia, and trace amounts of many hundreds of volatile organic compounds, especially isoprene and acetone. The presence of certain organic compounds indicates disease. In addition to air, underwater divers practicing technical diving may breathe oxygen-rich, oxygen-depleted or helium-rich breathing gas mixtures. Oxygen and analgesic gases are sometimes given to patients under medical care. The atmosphere in space suits is pure oxygen; however, this is kept at around 20% of Earthbound atmospheric pressure to regulate the rate of inspiration. Effects of ambient air pressure Breathing at altitude Atmospheric pressure decreases with the height above sea level (altitude), and since the alveoli are open to the outside air through the open airways, the pressure in the lungs also decreases at the same rate with altitude. At altitude, a pressure differential is still required to drive air into and out of the lungs, as it is at sea level. The mechanism for breathing at altitude is essentially identical to breathing at sea level, but with the following differences: The atmospheric pressure decreases exponentially with altitude, roughly halving with every 5,500 m rise in altitude. The composition of atmospheric air is, however, almost constant below 80 km, as a result of the continuous mixing effect of the weather. The concentration of oxygen in the air (mmols O2 per liter of air) therefore decreases at the same rate as the atmospheric pressure. At sea level, where the ambient pressure is about 100 kPa, oxygen constitutes 21% of the atmosphere and the partial pressure of oxygen (PO2) is 21 kPa (i.e. 21% of 100 kPa). 
At the summit of Mount Everest (8,848 m), where the total atmospheric pressure is 33.7 kPa, oxygen still constitutes 21% of the atmosphere but its partial pressure is only 7.1 kPa (i.e. 21% of 33.7 kPa = 7.1 kPa). Therefore, a greater volume of air must be inhaled at altitude than at sea level in order to breathe in the same amount of oxygen in a given period. During inhalation, air is warmed and saturated with water vapor as it passes through the nose and pharynx before it enters the alveoli. The saturated vapor pressure of water is dependent only on temperature; at a body core temperature of 37 °C it is 6.3 kPa (47.0 mmHg), regardless of any other influences, including altitude. Consequently, at sea level, the tracheal air (immediately before the inhaled air enters the alveoli) consists of: water vapor (PH2O = 6.3 kPa), nitrogen (PN2 = 74.0 kPa), oxygen (PO2 = 19.7 kPa) and trace amounts of carbon dioxide and other gases, a total of 100 kPa. In dry air, the PO2 at sea level is 21.0 kPa, compared to a PO2 of 19.7 kPa in the tracheal air (21% of [100 – 6.3] = 19.7 kPa). At the summit of Mount Everest tracheal air has a total pressure of 33.7 kPa, of which 6.3 kPa is water vapor, reducing the PO2 in the tracheal air to 5.8 kPa (21% of [33.7 – 6.3] = 5.8 kPa), beyond what is accounted for by the reduction of atmospheric pressure alone (7.1 kPa). The pressure gradient forcing air into the lungs during inhalation is also reduced by altitude. Doubling the volume of the lungs halves the pressure in the lungs at any altitude. At sea level air pressure (100 kPa), doubling the lung volume results in a pressure gradient of 50 kPa, but doing the same at 5,500 m, where the atmospheric pressure is 50 kPa, results in a pressure gradient of only 25 kPa. In practice, because we breathe in a gentle, cyclical manner that generates pressure gradients of only 2–3 kPa, this has little effect on the actual rate of inflow into the lungs and is easily compensated for by breathing slightly deeper. The lower viscosity of air at altitude allows air to flow more easily and this also helps compensate for any loss of pressure gradient. All of the above effects of low atmospheric pressure on breathing are normally accommodated by increasing the respiratory minute volume (the volume of air breathed in, or out, per minute), and the mechanism for doing this is automatic. The exact increase required is determined by the respiratory gases homeostatic mechanism, which regulates the arterial PCO2 and PO2. This homeostatic mechanism prioritizes the regulation of the arterial PCO2 over that of oxygen at sea level. That is to say, at sea level the arterial PCO2 is maintained at very close to 5.3 kPa (or 40 mmHg) under a wide range of circumstances, at the expense of the arterial PO2, which is allowed to vary within a very wide range of values before eliciting a corrective ventilatory response. However, when the atmospheric pressure (and therefore the atmospheric PO2) falls to below 75% of its value at sea level, oxygen homeostasis is given priority over carbon dioxide homeostasis. This switch-over occurs at an elevation of about 2,500 m. If this switch occurs relatively abruptly, the hyperventilation at high altitude will cause a severe fall in the arterial PCO2, with a consequent rise in the pH of the arterial plasma leading to respiratory alkalosis. This is one contributor to high altitude sickness. On the other hand, if the switch to oxygen homeostasis is incomplete, then hypoxia may complicate the clinical picture, with potentially fatal results. 
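The tracheal oxygen figures above all follow from a single relation: the inspired (tracheal) PO2 is the oxygen fraction multiplied by the barometric pressure minus the saturated water vapor pressure at body temperature. Written out with the numbers already quoted in the text:

```latex
P_{\mathrm{O_2}}^{\text{tracheal}} = F_{\mathrm{O_2}}\,\bigl(P_{B} - P_{\mathrm{H_2O}}\bigr)

\text{Sea level: } 0.21 \times (100 - 6.3)\ \mathrm{kPa} \approx 19.7\ \mathrm{kPa}
\qquad
\text{Everest summit: } 0.21 \times (33.7 - 6.3)\ \mathrm{kPa} \approx 5.8\ \mathrm{kPa}
```

This is the same arithmetic given in the prose, gathered into one formula for easier reference; the water vapor term is the fixed 6.3 kPa at 37 °C mentioned above.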
Breathing at depth Pressure increases with the depth of water at the rate of about one atmosphere (slightly more than 100 kPa, or one bar) for every 10 meters. Air breathed underwater by divers is at the ambient pressure of the surrounding water, and this has a complex range of physiological and biochemical implications. If not properly managed, breathing compressed gases underwater may lead to several diving disorders, which include pulmonary barotrauma, decompression sickness, nitrogen narcosis, and oxygen toxicity. The effects of breathing gases under pressure are further complicated by the use of one or more special gas mixtures. Air is provided by a diving regulator, which reduces the high pressure in a diving cylinder to the ambient pressure. The breathing performance of regulators is a factor when choosing a suitable regulator for the type of diving to be undertaken. It is desirable that breathing from a regulator requires low effort even when supplying large amounts of air. It is also recommended that it supplies air smoothly, without any sudden changes in resistance while inhaling or exhaling. Typical regulator performance curves show an initial spike in pressure on exhaling, needed to open the exhaust valve, and an initial drop in pressure on inhaling that is soon overcome as the Venturi effect designed into the regulator allows an easy draw of air. Many regulators have an adjustment to change the ease of inhaling so that breathing is effortless. Respiratory disorders Abnormal breathing patterns include Kussmaul breathing, Biot's respiration and Cheyne–Stokes respiration. Other breathing disorders include shortness of breath (dyspnea), stridor, apnea, sleep apnea (most commonly obstructive sleep apnea), mouth breathing, and snoring. Many conditions are associated with obstructed airways. Chronic mouth breathing may be associated with illness. Hypopnea refers to overly shallow breathing; hyperpnea refers to fast and deep breathing brought on by a demand for more oxygen, as for example by exercise. The terms hypoventilation and hyperventilation also refer to shallow breathing and fast and deep breathing respectively, but under inappropriate circumstances or disease. However, this distinction (between, for instance, hyperpnea and hyperventilation) is not always adhered to, so that these terms are frequently used interchangeably. A range of breath tests can be used to diagnose diseases such as dietary intolerances. A rhinomanometer uses acoustic technology to examine the air flow through the nasal passages. Society and culture The word "spirit" comes from the Latin spiritus, meaning breath. Historically, breath has often been considered in terms of the concept of life force. The Hebrew Bible refers to God breathing the breath of life into clay to make Adam a living soul (nephesh). It also refers to the breath as returning to God when a mortal dies. The terms spirit, prana, the Polynesian mana, the Hebrew ruach and the psyche in psychology are related to the concept of breath. In tai chi, aerobic exercise is combined with breathing exercises to strengthen the diaphragm muscles, improve posture and make better use of the body's qi. Different forms of meditation and yoga advocate various breathing methods. A form of Buddhist meditation called anapanasati, meaning mindfulness of breath, was first introduced by the Buddha. Breathing disciplines are incorporated into meditation, certain forms of yoga such as pranayama, and the Buteyko method as a treatment for asthma and other conditions. 
In music, some wind instrument players use a technique called circular breathing. Singers also rely on breath control. Common cultural expressions related to breathing include: "to catch my breath", "took my breath away", "inspiration", "to expire", "get my breath back". Breathing and mood Certain breathing patterns have a tendency to occur with certain moods. Due to this relationship, practitioners of various disciplines consider that they can encourage the occurrence of a particular mood by adopting the breathing pattern that it most commonly occurs in conjunction with. For instance, and perhaps the most common recommendation is that deeper breathing which utilizes the diaphragm and abdomen more can encourage relaxation. Practitioners of different disciplines often interpret the importance of breathing regulation and its perceived influence on mood in different ways. Buddhists may consider that it helps precipitate a sense of inner-peace, holistic healers that it encourages an overall state of health and business advisers that it provides relief from work-based stress. Breathing and physical exercise During physical exercise, a deeper breathing pattern is adapted to facilitate greater oxygen absorption. An additional reason for the adoption of a deeper breathing pattern is to strengthen the body's core. During the process of deep breathing, the thoracic diaphragm adopts a lower position in the core and this helps to generate intra-abdominal pressure which strengthens the lumbar spine. Typically, this allows for more powerful physical movements to be performed. As such, it is frequently recommended when lifting heavy weights to take a deep breath or adopt a deeper breathing pattern.
Biology and health sciences
Basics_3
null
20903754
https://en.wikipedia.org/wiki/Robotics
Robotics
Robotics is the interdisciplinary study and practice of the design, construction, operation, and use of robots. Within mechanical engineering, robotics is the design and construction of the physical structures of robots, while in computer science, robotics focuses on robotic automation algorithms. Other disciplines contributing to robotics include electrical, control, software, information, electronic, telecommunication, computer, mechatronic, and materials engineering. The goal of most robotics is to design machines that can help and assist humans. Many robots are built to do jobs that are hazardous to people, such as finding survivors in unstable ruins, and exploring space, mines and shipwrecks. Others replace people in jobs that are boring, repetitive, or unpleasant, such as cleaning, monitoring, transporting, and assembling. Today, robotics is a rapidly growing field, as technological advances continue; researching, designing, and building new robots serve various practical purposes. Robotics aspects Robotics usually combines three aspects of design work to create robot systems: Mechanical construction: a frame, form or shape designed to achieve a particular task. For example, a robot designed to travel across heavy dirt or mud might use caterpillar tracks. Origami inspired robots can sense and analyze in extreme environments. The mechanical aspect of the robot is mostly the creator's solution to completing the assigned task and dealing with the physics of the environment around it. Form follows function. Electrical components that power and control the machinery. For example, the robot with caterpillar tracks would need some kind of power to move the tracker treads. That power comes in the form of electricity, which will have to travel through a wire and originate from a battery, a basic electrical circuit. Even petrol-powered machines that get their power mainly from petrol still require an electric current to start the combustion process which is why most petrol-powered machines like cars, have batteries. The electrical aspect of robots is used for movement (through motors), sensing (where electrical signals are used to measure things like heat, sound, position, and energy status), and operation (robots need some level of electrical energy supplied to their motors and sensors in order to activate and perform basic operations) Software. A program is how a robot decides when or how to do something. In the caterpillar track example, a robot that needs to move across a muddy road may have the correct mechanical construction and receive the correct amount of power from its battery, but would not be able to go anywhere without a program telling it to move. Programs are the core essence of a robot, it could have excellent mechanical and electrical construction, but if its program is poorly structured, its performance will be very poor (or it may not perform at all). There are three different types of robotic programs: remote control, artificial intelligence, and hybrid. A robot with remote control programming has a preexisting set of commands that it will only perform if and when it receives a signal from a control source, typically a human being with remote control. It is perhaps more appropriate to view devices controlled primarily by human commands as falling in the discipline of automation rather than robotics. 
Robots that use artificial intelligence interact with their environment on their own without a control source, and can determine reactions to objects and problems they encounter using their preexisting programming. A hybrid is a form of programming that incorporates both AI and RC functions in them. Applied robotics As many robots are designed for specific tasks, this method of classification becomes more relevant. For example, many robots are designed for assembly work, which may not be readily adaptable for other applications. They are termed "assembly robots". For seam welding, some suppliers provide complete welding systems with the robot i.e. the welding equipment along with other material handling facilities like turntables, etc. as an integrated unit. Such an integrated robotic system is called a "welding robot" even though its discrete manipulator unit could be adapted to a variety of tasks. Some robots are specifically designed for heavy load manipulation, and are labeled as "heavy-duty robots". Current and potential applications include: Manufacturing. Robots have been increasingly used in manufacturing since the 1960s. According to the Robotic Industries Association US data, in 2016 the automotive industry was the main customer of industrial robots with 52% of total sales. In the auto industry, they can amount for more than half of the "labor". There are even "lights off" factories such as an IBM keyboard manufacturing factory in Texas that was fully automated as early as 2003. Autonomous transport including airplane autopilot and self-driving cars Domestic robots including robotic vacuum cleaners, robotic lawn mowers, dishwasher loading and flatbread baking. Construction robots. Construction robots can be separated into three types: traditional robots, robotic arm, and robotic exoskeleton. Automated mining. Space exploration, including Mars rovers. Energy applications including cleanup of nuclear contaminated areas; and cleaning solar panel arrays. Medical robots and Robot-assisted surgery designed and used in clinics. Agricultural robots. The use of robots in agriculture is closely linked to the concept of AI-assisted precision agriculture and drone usage. Food processing. Commercial examples of kitchen automation are Flippy (burgers), Zume Pizza (pizza), Cafe X (coffee), Makr Shakr (cocktails), Frobot (frozen yogurts), Sally (salads), salad or food bowl robots manufactured by Dexai (a Draper Laboratory spinoff, operating on military bases), and integrated food bowl assembly systems manufactured by Spyce Kitchen (acquired by Sweetgreen) and Silicon Valley startup Hyphen. Other examples may include manufacturing technologies based on 3D Food Printing. Military robots. Robot sports for entertainment and education, including Robot combat, Autonomous racing, drone racing, and FIRST Robotics. Mechanical robotics areas Power source At present, mostly (lead–acid) batteries are used as a power source. Many different types of batteries can be used as a power source for robots. They range from lead–acid batteries, which are safe and have relatively long shelf lives but are rather heavy compared to silver–cadmium batteries which are much smaller in volume and are currently much more expensive. Designing a battery-powered robot needs to take into account factors such as safety, cycle lifetime, and weight. Generators, often some type of internal combustion engine, can also be used. 
However, such designs are often mechanically complex and need fuel, require heat dissipation, and are relatively heavy. A tether connecting the robot to a power supply would remove the power supply from the robot entirely. This has the advantage of saving weight and space by moving all power generation and storage components elsewhere. However, this design does come with the drawback of constantly having a cable connected to the robot, which can be difficult to manage. Potential power sources could be:
pneumatic (compressed gases)
solar power (using the sun's energy and converting it into electrical power)
hydraulics (liquids)
flywheel energy storage
organic garbage (through anaerobic digestion)
nuclear
Actuation
Actuators are the "muscles" of a robot, the parts which convert stored energy into movement. By far the most popular actuators are electric motors that rotate a wheel or gear, and linear actuators that control industrial robots in factories. There are some recent advances in alternative types of actuators, powered by electricity, chemicals, or compressed air.
Electric motors
The vast majority of robots use electric motors, often brushed and brushless DC motors in portable robots or AC motors in industrial robots and CNC machines. These motors are often preferred in systems with lighter loads, and where the predominant form of motion is rotational.
Linear actuators
Various types of linear actuators move in and out instead of by spinning, and often have quicker direction changes, particularly when very large forces are needed such as with industrial robotics. They are typically powered by compressed air (pneumatic actuators) or oil (hydraulic actuators). Linear actuators can also be powered by electricity, in which case they usually consist of a motor and a leadscrew. Another common type is a mechanical linear actuator such as a rack and pinion on a car.
Series elastic actuators
Series elastic actuation (SEA) relies on the idea of introducing intentional elasticity between the motor actuator and the load for robust force control. Due to the resultant lower reflected inertia, series elastic actuation improves safety when a robot interacts with the environment (e.g., humans or workpieces) or during collisions. Furthermore, it also provides energy efficiency and shock absorption (mechanical filtering) while reducing excessive wear on the transmission and other mechanical components. This approach has successfully been employed in various robots, particularly advanced manufacturing robots and walking humanoid robots. The controller design of a series elastic actuator is most often performed within the passivity framework, as it ensures the safety of interaction with unstructured environments. Despite its remarkable stability and robustness, this framework suffers from the stringent limitations imposed on the controller, which may trade off performance. The reader is referred to the survey literature summarizing the common controller architectures for SEA along with the corresponding sufficient passivity conditions. One recent study has derived the necessary and sufficient passivity conditions for one of the most common impedance control architectures, namely velocity-sourced SEA. This work is of particular importance as it derives the non-conservative passivity bounds in an SEA scheme for the first time, which allows a larger selection of control gains.
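The force-control idea behind series elastic actuation can be illustrated with a toy numerical model. The sketch below is only an illustration under simplifying assumptions, not a reproduction of any published SEA controller: it assumes an ideal velocity-sourced motor, a series spring of stiffness K_SPRING between motor and load, a load pressed against a rigid surface, and made-up proportional-integral gains on the force error.

```python
# Minimal sketch of force control with a velocity-sourced series elastic
# actuator (SEA). All constants are illustrative assumptions, not values
# taken from any particular robot or paper.

DT = 0.001          # simulation time step [s]
K_SPRING = 100.0    # stiffness of the series spring [N/m]
KP, KI = 5.0, 20.0  # proportional / integral gains on the force error

def simulate_sea(force_ref=10.0, steps=5000):
    x_motor = 0.0    # motor-side spring end position [m]
    x_load = 0.0     # load-side position, held fixed (pressing on a wall)
    integral = 0.0
    force = 0.0
    for _ in range(steps):
        # The spring doubles as the force sensor: F = k * deflection
        force = K_SPRING * (x_motor - x_load)
        error = force_ref - force
        integral += error * DT
        # Velocity-sourced actuation: the controller commands motor velocity
        v_cmd = KP * error + KI * integral
        x_motor += v_cmd * DT
    return force

if __name__ == "__main__":
    print(f"steady-state contact force: {simulate_sea():.2f} N (target 10 N)")
```

Measuring force through the deflection of a known spring, rather than through a stiff load cell, is what gives the scheme its tolerance to impacts and model errors; the passivity analyses mentioned above are concerned with how large gains such as KP and KI may be made before that tolerance is lost.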
Air muscles Pneumatic artificial muscles also known as air muscles, are special tubes that expand (typically up to 42%) when air is forced inside them. They are used in some robot applications. Wire muscles Muscle wire, also known as shape memory alloy, is a material that contracts (under 5%) when electricity is applied. They have been used for some small robot applications. Electroactive polymers EAPs or EPAMs are a plastic material that can contract substantially (up to 380% activation strain) from electricity, and have been used in facial muscles and arms of humanoid robots, and to enable new robots to float, fly, swim or walk. Piezo motors Recent alternatives to DC motors are piezo motors or ultrasonic motors. These work on a fundamentally different principle, whereby tiny piezoceramic elements, vibrating many thousands of times per second, cause linear or rotary motion. There are different mechanisms of operation; one type uses the vibration of the piezo elements to step the motor in a circle or a straight line. Another type uses the piezo elements to cause a nut to vibrate or to drive a screw. The advantages of these motors are nanometer resolution, speed, and available force for their size. These motors are already available commercially and being used on some robots. Elastic nanotubes Elastic nanotubes are a promising artificial muscle technology in early-stage experimental development. The absence of defects in carbon nanotubes enables these filaments to deform elastically by several percent, with energy storage levels of perhaps 10 J/cm3 for metal nanotubes. Human biceps could be replaced with an 8 mm diameter wire of this material. Such compact "muscle" might allow future robots to outrun and outjump humans. Sensing Sensors allow robots to receive information about a certain measurement of the environment, or internal components. This is essential for robots to perform their tasks, and act upon any changes in the environment to calculate the appropriate response. They are used for various forms of measurements, to give the robots warnings about safety or malfunctions, and to provide real-time information about the task it is performing. Touch Current robotic and prosthetic hands receive far less tactile information than the human hand. Recent research has developed a tactile sensor array that mimics the mechanical properties and touch receptors of human fingertips. The sensor array is constructed as a rigid core surrounded by conductive fluid contained by an elastomeric skin. Electrodes are mounted on the surface of the rigid core and are connected to an impedance-measuring device within the core. When the artificial skin touches an object the fluid path around the electrodes is deformed, producing impedance changes that map the forces received from the object. The researchers expect that an important function of such artificial fingertips will be adjusting the robotic grip on held objects. Scientists from several European countries and Israel developed a prosthetic hand in 2009, called SmartHand, which functions like a real one —allowing patients to write with it, type on a keyboard, play piano, and perform other fine movements. The prosthesis has sensors which enable the patient to sense real feelings in its fingertips. Other Other common forms of sensing in robotics use lidar, radar, and sonar. Lidar measures the distance to a target by illuminating the target with laser light and measuring the reflected light with a sensor. 
Radar uses radio waves to determine the range, angle, or velocity of objects. Sonar uses sound propagation to navigate, communicate with or detect objects on or under the surface of the water. Mechanical grippers One of the most common types of end-effectors are "grippers". In its simplest manifestation, it consists of just two fingers that can open and close to pick up and let go of a range of small objects. Fingers can, for example, be made of a chain with a metal wire running through it. Hands that resemble and work more like a human hand include the Shadow Hand and the Robonaut hand. Hands that are of a mid-level complexity include the Delft hand. Mechanical grippers can come in various types, including friction and encompassing jaws. Friction jaws use all the force of the gripper to hold the object in place using friction. Encompassing jaws cradle the object in place, using less friction. Suction end-effectors Suction end-effectors, powered by vacuum generators, are very simple astrictive devices that can hold very large loads provided the prehension surface is smooth enough to ensure suction. Pick and place robots for electronic components and for large objects like car windscreens, often use very simple vacuum end-effectors. Suction is a highly used type of end-effector in industry, in part because the natural compliance of soft suction end-effectors can enable a robot to be more robust in the presence of imperfect robotic perception. As an example: consider the case of a robot vision system that estimates the position of a water bottle but has 1 centimeter of error. While this may cause a rigid mechanical gripper to puncture the water bottle, the soft suction end-effector may just bend slightly and conform to the shape of the water bottle surface. General purpose effectors Some advanced robots are beginning to use fully humanoid hands, like the Shadow Hand, MANUS, and the Schunk hand. They have powerful robot dexterity intelligence (RDI), with as many as 20 degrees of freedom and hundreds of tactile sensors. Control robotics areas The mechanical structure of a robot must be controlled to perform tasks. The control of a robot involves three distinct phases – perception, processing, and action (robotic paradigms). Sensors give information about the environment or the robot itself (e.g. the position of its joints or its end effector). This information is then processed to be stored or transmitted and to calculate the appropriate signals to the actuators (motors), which move the mechanical structure to achieve the required co-ordinated motion or force actions. The processing phase can range in complexity. At a reactive level, it may translate raw sensor information directly into actuator commands (e.g. firing motor power electronic gates based directly upon encoder feedback signals to achieve the required torque/velocity of the shaft). Sensor fusion and internal models may first be used to estimate parameters of interest (e.g. the position of the robot's gripper) from noisy sensor data. An immediate task (such as moving the gripper in a certain direction until an object is detected with a proximity sensor) is sometimes inferred from these estimates. Techniques from control theory are generally used to convert the higher-level tasks into individual commands that drive the actuators, most often using kinematic and dynamic models of the mechanical structure. At longer time scales or with more sophisticated tasks, the robot may need to build and reason with a "cognitive" model. 
Cognitive models try to represent the robot, the world, and how the two interact. Pattern recognition and computer vision can be used to track objects. Mapping techniques can be used to build maps of the world. Finally, motion planning and other artificial intelligence techniques may be used to figure out how to act. For example, a planner may figure out how to achieve a task without hitting obstacles, falling over, etc. Modern commercial robotic control systems are highly complex, integrate multiple sensors and effectors, have many interacting degrees-of-freedom (DOF) and require operator interfaces, programming tools and real-time capabilities. They are oftentimes interconnected to wider communication networks and in many cases are now both IoT-enabled and mobile. Progress towards open architecture, layered, user-friendly and 'intelligent' sensor-based interconnected robots has emerged from earlier concepts related to Flexible Manufacturing Systems (FMS), and several 'open' or 'hybrid' reference architectures have been proposed which assist developers of robot control software and hardware to move beyond traditional notions of 'closed' robot control systems. Open architecture controllers are said to be better able to meet the growing requirements of a wide range of robot users, including system developers, end users and research scientists, and are better positioned to deliver the advanced robotic concepts related to Industry 4.0. In addition to utilizing many established features of robot controllers, such as position, velocity and force control of end effectors, they also enable IoT interconnection and the implementation of more advanced sensor fusion and control techniques, including adaptive control, Fuzzy control and Artificial Neural Network (ANN)-based control. When implemented in real-time, such techniques can potentially improve the stability and performance of robots operating in unknown or uncertain environments by enabling the control systems to learn and adapt to environmental changes. There are several examples of reference architectures for robot controllers, and also examples of successful implementations of actual robot controllers developed from them. One example of a generic reference architecture and associated interconnected, open-architecture robot and controller implementation was used in a number of research and development studies, including prototype implementation of novel advanced and intelligent control and environment mapping methods in real-time.
Manipulation
A definition of robotic manipulation has been provided by Matt Mason as: "manipulation refers to an agent's control of its environment through selective contact". Robots need to manipulate objects: to pick up, modify, destroy, move or otherwise have an effect. Thus the functional end of a robot arm intended to make the effect (whether a hand or a tool) is often referred to as the end effector, while the "arm" is referred to as a manipulator. Most robot arms have replaceable end-effectors, each allowing them to perform some small range of tasks. Some have a fixed manipulator that cannot be replaced, while a few have one very general-purpose manipulator, for example, a humanoid hand.
Locomotion
Rolling robots
For simplicity, most mobile robots have four wheels or a number of continuous tracks. Some researchers have tried to create more complex wheeled robots with only one or two wheels.
These can have certain advantages such as greater efficiency and reduced parts, as well as allowing a robot to navigate in confined places that a four-wheeled robot would not be able to.
Two-wheeled balancing robots
Balancing robots generally use a gyroscope to detect how much a robot is falling and then drive the wheels proportionally in the same direction, to counterbalance the fall at hundreds of times per second, based on the dynamics of an inverted pendulum. Many different balancing robots have been designed. While the Segway is not commonly thought of as a robot, it can be thought of as a component of a robot; when used as such, Segway refers to them as RMPs (Robotic Mobility Platforms). An example of this use has been NASA's Robonaut, which has been mounted on a Segway.
One-wheeled balancing robots
A one-wheeled balancing robot is an extension of a two-wheeled balancing robot so that it can move in any 2D direction using a round ball as its only wheel. Several one-wheeled balancing robots have been designed recently, such as Carnegie Mellon University's "Ballbot", which is the approximate height and width of a person, and Tohoku Gakuin University's "BallIP". Because of the long, thin shape and ability to maneuver in tight spaces, they have the potential to function better than other robots in environments with people.
Spherical orb robots
Several attempts have been made to build robots that are completely enclosed inside a spherical ball, either by spinning a weight inside the ball, or by rotating the outer shells of the sphere. These have also been referred to as an orb bot or a ball bot.
Six-wheeled robots
Using six wheels instead of four can give better traction or grip in outdoor terrain such as on rocky dirt or grass.
Tracked robots
Tracks provide even more traction than a six-wheeled robot. Tracked wheels behave as if they were made of hundreds of wheels and are therefore very common for outdoor off-road robots, where the robot must drive on very rough terrain. However, they are difficult to use indoors such as on carpets and smooth floors. Examples include NASA's Urban Robot "Urbie".
Walking robots
Walking is a difficult and dynamic problem to solve. Several robots have been made which can walk reliably on two legs; however, none have yet been made which are as robust as a human. There has been much study on human-inspired walking, such as the AMBER lab, which was established in 2008 by the Mechanical Engineering Department at Texas A&M University. Many other robots have been built that walk on more than two legs, due to these robots being significantly easier to construct. Walking robots can be used for uneven terrains, which would provide better mobility and energy efficiency than other locomotion methods. Typically, robots on two legs can walk well on flat floors and can occasionally walk up stairs. None can walk over rocky, uneven terrain. Some of the methods which have been tried are:
ZMP technique
The zero moment point (ZMP) is the algorithm used by robots such as Honda's ASIMO. The robot's onboard computer tries to keep the total inertial forces (the combination of Earth's gravity and the acceleration and deceleration of walking) exactly opposed by the floor reaction force (the force of the floor pushing back on the robot's foot). In this way, the two forces cancel out, leaving no moment (force causing the robot to rotate and fall over).
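For a simplified point-mass model whose center of mass stays at a roughly constant height, the zero moment point has a closed-form expression, and "not falling over" amounts to keeping that point inside the foot's support area. The sketch below illustrates only that textbook relation for the linear inverted pendulum model; the function names and all numerical values are invented for the example and do not describe ASIMO or any other specific robot.

```python
# Illustrative zero-moment-point (ZMP) check for a planar point-mass model
# with an (approximately) constant center-of-mass height. Example values only.

G = 9.81  # gravitational acceleration [m/s^2]

def zmp_x(x_com: float, x_com_accel: float, z_com: float) -> float:
    """ZMP location for the linear inverted pendulum model:
    x_zmp = x_com - (z_com / g) * x_com_accel."""
    return x_com - (z_com / G) * x_com_accel

def is_balanced(x_com, x_com_accel, z_com, foot_x_min, foot_x_max) -> bool:
    """In this simplified model the robot does not tip over while the ZMP
    stays within the interval covered by the supporting foot."""
    return foot_x_min <= zmp_x(x_com, x_com_accel, z_com) <= foot_x_max

# Example: COM 0.02 m ahead of the ankle and decelerating at 1.5 m/s^2,
# COM height 0.8 m, foot support spanning -0.05 m to +0.15 m.
print(round(zmp_x(0.02, -1.5, 0.8), 3))           # ~0.142 m
print(is_balanced(0.02, -1.5, 0.8, -0.05, 0.15))  # True, but close to the edge
```

A walking controller in this spirit plans the motion of the center of mass so that the computed ZMP never leaves the support polygon, which is part of why the resulting gait looks careful and flat-footed compared to human walking.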
However, this is not exactly how a human walks, and the difference is obvious to human observers, some of whom have pointed out that ASIMO walks as if it needs the lavatory. ASIMO's walking algorithm is not static, and some dynamic balancing is used (see below). However, it still requires a smooth surface to walk on. Hopping Several robots, built in the 1980s by Marc Raibert at the MIT Leg Laboratory, successfully demonstrated very dynamic walking. Initially, a robot with only one leg, and a very small foot could stay upright simply by hopping. The movement is the same as that of a person on a pogo stick. As the robot falls to one side, it would jump slightly in that direction, in order to catch itself. Soon, the algorithm was generalised to two and four legs. A bipedal robot was demonstrated running and even performing somersaults. A quadruped was also demonstrated which could trot, run, pace, and bound. For a full list of these robots, see the MIT Leg Lab Robots page. Dynamic balancing (controlled falling) A more advanced way for a robot to walk is by using a dynamic balancing algorithm, which is potentially more robust than the Zero Moment Point technique, as it constantly monitors the robot's motion, and places the feet in order to maintain stability. This technique was recently demonstrated by Anybots' Dexter Robot, which is so stable, it can even jump. Another example is the TU Delft Flame. Passive dynamics Perhaps the most promising approach uses passive dynamics where the momentum of swinging limbs is used for greater efficiency. It has been shown that totally unpowered humanoid mechanisms can walk down a gentle slope, using only gravity to propel themselves. Using this technique, a robot need only supply a small amount of motor power to walk along a flat surface or a little more to walk up a hill. This technique promises to make walking robots at least ten times more efficient than ZMP walkers, like ASIMO. Flying A modern passenger airliner is essentially a flying robot, with two humans to manage it. The autopilot can control the plane for each stage of the journey, including takeoff, normal flight, and even landing. Other flying robots are uninhabited and are known as unmanned aerial vehicles (UAVs). They can be smaller and lighter without a human pilot on board, and fly into dangerous territory for military surveillance missions. Some can even fire on targets under command. UAVs are also being developed which can fire on targets automatically, without the need for a command from a human. Other flying robots include cruise missiles, the Entomopter, and the Epson micro helicopter robot. Robots such as the Air Penguin, Air Ray, and Air Jelly have lighter-than-air bodies, are propelled by paddles, and are guided by sonar. Biomimetic flying robots (BFRs) BFRs take inspiration from flying mammals, birds, or insects. BFRs can have flapping wings, which generate the lift and thrust, or they can be propeller actuated. BFRs with flapping wings have increased stroke efficiencies, increased maneuverability, and reduced energy consumption in comparison to propeller actuated BFRs. Mammal and bird inspired BFRs share similar flight characteristics and design considerations. For instance, both mammal and bird inspired BFRs minimize edge fluttering and pressure-induced wingtip curl by increasing the rigidity of the wing edge and wingtips. Mammal and insect inspired BFRs can be impact resistant, making them useful in cluttered environments. 
Mammal inspired BFRs typically take inspiration from bats, but the flying squirrel has also inspired a prototype. Examples of bat inspired BFRs include Bat Bot and the DALER. Mammal inspired BFRs can be designed to be multi-modal; therefore, they're capable of both flight and terrestrial movement. To reduce the impact of landing, shock absorbers can be implemented along the wings. Alternatively, the BFR can pitch up and increase the amount of drag it experiences. By increasing the drag force, the BFR will decelerate and minimize the impact upon grounding. Different land gait patterns can also be implemented. Bird inspired BFRs can take inspiration from raptors, gulls, and everything in-between. Bird inspired BFRs can be feathered to increase the angle of attack range over which the prototype can operate before stalling. The wings of bird inspired BFRs allow for in-plane deformation, and the in-plane wing deformation can be adjusted to maximize flight efficiency depending on the flight gait. An example of a raptor inspired BFR is the prototype by Savastano et al. The prototype has fully deformable flapping wings and is capable of carrying a payload of up to 0.8 kg while performing a parabolic climb, steep descent, and rapid recovery. The gull inspired prototype by Grant et al. accurately mimics the elbow and wrist rotation of gulls, and they find that lift generation is maximized when the elbow and wrist deformations are opposite but equal. Insect inspired BFRs typically take inspiration from beetles or dragonflies. An example of a beetle inspired BFR is the prototype by Phan and Park, and a dragonfly inspired BFR is the prototype by Hu et al. The flapping frequency of insect inspired BFRs are much higher than those of other BFRs; this is because of the aerodynamics of insect flight. Insect inspired BFRs are much smaller than those inspired by mammals or birds, so they are more suitable for dense environments. Biologically-inspired flying robots A class of robots that are biologically inspired, but which do not attempt to mimic biology, are creations such as the Entomopter. Funded by DARPA, NASA, the United States Air Force, and the Georgia Tech Research Institute and patented by Prof. Robert C. Michelson for covert terrestrial missions as well as flight in the lower Mars atmosphere, the Entomopter flight propulsion system uses low Reynolds number wings similar to those of the hawk moth (Manduca sexta), but flaps them in a non-traditional "opposed x-wing fashion" while "blowing" the surface to enhance lift based on the Coandă effect as well as to control vehicle attitude and direction. Waste gas from the propulsion system not only facilitates the blown wing aerodynamics, but also serves to create ultrasonic emissions like that of a Bat for obstacle avoidance. The Entomopter and other biologically-inspired robots leverage features of biological systems, but do not attempt to create mechanical analogs. Snaking Several snake robots have been successfully developed. Mimicking the way real snakes move, these robots can navigate very confined spaces, meaning they may one day be used to search for people trapped in collapsed buildings. The Japanese ACM-R5 snake robot can even navigate both on land and in water. Skating A small number of skating robots have been developed, one of which is a multi-mode walking and skating device. It has four legs, with unpowered wheels, which can either step or roll. Another robot, Plen, can use a miniature skateboard or roller-skates, and skate across a desktop. 
Climbing Several different approaches have been used to develop robots that have the ability to climb vertical surfaces. One approach mimics the movements of a human climber on a wall with protrusions; adjusting the center of mass and moving each limb in turn to gain leverage. An example of this is Capuchin, built by Ruixiang Zhang at Stanford University, California. Another approach uses the specialized toe pad method of wall-climbing geckoes, which can run on smooth surfaces such as vertical glass. Examples of this approach include Wallbot and Stickybot. China's Technology Daily reported on 15 November 2008, that Li Hiu Yeung and his research group of New Concept Aircraft (Zhuhai) Co., Ltd. had successfully developed a bionic gecko robot named "Speedy Freelander". According to Yeung, the gecko robot could rapidly climb up and down a variety of building walls, navigate through ground and wall fissures, and walk upside-down on the ceiling. It was also able to adapt to the surfaces of smooth glass, rough, sticky or dusty walls as well as various types of metallic materials. It could also identify and circumvent obstacles automatically. Its flexibility and speed were comparable to a natural gecko. A third approach is to mimic the motion of a snake climbing a pole. Swimming (Piscine) It is calculated that when swimming some fish can achieve a propulsive efficiency greater than 90%. Furthermore, they can accelerate and maneuver far better than any man-made boat or submarine, and produce less noise and water disturbance. Therefore, many researchers studying underwater robots would like to copy this type of locomotion. Notable examples are the Robotic Fish G9, and Robot Tuna built to analyze and mathematically model thunniform motion. The Aqua Penguin, copies the streamlined shape and propulsion by front "flippers" of penguins. The Aqua Ray and Aqua Jelly emulate the locomotion of manta ray, and jellyfish, respectively. In 2014, iSplash-II was developed as the first robotic fish capable of outperforming real carangiform fish in terms of average maximum velocity (measured in body lengths/ second) and endurance, the duration that top speed is maintained. This build attained swimming speeds of 11.6BL/s (i.e. 3.7 m/s). The first build, iSplash-I (2014) was the first robotic platform to apply a full-body length carangiform swimming motion which was found to increase swimming speed by 27% over the traditional approach of a posterior confined waveform. Sailing Sailboat robots have also been developed in order to make measurements at the surface of the ocean. A typical sailboat robot is Vaimos. Since the propulsion of sailboat robots uses the wind, the energy of the batteries is only used for the computer, for the communication and for the actuators (to tune the rudder and the sail). If the robot is equipped with solar panels, the robot could theoretically navigate forever. The two main competitions of sailboat robots are WRSC, which takes place every year in Europe, and Sailbot. Computational robotics areas Control systems may also have varying levels of autonomy. Direct interaction is used for haptic or teleoperated devices, and the human has nearly complete control over the robot's motion. Operator-assist modes have the operator commanding medium-to-high-level tasks, with the robot automatically figuring out how to achieve them. An autonomous robot may go without human interaction for extended periods of time . Higher levels of autonomy do not necessarily require more complex cognitive capabilities. 
For example, robots in assembly plants are completely autonomous but operate in a fixed pattern. Another classification takes into account the interaction between human control and the machine motions. Teleoperation. A human controls each movement, each machine actuator change is specified by the operator. Supervisory. A human specifies general moves or position changes and the machine decides specific movements of its actuators. Task-level autonomy. The operator specifies only the task and the robot manages itself to complete it. Full autonomy. The machine will create and complete all its tasks without human interaction. Vision Computer vision is the science and technology of machines that see. As a scientific discipline, computer vision is concerned with the theory behind artificial systems that extract information from images. The image data can take many forms, such as video sequences and views from cameras. In most practical computer vision applications, the computers are pre-programmed to solve a particular task, but methods based on learning are now becoming increasingly common. Computer vision systems rely on image sensors that detect electromagnetic radiation which is typically in the form of either visible light or infra-red light. The sensors are designed using solid-state physics. The process by which light propagates and reflects off surfaces is explained using optics. Sophisticated image sensors even require quantum mechanics to provide a complete understanding of the image formation process. Robots can also be equipped with multiple vision sensors to be better able to compute the sense of depth in the environment. Like human eyes, robots' "eyes" must also be able to focus on a particular area of interest, and also adjust to variations in light intensities. There is a subfield within computer vision where artificial systems are designed to mimic the processing and behavior of biological system, at different levels of complexity. Also, some of the learning-based methods developed within computer vision have a background in biology. Environmental interaction and navigation Though a significant percentage of robots in commission today are either human controlled or operate in a static environment, there is an increasing interest in robots that can operate autonomously in a dynamic environment. These robots require some combination of navigation hardware and software in order to traverse their environment. In particular, unforeseen events (e.g. people and other obstacles that are not stationary) can cause problems or collisions. Some highly advanced robots such as ASIMO and Meinü robot have particularly good robot navigation hardware and software. Also, self-controlled cars, Ernst Dickmanns' driverless car, and the entries in the DARPA Grand Challenge, are capable of sensing the environment well and subsequently making navigational decisions based on this information, including by a swarm of autonomous robots. Most of these robots employ a GPS navigation device with waypoints, along with radar, sometimes combined with other sensory data such as lidar, video cameras, and inertial guidance systems for better navigation between waypoints. Human-robot interaction The state of the art in sensory intelligence for robots will have to progress through several orders of magnitude if we want the robots working in our homes to go beyond vacuum-cleaning the floors. 
If robots are to work effectively in homes and other non-industrial environments, the way they are instructed to perform their jobs, and especially how they will be told to stop will be of critical importance. The people who interact with them may have little or no training in robotics, and so any interface will need to be extremely intuitive. Science fiction authors also typically assume that robots will eventually be capable of communicating with humans through speech, gestures, and facial expressions, rather than a command-line interface. Although speech would be the most natural way for the human to communicate, it is unnatural for the robot. It will probably be a long time before robots interact as naturally as the fictional C-3PO, or Data of Star Trek, Next Generation. Even though the current state of robotics cannot meet the standards of these robots from science-fiction, robotic media characters (e.g., Wall-E, R2-D2) can elicit audience sympathies that increase people's willingness to accept actual robots in the future. Acceptance of social robots is also likely to increase if people can meet a social robot under appropriate conditions. Studies have shown that interacting with a robot by looking at, touching, or even imagining interacting with the robot can reduce negative feelings that some people have about robots before interacting with them. However, if pre-existing negative sentiments are especially strong, interacting with a robot can increase those negative feelings towards robots. Speech recognition Interpreting the continuous flow of sounds coming from a human, in real time, is a difficult task for a computer, mostly because of the great variability of speech. The same word, spoken by the same person may sound different depending on local acoustics, volume, the previous word, whether or not the speaker has a cold, etc.. It becomes even harder when the speaker has a different accent. Nevertheless, great strides have been made in the field since Davis, Biddulph, and Balashek designed the first "voice input system" which recognized "ten digits spoken by a single user with 100% accuracy" in 1952. Currently, the best systems can recognize continuous, natural speech, up to 160 words per minute, with an accuracy of 95%. With the help of artificial intelligence, machines nowadays can use people's voice to identify their emotions such as satisfied or angry. Robotic voice Other hurdles exist when allowing the robot to use voice for interacting with humans. For social reasons, synthetic voice proves suboptimal as a communication medium, making it necessary to develop the emotional component of robotic voice through various techniques. An advantage of diphonic branching is the emotion that the robot is programmed to project, can be carried on the voice tape, or phoneme, already pre-programmed onto the voice media. One of the earliest examples is a teaching robot named Leachim developed in 1974 by Michael J. Freeman. Leachim was able to convert digital memory to rudimentary verbal speech on pre-recorded computer discs. It was programmed to teach students in The Bronx, New York. Facial expression Facial expressions can provide rapid feedback on the progress of a dialog between two humans, and soon may be able to do the same for humans and robots. Robotic faces have been constructed by Hanson Robotics using their elastic polymer called Frubber, allowing a large number of facial expressions due to the elasticity of the rubber facial coating and embedded subsurface motors (servos). 
The coating and servos are built on a metal skull. A robot should know how to approach a human, judging by their facial expression and body language. Whether the person is happy, frightened, or crazy-looking affects the type of interaction expected of the robot. Likewise, robots like Kismet and the more recent addition, Nexi can produce a range of facial expressions, allowing it to have meaningful social exchanges with humans. Gestures One can imagine, in the future, explaining to a robot chef how to make a pastry, or asking directions from a robot police officer. In both of these cases, making hand gestures would aid the verbal descriptions. In the first case, the robot would be recognizing gestures made by the human, and perhaps repeating them for confirmation. In the second case, the robot police officer would gesture to indicate "down the road, then turn right". It is likely that gestures will make up a part of the interaction between humans and robots. A great many systems have been developed to recognize human hand gestures. Proxemics Proxemics is the study of personal space, and HRI systems may try to model and work with its concepts for human interactions. Artificial emotions Artificial emotions can also be generated, composed of a sequence of facial expressions or gestures. As can be seen from the movie Final Fantasy: The Spirits Within, the programming of these artificial emotions is complex and requires a large amount of human observation. To simplify this programming in the movie, presets were created together with a special software program. This decreased the amount of time needed to make the film. These presets could possibly be transferred for use in real-life robots. An example of a robot with artificial emotions is Robin the Robot developed by an Armenian IT company Expper Technologies, which uses AI-based peer-to-peer interaction. Its main task is achieving emotional well-being, i.e. overcome stress and anxiety. Robin was trained to analyze facial expressions and use his face to display his emotions given the context. The robot has been tested by kids in US clinics, and observations show that Robin increased the appetite and cheerfulness of children after meeting and talking. Personality Many of the robots of science fiction have a personality, something which may or may not be desirable in the commercial robots of the future. Nevertheless, researchers are trying to create robots which appear to have a personality: i.e. they use sounds, facial expressions, and body language to try to convey an internal state, which may be joy, sadness, or fear. One commercial example is Pleo, a toy robot dinosaur, which can exhibit several apparent emotions. Research robotics Much of the research in robotics focuses not on specific industrial tasks, but on investigations into new types of robots, alternative ways to think about or design robots, and new ways to manufacture them. Other investigations, such as MIT's cyberflora project, are almost wholly academic. To describe the level of advancement of a robot, the term "Generation Robots" can be used. This term is coined by Professor Hans Moravec, Principal Research Scientist at the Carnegie Mellon University Robotics Institute in describing the near future evolution of robot technology. First-generation robots, Moravec predicted in 1997, should have an intellectual capacity comparable to perhaps a lizard and should become available by 2010. 
Because the first generation robot would be incapable of learning, however, Moravec predicts that the second generation robot would be an improvement over the first and become available by 2020, with the intelligence maybe comparable to that of a mouse. The third generation robot should have intelligence comparable to that of a monkey. Though fourth generation robots, robots with human intelligence, professor Moravec predicts, would become possible, he does not predict this happening before around 2040 or 2050. Dynamics and kinematics The study of motion can be divided into kinematics and dynamics. Direct kinematics or forward kinematics refers to the calculation of end effector position, orientation, velocity, and acceleration when the corresponding joint values are known. Inverse kinematics refers to the opposite case in which required joint values are calculated for given end effector values, as done in path planning. Some special aspects of kinematics include handling of redundancy (different possibilities of performing the same movement), collision avoidance, and singularity avoidance. Once all relevant positions, velocities, and accelerations have been calculated using kinematics, methods from the field of dynamics are used to study the effect of forces upon these movements. Direct dynamics refers to the calculation of accelerations in the robot once the applied forces are known. Direct dynamics is used in computer simulations of the robot. Inverse dynamics refers to the calculation of the actuator forces necessary to create a prescribed end-effector acceleration. This information can be used to improve the control algorithms of a robot. In each area mentioned above, researchers strive to develop new concepts and strategies, improve existing ones, and improve the interaction between these areas. To do this, criteria for "optimal" performance and ways to optimize design, structure, and control of robots must be developed and implemented. Open source robotics Open source robotics research seeks standards for defining, and methods for designing and building, robots so that they can easily be reproduced by anyone. Research includes legal and technical definitions; seeking out alternative tools and materials to reduce costs and simplify builds; and creating interfaces and standards for designs to work together. Human usability research also investigates how to best document builds through visual, text or video instructions. Evolutionary robotics Evolutionary robots is a methodology that uses evolutionary computation to help design robots, especially the body form, or motion and behavior controllers. In a similar way to natural evolution, a large population of robots is allowed to compete in some way, or their ability to perform a task is measured using a fitness function. Those that perform worst are removed from the population and replaced by a new set, which have new behaviors based on those of the winners. Over time the population improves, and eventually a satisfactory robot may appear. This happens without any direct programming of the robots by the researchers. Researchers use this method both to create better robots, and to explore the nature of evolution. Because the process often requires many generations of robots to be simulated, this technique may be run entirely or mostly in simulation, using a robot simulator software package, then tested on real robots once the evolved algorithms are good enough. 
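Returning to the dynamics and kinematics definitions above, the two directions of the kinematics problem can be made concrete with a two-link planar arm, the standard textbook case. The sketch below is generic rather than specific to any robot: the link lengths are arbitrary example values, and the inverse solver returns just one of the two mirror-image "elbow" solutions.

```python
import math

# Two-link planar arm: forward kinematics maps joint angles to the
# end-effector position; inverse kinematics recovers joint angles from a
# desired position. Link lengths are arbitrary example values.

L1, L2 = 0.30, 0.25  # link lengths [m]

def forward_kinematics(q1: float, q2: float) -> tuple:
    """End-effector (x, y) for joint angles q1, q2 in radians."""
    x = L1 * math.cos(q1) + L2 * math.cos(q1 + q2)
    y = L1 * math.sin(q1) + L2 * math.sin(q1 + q2)
    return x, y

def inverse_kinematics(x: float, y: float) -> tuple:
    """One of the two mirror-image solutions; raises ValueError if the
    target lies outside the reachable workspace."""
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2.0 * L1 * L2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target outside the workspace")
    q2 = math.acos(c2)
    q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2), L1 + L2 * math.cos(q2))
    return q1, q2

# Round trip: forward kinematics of chosen angles, then recover them.
x, y = forward_kinematics(0.4, 0.9)
print(inverse_kinematics(x, y))  # ~(0.4, 0.9)
```

Redundancy and singularities, mentioned above, show up even in this small example: every reachable interior point has two joint solutions, and at full extension the two coalesce and the arm momentarily loses the ability to move its end effector further outward.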
Currently, there are about 10 million industrial robots toiling around the world, and Japan is the country with the highest density of robots in its manufacturing industry.
Bionics and biomimetics
Bionics and biomimetics apply the physiology and methods of locomotion of animals to the design of robots. For example, the design of BionicKangaroo was based on the way kangaroos jump.
Swarm robotics
Swarm robotics is an approach to the coordination of multiple robots as a system consisting of large numbers of mostly simple physical robots. "In a robot swarm, the collective behavior of the robots results from local interactions between the robots and between the robots and the environment in which they act."
Quantum computing
There has been some research into whether robotics algorithms can be run more quickly on quantum computers than they can be run on digital computers. This area has been referred to as quantum robotics.
Other research areas
Nanorobots.
Cobots (collaborative robots).
Autonomous drones.
High temperature crucibles allow robotic systems to automate sample analysis.
The main venues for robotics research are the international conferences ICRA and IROS.
Human factors
Education and training
Robotics engineers design robots, maintain them, develop new applications for them, and conduct research to expand the potential of robotics. Robots have become a popular educational tool in some middle and high schools, particularly in parts of the USA, as well as in numerous youth summer camps, raising interest in programming, artificial intelligence, and robotics among students.
Employment
Robotics is an essential component in many modern manufacturing environments. As factories increase their use of robots, the number of robotics-related jobs grows and has been observed to be steadily rising. The employment of robots in industries has increased productivity and efficiency savings and is typically seen as a long-term investment for benefactors. A study found that 47 percent of US jobs are at risk of automation "over some unspecified number of years". These claims have been criticized on the ground that social policy, not AI, causes unemployment. In a 2016 article in The Guardian, Stephen Hawking stated, "The automation of factories has already decimated jobs in traditional manufacturing, and the rise of artificial intelligence is likely to extend this job destruction deep into the middle classes, with only the most caring, creative or supervisory roles remaining". The rise of robotics is thus often used as an argument for universal basic income. According to a GlobalData September 2021 report, the robotics industry was worth $45bn in 2020, and by 2030, it will have grown at a compound annual growth rate (CAGR) of 29% to $568bn, driving jobs in robotics and related industries.
Occupational safety and health implications
A discussion paper drawn up by EU-OSHA highlights how the spread of robotics presents both opportunities and challenges for occupational safety and health (OSH). The greatest OSH benefits stemming from the wider use of robotics should be substitution for people working in unhealthy or dangerous environments. In space, defense, security, or the nuclear industry, but also in logistics, maintenance, and inspection, autonomous robots are particularly useful in replacing human workers performing dirty, dull or unsafe tasks, thus avoiding workers' exposures to hazardous agents and conditions and reducing physical, ergonomic and psychosocial risks.
For example, robots are already used to perform repetitive and monotonous tasks, to handle radioactive material or to work in explosive atmospheres. In the future, many other highly repetitive, risky or unpleasant tasks will be performed by robots in a variety of sectors like agriculture, construction, transport, healthcare, firefighting or cleaning services. Moreover, there are certain skills to which humans will be better suited than machines for some time to come and the question is how to achieve the best combination of human and robot skills. The advantages of robotics include heavy-duty jobs with precision and repeatability, whereas the advantages of humans include creativity, decision-making, flexibility, and adaptability. This need to combine optimal skills has resulted in collaborative robots and humans sharing a common workspace more closely and led to the development of new approaches and standards to guarantee the safety of the "man-robot merger". Some European countries are including robotics in their national programs and trying to promote a safe and flexible cooperation between robots and operators to achieve better productivity. For example, the German Federal Institute for Occupational Safety and Health (BAuA) organises annual workshops on the topic "human-robot collaboration". In the future, cooperation between robots and humans will be diversified, with robots increasing their autonomy and human-robot collaboration reaching completely new forms. Current approaches and technical standards aiming to protect employees from the risk of working with collaborative robots will have to be revised. User experience Great user experience predicts the needs, experiences, behaviors, language and cognitive abilities, and other factors of each user group. It then uses these insights to produce a product or solution that is ultimately useful and usable. For robots, user experience begins with an understanding of the robot's intended task and environment, while considering any possible social impact the robot may have on human operations and interactions with it. It defines that communication as the transmission of information through signals, which are elements perceived through touch, sound, smell and sight. The author states that the signal connects the sender to the receiver and consists of three parts: the signal itself, what it refers to, and the interpreter. Body postures and gestures, facial expressions, hand and head movements are all part of nonverbal behavior and communication. Robots are no exception when it comes to human-robot interaction. Therefore, humans use their verbal and nonverbal behaviors to communicate their defining characteristics. Similarly, social robots need this coordination to perform human-like behaviors. Careers Robotics is an interdisciplinary field, combining primarily mechanical engineering and computer science but also drawing on electronic engineering and other subjects. The usual way to build a career in robotics is to complete an undergraduate degree in one of these established subjects, followed by a graduate (masters') degree in Robotics. Graduate degrees are typically joined by students coming from all of the contributing disciplines, and include familiarization of relevant undergraduate level subject matter from each of them, followed by specialist study in pure robotics topics which build upon them. 
As an interdisciplinary subject, robotics graduate programmes tend to be especially reliant on students working and learning together and sharing their knowledge and skills from their home discipline first degrees. Robotics industry careers then follow the same pattern, with most roboticists working as part of interdisciplinary teams of specialists from these home disciplines, complemented by the robotics graduate degrees that enable them to work together. Workers typically continue to identify as members of their home disciplines who work in robotics, rather than as 'roboticists'. This structure is reinforced by the nature of some engineering professions, which grant chartered engineer status to members of home disciplines rather than to robotics as a whole. Robotics careers are widely predicted to grow in the 21st century, as robots replace more manual and intellectual human work. Some workers who lose their jobs to robotics may be well-placed to retrain to build and maintain these robots, using their domain-specific knowledge and skills.
Nut (hardware)
A nut is a type of fastener with a threaded hole. Nuts are almost always used in conjunction with a mating bolt to fasten multiple parts together. The two partners are kept together by a combination of their threads' friction (with slight elastic deformation), a slight stretching of the bolt, and compression of the parts to be held together. In applications where vibration or rotation may work a nut loose, various locking mechanisms may be employed: lock washers, jam nuts, eccentric double nuts, specialist adhesive thread-locking fluid such as Loctite, safety pins (split pins) or lockwire in conjunction with castellated nuts, nylon inserts (nyloc nut), or slightly oval-shaped threads. Square nuts, as well as bolt heads, were the first shape made and used to be the most common largely because they were much easier to manufacture, especially by hand. While rare today due to the reasons stated below for the preference of hexagonal nuts, they are occasionally used in some situations when a maximum amount of torque and grip is needed for a given size: the greater length of each side allows a spanner to be applied with a larger surface area and more leverage at the nut. The most common shape today is hexagonal, for similar reasons as the bolt head: six sides give a good granularity of angles for a tool to approach from (good in tight spots), but more (and smaller) corners would be vulnerable to being rounded off. It takes only one sixth of a rotation to obtain the next side of the hexagon and grip is optimal. However, polygons with more than six sides do not give the requisite grip and polygons with fewer than six sides take more time to be given a complete rotation. Other specialized shapes exist for certain needs, such as wingnuts for finger adjustment and captive nuts (e.g. cage nuts) for inaccessible areas. History Nuts and bolts were originally hand-crafted together, so that each nut matched its own bolt, but they were not interchangeable. This made it virtually impossible to replace lost or damaged fixers, as they were all different. Joseph Whitworth in 1841 proposed that a standard should be set, but it did not happen immediately. In 1851 the Great Exhibition of the Works of Industry of All Nations was to be held in Hyde Park, London, England, and it was decided to build the Crystal Palace as part; this had to be done in 190 days, and at reasonable cost. Research into the remains of the destroyed building in 2024 revealed a major innovation that made this possible. The construction firm responsible, Fox Henderson, decided to use nuts and bolts, but to use standardised sizes, a revolutionary method at the time. This enabled the building to be completed in time. The use of interchangeable nuts and bolts was so successful that the Whitworth standard was widely adopted. A British standard was not formally adopted until 1905. Types There is a wide variety of nuts, from household hardware versions to specialized industry-specific designs that are engineered to meet various technical standards. Fasteners used in automotive, engineering, and industrial applications usually need to be tightened to a specific torque setting, using a torque wrench. Nuts are graded with strength ratings compatible with their respective bolts; for example, an ISO property class 10 nut will be able to support the bolt proof strength load of an ISO property class 10.9 bolt without stripping. 
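The link between property class, proof load, and torque setting mentioned above can be made concrete with a rough calculation. The sketch below uses two widely quoted engineering relations, the tensile-stress-area approximation A_s ≈ 0.7854·(d − 0.9382·p)² and the short-form torque equation T ≈ K·F·d; the nut factor K = 0.2 and the choice of preloading to about 70% of proof load are common rules of thumb assumed here for illustration, not requirements of any standard.

```python
# Rough proof-load and tightening-torque estimate for a metric bolt/nut pair.
# The stress-area approximation and T = K * F * d are standard rules of thumb;
# the nut factor K and the 70%-of-proof preload are assumptions for the example.

def tensile_stress_area_mm2(d_mm: float, pitch_mm: float) -> float:
    """Approximate tensile stress area of an ISO metric thread [mm^2]."""
    return 0.7854 * (d_mm - 0.9382 * pitch_mm) ** 2

def proof_load_kN(d_mm: float, pitch_mm: float, proof_stress_MPa: float) -> float:
    return tensile_stress_area_mm2(d_mm, pitch_mm) * proof_stress_MPa / 1000.0

def tightening_torque_Nm(preload_kN: float, d_mm: float, k_factor: float = 0.2) -> float:
    """Short-form torque equation T = K * F * d (diameter converted to metres)."""
    return k_factor * (preload_kN * 1000.0) * (d_mm / 1000.0)

# Example: M10 x 1.5 pair, class 10.9 bolt (proof stress about 830 MPa).
a_s = tensile_stress_area_mm2(10.0, 1.5)       # ~58 mm^2
f_proof = proof_load_kN(10.0, 1.5, 830.0)      # ~48 kN
torque = tightening_torque_Nm(0.7 * f_proof, 10.0)
print(f"A_s = {a_s:.1f} mm^2, proof load = {f_proof:.1f} kN, torque = {torque:.0f} N*m")
```

The point of the class system is visible in the numbers: a class 10 nut is specified so that it can carry roughly this 48 kN proof load of the mating class 10.9 bolt without its threads stripping, and the torque wrench setting (here on the order of 65 to 70 N·m with the assumed nut factor) is what puts that preload into the joint.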
Locknuts
Many specialised types of nut exist to resist loosening of bolted joints, either by providing a prevailing torque against the male fastener or by gripping against the bolted components. These are generally referred to as locknuts. Common types include:
Castellated nut
Distorted thread locknut
Centerlock nut
Elliptical offset locknut
Toplock nut
Interfering thread nut
Tapered thread nut
Jam nut
Jet nut (K-nut)
Keps nut (K-nut or washer nut), with a star-type lock washer
Nyloc plate nut
Polymer insert nut (Nyloc)
Security locknut
Serrated face nut
Serrated flange nut
Speed nut (Sheet metal nut or Tinnerman nut)
Split beam nut
BINX nut
Standard nut sizes
Metric hex nuts
Note that flat (spanner or wrench) sizes differ between industry standards; for example, wrench sizes of fasteners used in Japanese-built cars comply with the JIS automotive standard (a brief comparison of commonly cited sizes appears after the Manufacture section below).
SAE hex nuts
Classifications
Hex nuts, recognized by their six-sided shape, and square nuts, with a square form, are commonly used. Steel nuts are strong and well suited to construction, while stainless steel nuts resist rust, making them suitable for outdoor use. Brass nuts, being corrosion-resistant, find their place in electrical and plumbing work. Lock nuts, such as nylon-insert or prevailing-torque types, prevent loosening due to vibration or torque, catering to specific needs across industries.
Manufacture
The manufacturing process of nuts involves several steps. It begins with the selection of raw materials such as steel, stainless steel, or brass, depending on the desired type of nut. The chosen material undergoes heating to make it more malleable, followed by forming or forging processes to create the basic shape of the nut. Threads are then cut or formed onto the nut using specialized machinery. After threading, nuts may undergo additional treatments such as heat treatment or surface finishing to enhance their strength, durability, or appearance. Quality control checks are performed throughout the manufacturing process to ensure that the nuts meet industry standards and specifications.
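As flagged in the standard nut sizes note above, the width across flats for the same nominal thread is not universal. The short sketch below lists commonly cited values from general engineering references for three sizes; these figures are assumptions of this example rather than data from the article's omitted size tables, so confirm them against the current editions of ISO 4032, DIN 934, and the applicable JIS standard before relying on them.

```python
# Commonly cited width-across-flats values (mm) for the same nominal thread
# under different standards. Illustrative reference values only; always check
# the current edition of the relevant standard before use.
ACROSS_FLATS_MM = {
    # thread: (ISO 4032, DIN 934, JIS automotive / small series)
    "M8":  (13, 13, 12),
    "M10": (16, 17, 14),
    "M12": (18, 19, 17),
}

for thread, (iso, din, jis) in ACROSS_FLATS_MM.items():
    print(f"{thread}: ISO {iso} mm, DIN {din} mm, JIS {jis} mm")
```

This is why a spanner set chosen for a Japanese-built car typically differs from one chosen for a European-built car, even though the threads themselves are usually interchangeable.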
Fish meal
Fish meal, sometimes spelt fishmeal, is a commercial product made from whole wild-caught fish, bycatch, and fish by-products to feed farm animals, e.g., pigs, poultry, and farmed fish. Because it is calorically dense and cheap to produce, fishmeal has played a critical role in the growth of factory farms and the number of farm animals it is possible to breed and feed. Fishmeal takes the form of powder or cake. This form is obtained by drying the fish or fish trimmings, and then grinding it. If the fish used is a fatty fish it is first pressed to extract most of the fish oil. The production and large-scale use of fishmeal are controversial. The lucrative market for fishmeal as a feed encourages corporate fisheries not to limit their yields of by-catch (from which fish meal is made), and thus leads to depletion of ecosystems, environmental damage, and the collapse of local fisheries. Its role in facilitating the breeding and over-feeding of millions of pigs and chickens on factory farms has also been criticized by animal rights and animal welfare groups. Manufacturers of fishmeal counter that fishmeal's role in the feeding and breeding of millions of farm animals leads to the production of more food and the feeding of millions of people around the world. History Fish byproducts have been used historically to feed poultry, pigs, and other farmed fish. A primitive form of fishmeal is mentioned in The Travels of Marco Polo at the beginning of the 14th century: "they accustom their cattle, cows, sheep, camels, and horses to feed upon dried fish, which being regularly served to them, they eat without any sign of dislike." The use of herring as an industrial raw material started as early as about 800 AD in Norway; a very primitive process of pressing the oil out of herring by means of wooden boards and stones was employed. Use Prior to 1910, fish meal was primarily used as fertilizer, at least in the UK. Fish meal is now primarily used as a protein supplement in compound feed. As of 2010, about 56% of fish meal was used to feed farmed fish, about 20% was used in pig feed, about 12% in poultry feed, and about 12% in other uses, which included fertilizer. Fishmeal and fish oil are the principal sources of omega-3 long-chain polyunsaturated fatty acids (eicosapentaenoic acid [EPA] and docosahexaenoic acid [DHA]) in animal diets. The cost of 65% protein fishmeal has varied between around $385 to $554 per ton since 2000, which is about two to three times the price of soybean meal. The rising demand for fish, as people in the developed world turn away from red meat and toward other sources of meat protein, has increased demand for farmed fish, with farmed fish accounting for half the fish consumed worldwide as of 2016. Demand for fish meal has increased accordingly, but harvests are regulated and supply cannot expand. This has led to a trend towards use of other ingredients such as soybean meal, cottonseed meal, leftovers from processing from corn and wheat, legumes, and algae, and an increase in research to find alternatives to fish meal and alternate strategic uses (for instance, in the growth phase, after newborn fish are established). Fish used Fishmeal can be made from almost any type of seafood, but is generally manufactured from wild-caught, small marine fish that contain a high percentage of bones and oil. 
Previously, these fish were considered unsuitable for direct human consumption, but more recent research indicates that the vast majority of fishmeal made from whole wild-caught fish is made from fish suitable for direct human consumption. Other sources of fishmeal are bycatch and byproducts of trimmings made during the processing (fish waste or offal) of various seafood products destined for direct human consumption. The main fish sources by country are:
Chile: anchovies, horse mackerel
China: various species
Denmark: pout, sand eel, sprat
European Union: pout, capelin, sand eel, and mackerel
Iceland and Norway: capelin, herring, blue whiting
Japan: sardine, pilchard, sauries, mackerels
Peru: anchovies
South Africa: pilchard
Thailand: various species
United States: menhaden, pollock
It takes 4 to 5 tons of fish to produce one ton of fish meal; about 6 million tons of fish are harvested each year solely to make fish meal.
Environmental impact
Fish meal production is a significant contributor to over-fishing and risks pushing fisheries beyond their replacement rate. Some areas of the world, such as Western Africa, have seen a large increase in fish meal production, which in turn harms local fisheries and can drive them into collapse.
Processing
Fishmeal is made by cooking, pressing, drying, and grinding fish or fish waste into a solid. Most of the water and some or all of the oil is removed. Four or five tonnes of fish are needed to manufacture one tonne of dry fishmeal; a rough worked example of this yield follows below, after the Risks section.
Of the several ways of making fishmeal from raw fish, the simplest is to let the fish dry out in the sun before grinding and pressing. This method is still used in some parts of the world where processing plants are not available, but the end product is of poor quality compared with meal made by modern methods. Today, all industrial fish meal is made by the following processes:
Cooking: The fish are moved through a commercial cooker (a long, steam-jacketed cylinder) by a screw conveyor. This is a critical stage in preparing the fishmeal, as incomplete cooking means the liquid from the fish cannot be pressed out satisfactorily, and overcooking makes the material too soft for pressing. No drying occurs in the cooking stage.
Pressing: The cooked fish is compressed inside a perforated tube, expelling some of its liquids and leaving "press cake". Water content is reduced from 70% to about 50% and oil down to 4%.
Drying: The press cake is dried by tumbling inside a heated drum. Under-drying may result in the growth of molds or bacteria; over-drying can cause scorching and a reduction in the meal's nutritional value. Two alternative methods of drying are used:
Direct: Very hot air at a temperature of 500 °C (932 °F) is passed over the material as it is tumbled rapidly in a cylindrical drum. While quicker, heat damage is much more likely if the process is not carefully controlled.
Indirect: The meal is tumbled inside a cylinder containing steam-heated discs.
Grinding: The dried meal is ground to remove any lumps or bone particles.
Nutrient composition
Any complete diet must contain some protein, but the nutritional value of the protein relates directly to its amino acid composition and digestibility. High-quality fishmeal normally contains between 60% and 72% crude protein by weight. Typical diets for fish may contain from 32% to 45% total protein by weight.
Risks
Unmodified fish meal can spontaneously combust from heat generated by oxidation of the polyunsaturated fatty acids in the meal.
In the past, factory ships have sunk because of such fires. That danger has been eliminated by adding antioxidants to the meal. As of 2001, ethoxyquin was the most commonly used antioxidant, usually in the range 200–1000 mg/kg. There has been some speculation that ethoxyquin in pet foods might be responsible for multiple health problems. To date, the U.S. Food and Drug Administration has only found a verifiable connection between ethoxyquin and buildup of protoporphyrin IX in the liver, as well as elevations in liver-related enzymes in some animals, but with no known health consequences from these effects. In 1997, the Center for Veterinary Medicine asked pet food manufacturers to voluntarily limit ethoxyquin levels to 75 ppm until further evidence is reported. However, most pet foods that contain ethoxyquin have never exceeded this amount. Ethoxyquin has been shown to be slightly toxic to fish. Though it has been approved for use in foods in the US, and as a spray insecticide for fruits, ethoxyquin has not been thoroughly tested for its carcinogenic potential. Ethoxyquin has long been suggested to be a possible carcinogen, and a very closely related chemical, 1,2-dihydro-2,2,4-trimethylquinoline, has been shown to have carcinogenic activity in rats, and a potential for carcinogenic effect to fishmeal prior to storage or transportation. Globally, most of the fishmeal products are characterised by possessing a certain level of plastics pollution. A recent study showed that a wide range of plastics content was found, ranging from 0 to 526.7 n/kg in samples from 26 different fishmeal products, from 11 countries on four continents and Antarctica.
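The yield figures quoted in the Processing section above (raw fish at roughly 70% water, a 4 to 5 : 1 raw-fish-to-meal ratio) can be made concrete with a rough mass balance. The Python sketch below uses only those figures plus an assumed residual moisture of about 10% in the finished meal, which is a typical value but not one stated in the text; it ignores oil removal and processing losses, which is why its ratio comes out lower than the quoted one.

```python
# Rough mass balance for fishmeal production. Raw fish is taken as 70% water
# (from the text); 10% residual moisture in the finished meal is an assumption.
# Oil removal and processing losses are ignored for simplicity.

def meal_yield(raw_fish_kg: float,
               raw_water_frac: float = 0.70,
               meal_water_frac: float = 0.10) -> float:
    """Estimate the mass of dried meal obtainable from a batch of raw fish,
    assuming only water is removed (no oil or solids losses)."""
    dry_solids = raw_fish_kg * (1.0 - raw_water_frac)   # solids carried through
    return dry_solids / (1.0 - meal_water_frac)          # add back residual moisture

if __name__ == "__main__":
    batch = 1000.0  # kg of raw fish
    meal = meal_yield(batch)
    print(f"{batch:.0f} kg raw fish -> ~{meal:.0f} kg meal "
          f"(ratio {batch / meal:.1f} : 1)")
    # With these numbers the ratio comes out near 3 : 1; the 4-5 : 1 ratio
    # quoted in the text also reflects oil removal and processing losses.
```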
Technology
Animal husbandry
null
4838571
https://en.wikipedia.org/wiki/Position%20operator
Position operator
In quantum mechanics, the position operator is the operator that corresponds to the position observable of a particle. When the position operator is considered with a wide enough domain (e.g. the space of tempered distributions), its eigenvalues are the possible position vectors of the particle. In one dimension, if by the symbol $|x\rangle$ we denote the unitary eigenvector of the position operator corresponding to the eigenvalue $x$, then $|x\rangle$ represents the state of the particle in which we know with certainty to find the particle itself at position $x$. Therefore, denoting the position operator by the symbol $\hat{X}$, we can write $\hat{X}|x\rangle = x|x\rangle$ for every real position $x$. One possible realization of the unitary state with position $x$ is the Dirac delta (function) distribution centered at the position $x$, often denoted by $\delta_x$. In quantum mechanics, the ordered (continuous) family of all Dirac distributions, i.e. the family $(\delta_x)_{x\in\mathbb{R}}$, is called the (unitary) position basis, just because it is a (unitary) eigenbasis of the position operator in the space of tempered distributions. It is fundamental to observe that there exists only one linear continuous endomorphism $\hat{X}$ on the space of tempered distributions such that $\hat{X}(\delta_x) = x\,\delta_x$ for every real point $x$. It is possible to prove that this unique endomorphism is necessarily defined by $\hat{X}(\psi) = \mathrm{x}\,\psi$ for every tempered distribution $\psi$, where $\mathrm{x}$ denotes the coordinate function of the position line, defined from the real line into the complex plane by $\mathrm{x} : \mathbb{R} \to \mathbb{C},\ x \mapsto x$.
Introduction
Consider representing the quantum state of a particle at a certain instant of time by a square-integrable wave function $\psi$. For now, assume one space dimension (i.e. the particle "confined to" a straight line). If the wave function is normalized, then the square modulus $|\psi|^2$ represents the probability density of finding the particle at some position of the real line, at a certain time. That is, if $\|\psi\|^2 = \int_{\mathbb{R}} |\psi(x)|^2\,\mathrm{d}x = 1$, then the probability to find the particle in the position range $[a, b]$ is $\int_a^b |\psi(x)|^2\,\mathrm{d}x$. Hence the expected value of a measurement of the position for the particle is $\langle X \rangle_\psi = \int_{\mathbb{R}} x\,|\psi(x)|^2\,\mathrm{d}x$, where $\mathrm{x}$ is the coordinate function, which is simply the canonical embedding of the position line into the complex plane. Strictly speaking, the observable position can be point-wisely defined as $(\hat{X}\psi)(x) = x\,\psi(x)$ for every wave function $\psi$ and for every point $x$ of the real line. In the case of equivalence classes $\psi \in L^2(\mathbb{R}, \mathbb{C})$ the definition reads directly as $\hat{X}\psi = \mathrm{x}\,\psi$. That is, the position operator multiplies any wave function $\psi$ by the coordinate function $\mathrm{x}$.
Three dimensions
The generalisation to three dimensions is straightforward. The space-time wavefunction is now $\psi(\mathbf{r}, t)$ and the expectation value of the position operator at the state $\psi$ is $\langle \hat{\mathbf{r}} \rangle_\psi = \int \mathbf{r}\,|\psi(\mathbf{r}, t)|^2\,\mathrm{d}^3 r$, where the integral is taken over all space. The position operator is $\hat{\mathbf{r}}\psi = \mathbf{r}\,\psi$.
Basic properties
In the above definition, which regards the case of a particle confined upon a line, the careful reader may remark that there does not exist any clear specification of the domain and the co-domain for the position operator. In the literature, more or less explicitly, we find essentially three main directions to address this issue. The position operator is defined on the subspace $D_X$ of $L^2(\mathbb{R}, \mathbb{C})$ formed by those equivalence classes $\psi$ whose product by the embedding $\mathrm{x}$ lives in the space $L^2(\mathbb{R}, \mathbb{C})$ as well. In this case the position operator turns out not to be continuous (it is unbounded with respect to the topology induced by the canonical scalar product of $L^2(\mathbb{R}, \mathbb{C})$), with no eigenvectors, no eigenvalues and consequently with empty point spectrum. The position operator is defined on the Schwartz space $\mathcal{S}(\mathbb{R})$ (i.e. the nuclear space of all smooth complex functions defined upon the real line whose derivatives are rapidly decreasing).
In this case the position operator turns out to be continuous (with respect to the canonical topology of $\mathcal{S}(\mathbb{R})$) and injective, with no eigenvectors, no eigenvalues and consequently with empty point spectrum. It is (fully) self-adjoint with respect to the scalar product of $L^2(\mathbb{R}, \mathbb{C})$ in the sense that $\langle \hat{X}\psi \,|\, \phi \rangle = \langle \psi \,|\, \hat{X}\phi \rangle$ for every $\psi, \phi \in \mathcal{S}(\mathbb{R})$. The position operator is defined on the dual space $\mathcal{S}'(\mathbb{R})$ of $\mathcal{S}(\mathbb{R})$ (i.e. the nuclear space of tempered distributions). As the embedding $\mathrm{x}$ is a smooth function of at most polynomial growth, the product of a tempered distribution $\psi$ by the embedding $\mathrm{x}$ always lives in $\mathcal{S}'(\mathbb{R})$. In this case the position operator turns out to be continuous (with respect to the canonical topology of $\mathcal{S}'(\mathbb{R})$) and surjective, endowed with complete families of generalized eigenvectors and real generalized eigenvalues. It is self-adjoint with respect to the scalar product of $L^2(\mathbb{R}, \mathbb{C})$ in the sense that its transpose operator is self-adjoint, that is ${}^{t}\hat{X} = \hat{X}$. The last case is, in practice, the most widely adopted choice in the quantum mechanics literature, although this choice is rarely made explicit. It addresses the possible absence of eigenvectors by extending the Hilbert space to a rigged Hilbert space, thereby providing a mathematically rigorous notion of eigenvectors and eigenvalues.
Eigenstates
The eigenfunctions of the position operator (on the space of tempered distributions), represented in position space, are Dirac delta functions. Informal proof. To show that possible eigenvectors of the position operator should necessarily be Dirac delta distributions, suppose that $\psi$ is an eigenstate of the position operator with eigenvalue $x_0$. We write the eigenvalue equation in position coordinates, $\hat{X}\psi(x) = x\,\psi(x) = x_0\,\psi(x)$, recalling that $\hat{X}$ simply multiplies the wave-functions by the function $\mathrm{x}$ in the position representation. Since the function $x$ is variable while $x_0$ is a constant, $\psi$ must be zero everywhere except at the point $x_0$. Clearly, no continuous function satisfies such properties, and we cannot simply define the wave-function to be a complex number at that point, because its $L^2$-norm would be 0 and not 1. This suggests the need for a "functional object" concentrated at the point $x_0$ and with integral different from 0: any multiple of the Dirac delta centered at $x_0$. The normalized solution to the equation $\mathrm{x}\,\psi = x_0\,\psi$ is $\psi(x) = \delta(x - x_0)$, or better $\psi = \delta_{x_0}$, such that $\mathrm{x}\,\delta_{x_0} = x_0\,\delta_{x_0}$. Indeed, recalling that the product of any function by the Dirac distribution centered at a point is the value of the function at that point times the Dirac distribution itself, we obtain immediately $\mathrm{x}\,\delta_{x_0} = \mathrm{x}(x_0)\,\delta_{x_0} = x_0\,\delta_{x_0}$. Although such Dirac states are physically unrealizable and, strictly speaking, are not functions, the Dirac distribution centered at $x_0$ can be thought of as an "ideal state" whose position is known exactly (any measurement of the position always returns the eigenvalue $x_0$). Hence, by the uncertainty principle, nothing is known about the momentum of such a state.
Momentum space
Usually, in quantum mechanics, by representation in momentum space we mean the representation of states and observables with respect to the canonical unitary momentum basis $(|p\rangle)_{p\in\mathbb{R}}$. In momentum space, the position operator in one dimension is represented by the differential operator $i\hbar\,\frac{\mathrm{d}}{\mathrm{d}p}$, where: the representation of the position operator in the momentum basis is naturally defined by $(\hat{X}\psi)(p) = i\hbar\,\frac{\mathrm{d}\psi}{\mathrm{d}p}(p)$, for every wave function (tempered distribution) $\psi$; $p$ represents the coordinate function on the momentum line and the wave-vector function $k$ is defined by $k = p/\hbar$.
Formalism in L2(R, C)
Consider the case of a spinless particle moving in one spatial dimension. The state space for such a particle contains $L^2(\mathbb{R}, \mathbb{C})$, the Hilbert space of complex-valued, square-integrable functions on the real line.
The position operator is defined as the self-adjoint operator $\hat{X}$ with domain of definition $D(\hat{X}) = \{\psi \in L^2(\mathbb{R}, \mathbb{C}) : \mathrm{x}\,\psi \in L^2(\mathbb{R}, \mathbb{C})\}$ and coordinate function $\mathrm{x}$ sending each point $x$ to itself, such that $(\hat{X}\psi)(x) = x\,\psi(x)$ for each pointwisely defined $\psi \in D(\hat{X})$ and each real $x$. Immediately from the definition we can deduce that the spectrum consists of the entire real line and that $\hat{X}$ has a strictly continuous spectrum, i.e., no discrete set of eigenvalues. The three-dimensional case is defined analogously. We shall keep the one-dimensional assumption in the following discussion.
Measurement theory in L2(R, C)
As with any quantum mechanical observable, in order to discuss position measurement, we need to calculate the spectral resolution of the position operator $\hat{X}$, which is $\hat{X} = \int_{\mathbb{R}} \lambda\,\mathrm{d}\mu(\lambda)$, where $\mu$ is the so-called spectral measure of the position operator. Let $\chi_B$ denote the indicator function for a Borel subset $B$ of $\mathbb{R}$. Then the spectral measure is given by $\mu(B)\,\psi = \chi_B\,\psi$, i.e., as multiplication by the indicator function of $B$. Therefore, if the system is prepared in a state $\psi$, then the probability of the measured position of the particle belonging to a Borel set $B$ is $\|\chi_B\,\psi\|^2 = \int_B |\psi(x)|^2\,\mathrm{d}x$, where the integral is taken with respect to the Lebesgue measure on the real line. After any measurement aiming to detect the particle within the subset $B$, the wave function collapses to either $\frac{\chi_B\,\psi}{\|\chi_B\,\psi\|}$ or $\frac{(1 - \chi_B)\,\psi}{\|(1 - \chi_B)\,\psi\|}$, where $\|\cdot\|$ is the Hilbert space norm on $L^2(\mathbb{R}, \mathbb{C})$.
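A small numerical sketch can make the multiplication-operator picture concrete. The Python/NumPy code below discretizes a wave function on a grid, applies the position operator as pointwise multiplication by x, and estimates both the expectation value and the probability of finding the particle in a Borel set B via the indicator function, as described above. The grid, the Gaussian test state and the choice B = [0, 1] are arbitrary numerical choices for illustration, not anything prescribed by the text.

```python
import numpy as np

# Grid for the "position line" (an arbitrary numerical choice).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# A normalized Gaussian wave packet centred at x0 = 0.5 (arbitrary test state).
x0, sigma = 0.5, 1.0
psi = np.exp(-(x - x0) ** 2 / (4 * sigma**2)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # enforce ||psi|| = 1

# Position operator in the position representation: pointwise multiplication by x.
X_psi = x * psi

# Expectation value <X> = integral of x |psi(x)|^2 dx.
expect_X = np.real(np.sum(np.conj(psi) * X_psi) * dx)

# Spectral-measure probability for the Borel set B = [0, 1]:
# multiply by the indicator function chi_B and take the squared norm.
chi_B = (x >= 0.0) & (x <= 1.0)
prob_B = np.sum(np.abs(chi_B * psi) ** 2) * dx

print(f"<X>  ~ {expect_X:.4f}  (exact value for this state: {x0})")
print(f"P(B) ~ {prob_B:.4f}  for B = [0, 1]")
```

On the grid, the discretized operator is simply a diagonal matrix with the grid points on its diagonal; its "eigenvectors" are single grid points, the discrete analogue of the Dirac deltas discussed in the Eigenstates section.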
Physical sciences
Quantum mechanics
Physics
23916629
https://en.wikipedia.org/wiki/Ebook
Ebook
An ebook (short for electronic book), also spelled as e-book or eBook, is a book publication made available in electronic form, consisting of text, images, or both, readable on the flat-panel display of computers or other electronic devices. Although sometimes defined as "an electronic version of a printed book", some e-books exist without a printed equivalent. E-books can be read on dedicated e-reader devices, also on any computer device that features a controllable viewing screen, including desktop computers, laptops, tablets and smartphones. In the 2000s, there was a trend of print and e-book sales moving to the Internet, where readers buy traditional paper books and e-books on websites using e-commerce systems. With print books, readers are increasingly browsing through images of the covers of books on publisher or bookstore websites and selecting and ordering titles online. The paper books are then delivered to the reader by mail or any other delivery service. With e-books, users can browse through titles online, select and order titles, then the e-book can be sent to them online or the user can download the e-book. By the early 2010s, e-books had begun to overtake hardcover by overall publication figures in the U.S. The main reasons people buy e-books are possibly because of lower prices, increased comfort (as they can buy from home or on the go with mobile devices) and a larger selection of titles. With e-books, "electronic bookmarks make referencing easier, and e-book readers may allow the user to annotate pages." "Although fiction and non-fiction books come in e-book formats, technical material is especially suited for e-book delivery because it can be digitally searched" for keywords. In addition, for programming books, code examples can be copied. In the U.S., the amount of e-book reading is increasing. By 2014, 28% of adults had read an e-book, compared to 23% in 2013. By 2014, 50% of American adults had an e-reader or a tablet, compared to 30% owning such devices in 2013. Besides published books and magazines that have a digital equivalent, there are also digital textbooks that are intended to serve as the text for a class and help in technology-based education. Terminology E-books are also referred to as "ebooks", "eBooks", "Ebooks", "e-Books", "e-journals", "e-editions", or "digital books". A device that is designed specifically for reading e-books is called an "e-reader", "ebook device", or "eReader". History The Readies (1930) Some trace the concept of an e-reader, a device that would enable the user to view books on a screen, to a 1930 manifesto by Bob Brown, written after watching his first "talkie" (movie with sound). He titled it The Readies, playing off the idea of the "talkie". In his book, Brown says movies have outmaneuvered the book by creating the "talkies" and, as a result, reading should find a new medium: Brown's notion, however, was much more focused on reforming orthography and vocabulary, than on medium. He says: "It is time to pull out the stopper" and begin "a bloody revolution of the word," introducing huge numbers of portmanteau symbols to replace normal words, and punctuation to simulate action or movement, so it is not clear whether this fits into the history of "e-books" or not. Later e-readers never followed a model at all like Brown's. However, he correctly predicted the miniaturization and portability of e-readers. 
In an article, Jennifer Schuessler writes: "The machine, Brown argued, would allow readers to adjust the type size, avoid paper cuts and save trees, all while hastening the day when words could be 'recorded directly on the palpitating ether.'" Brown believed that the e-reader (and his notions for changing the text itself) would bring a completely new life to reading. Schuessler correlates it with a DJ spinning bits of old songs to create a beat or an entirely new song, as opposed to just a remix of a familiar song. Inventor The inventor of the first e-book is not widely agreed upon. Some notable candidates include the following: Roberto Busa (1946–1970) The first e-book may be the Index Thomisticus, a heavily annotated electronic index to the works of Thomas Aquinas, prepared by Roberto Busa, S.J. beginning in 1946 and completed in the 1970s. Although originally stored on a single computer, a distributable CD-ROM version appeared in 1989. However, this work is sometimes omitted. Maybe this is because the digitized text was a means for studying written texts and developing linguistic concordances, rather than as a published edition in its own right. In 2005, the Index was published online. Ángela Ruiz Robles (1949) In 1949, Ángela Ruiz Robles, a teacher from Ferrol, Spain, patented the Enciclopedia Mecánica, or the Mechanical Encyclopedia, a mechanical device which operated on compressed air where text and graphics were contained on spools that users would load onto rotating spindles. Her idea was to create a device which would decrease the number of books that her pupils carried to school. The final device was planned to include audio recordings, a magnifying glass, a calculator, and an electric light for night reading. Her device was never put into production but a prototype is on display at the National Museum of Science and Technology in A Coruña. Douglas Engelbart and Andries van Dam (1960s) Alternatively, some historians consider electronic books to have started in the early 1960s, with the NLS project headed by Douglas Engelbart at Stanford Research Institute (SRI), and the Hypertext Editing System and FRESS projects headed by Andries van Dam at Brown University. FRESS documents ran on IBM main frames and were structure-oriented rather than line-oriented. They were formatted dynamically for different users, display hardware, window sizes, and so on, as well as having automated tables of contents, indexes, and so on. All these systems also provided extensive hyperlinking, graphics, and other capabilities. Van Dam is generally thought to have coined the term "electronic book", and it was established enough to use in an article title by 1985. FRESS was used for reading extensive primary texts online, as well as for annotation and online discussions in several courses, including English Poetry and Biochemistry. Brown's faculty made extensive use of FRESS. For example the philosopher Roderick Chisholm used it to produce several of his books. Thus in the Preface to Person and Object (1979) he writes: "The book would not have been completed without the epoch-making File Retrieval and Editing System..." Brown University's work in electronic book systems continued for many years, including US Navy funded projects for electronic repair-manuals; a large-scale distributed hypermedia system known as InterMedia; a spinoff company Electronic Book Technologies that built DynaText, the first SGML-based e-reader system; and the Scholarly Technology Group's extensive work on the Open eBook standard. 
Michael S. Hart (1971) Despite the extensive earlier history, several publications report Michael S. Hart as the inventor of the e-book. In 1971, the operators of the Xerox Sigma V mainframe at the University of Illinois gave Hart extensive computer time. Seeking a worthy use of this resource, he created his first electronic document by typing the United States Declaration of Independence into a computer in plain text. Hart planned to create documents using plain text to make them as easy as possible to download and view on devices. After Hart first adapted the U.S. Declaration of Independence into an electronic document in 1971, Project Gutenberg was launched to create electronic copies of more texts, especially books. Early hardware implementations Dedicated hardware devices for ebook reading began to appear in the 70s and 80s, in addition to the main frame and laptop solutions, and collections of data per se. One early e-book implementation was the desktop prototype for a proposed notebook computer, the Dynabook, in the 1970s at PARC: a general-purpose portable personal computer capable of displaying books for reading. In 1980, the U.S. Department of Defense began a concept development for a portable electronic delivery device for technical maintenance information called project PEAM, the Portable Electronic Aid for Maintenance. Detailed specifications were completed in FY 1981/82, and prototype development began with Texas Instruments that same year. Four prototypes were produced and delivered for testing in 1986, and tests were completed in 1987. The final summary report was produced in 1989 by the U.S. Army Research Institute for the Behavioral and Social Sciences, authored by Robert Wisher and J. Peter Kincaid. A patent application for the PEAM device, titled "Apparatus for delivering procedural type instructions", was submitted by Texas Instruments on December 4, 1985, listing John K. Harkins and Stephen H. Morriss as inventors. In 1992, Sony launched the Data Discman, an electronic book reader that could read e-books that were stored on CDs. One of the electronic publications that could be played on the Data Discman was called Library of the Future. Early e-books were generally written for specialty areas and a limited audience, meant to be read only by small and devoted interest groups. The scope of the subject matter of these e-books included technical manuals for hardware, manufacturing techniques, and other subjects. In the 1990s, the general availability of the Internet made transferring electronic files much easier, including e-books. In 1993, Paul Baim released a freeware HyperCard stack, called EBook, that allowed easy import of any text file to create a pageable version similar to an electronic paperback book. A notable feature was automatic tracking of the last page read so that on returning to the 'book' you were taken back to where you had previously left off reading. The title of this stack may have helped popularize the term 'ebook'. E-book formats As e-book formats emerged and proliferated, some garnered support from major software companies, such as Adobe with its PDF format that was introduced in 1993. Unlike most other formats, PDF documents are generally tied to a particular dimension and layout, rather than adjusting dynamically to the current page, window, or another size. Different e-reader devices followed different formats, most of them accepting books in only one or a few formats, thereby fragmenting the e-book market even more. 
Due to the exclusiveness and limited readerships of e-books, the fractured market of independent publishers and specialty authors lacked consensus regarding a standard for packaging and selling e-books. Meanwhile, scholars formed the Text Encoding Initiative, which developed consensus guidelines for encoding books and other materials of scholarly interest for a variety of analytic uses as well as reading. Countless literary and other works have been developed using the TEI approach. In the late 1990s, a consortium formed to develop the Open eBook format as a way for authors and publishers to provide a single source-document which many book-reading software and hardware platforms could handle. Several scholars from the TEI were closely involved in the early development of Open eBook, including Allen Renear, Elli Mylonas, and Steven DeRose, all from Brown. Focused on portability, Open eBook as defined required subsets of XHTML and CSS; a set of multimedia formats (others could be used, but there must also be a fallback in one of the required formats), and an XML schema for a "manifest", to list the components of a given e-book, identify a table of contents, cover art, and so on. This format led to the open format EPUB. Google Books has converted many public domain works to this open format. In 2010, e-books continued to gain in their own specialist and underground markets. Many e-book publishers began distributing books that were in the public domain. At the same time, authors with books that were not accepted by publishers offered their works online so they could be seen by others. Unofficial (and occasionally unauthorized) catalogs of books became available on the web, and sites devoted to e-books began disseminating information about e-books to the public. Nearly two-thirds of the U.S. Consumer e-book publishing market are controlled by the "Big Five". The "Big Five" publishers are: Hachette, HarperCollins, Macmillan, Penguin Random House and Simon & Schuster. Libraries U.S. libraries began to offer free e-books to the public in 1998 through their websites and associated services, although the e-books were primarily scholarly, technical, or professional in nature, and could not be downloaded. In 2003, libraries began offering free downloadable popular fiction and non-fiction e-books to the public, launching an e-book lending model that worked much more successfully for public libraries. The number of library e-book distributors and lending models continued to increase over the next few years. From 2005 to 2008, libraries experienced a 60% growth in e-book collections. In 2010, a Public Library Funding and Technology Access Study by the American Library Association found that 66% of public libraries in the U.S. were offering e-books, and a large movement in the library industry began to seriously examine the issues relating to e-book lending, acknowledging a "tipping point" when e-book technology would become widely established. Content from public libraries can be downloaded to e-readers using application software like Overdrive and Hoopla. The U.S. National Library of Medicine has for many years provided PubMed, a comprehensive bibliography of medical literature. In early 2000, NLM set up the PubMed Central repository, which stores full-text e-book versions of many medical journal articles and books, through co-operation with scholars and publishers in the field. 
Pubmed Central also now provides archiving and access to over 4.1 million articles, maintained in a standard XML format known as the Journal Article Tag Suite (JATS). Despite the widespread adoption of e-books, some publishers and authors have not endorsed the concept of electronic publishing, citing issues with user demand, copyright infringement and challenges with proprietary devices and systems. In a survey of interlibrary loan (ILL) librarians, it was found that 92% of libraries held e-books in their collections and that 27% of those libraries had negotiated ILL rights for some of their e-books. This survey found significant barriers to conducting interlibrary loan for e-books. Patron-driven acquisition (PDA) has been available for several years in public libraries, allowing vendors to streamline the acquisition process by offering to match a library's selection profile to the vendor's e-book titles. The library's catalog is then populated with records for all of the e-books that match the profile. The decision to purchase the title is left to the patrons, although the library can set purchasing conditions such as a maximum price and purchasing caps so that the dedicated funds are spent according to the library's budget. The 2012 meeting of the Association of American University Presses included a panel on the PDA of books produced by university presses, based on a preliminary report by Joseph Esposito, a digital publishing consultant who has studied the implications of PDA with a grant from the Andrew W. Mellon Foundation. Challenges Although the demand for e-book services in libraries has grown in the first two decades of the 21st century, difficulties keep libraries from providing some e-books to clients. Publishers will sell e-books to libraries, but in most cases they will only give libraries a limited license to the title, meaning that the library does not own the electronic text but is allowed to circulate it for either a certain period of time, or a certain number of check outs, or both. When a library purchases an e-book license, the cost is at least three times what it would be for a personal consumer. E-book licenses are more expensive than paper-format editions because publishers are concerned that an e-book that is sold could theoretically be read and/or checked out by a huge number of users, potentially damaging sales. However, some studies have found the opposite effect to be true (for example, Hilton and Wikey 2010). Archival storage The Internet Archive and Open Library offer more than six million fully accessible public domain e-books. Project Gutenberg has over 52,000 freely available public domain e-books. Dedicated hardware readers and mobile software An e-reader, also called an e-book reader or e-book device, is a mobile electronic device that is designed primarily for the purpose of reading e-books and digital periodicals. An e-reader is similar in form, but more limited in purpose than a tablet. In comparison to tablets, many e-readers are better than tablets for reading because they are more portable, have better readability in sunlight and have longer battery life. In July 2010, online bookseller Amazon.com reported sales of e-books for its proprietary Kindle, outnumbered sales of hardcover books for the first time ever during the second quarter of 2010, saying it sold 140 e-books for every 100 hardcover books, including hardcovers for which there was no digital edition. By January 2011, e-book sales at Amazon had surpassed its paperback sales. 
In the overall US market, paperback book sales are still much larger than either hardcover or e-book. The American Publishing Association estimated e-books represented 8.5% of sales as of mid-2010, up from 3% a year before. At the end of the first quarter of 2012, e-book sales in the United States surpassed hardcover book sales for the first time. Until late 2013, use of an e-reader was not allowed on airplanes during takeoff and landing by the FAA. In November 2013, the FAA allowed use of e-readers on airplanes at all times if it is in Airplane Mode, which means all radios turned off, and Europe followed this guidance the next month. In 2014, The New York Times predicted that by 2018 e-books will make up over 50% of total consumer publishing revenue in the United States and Great Britain. Applications Some of the major book retailers and multiple third-party developers offer free (and in some third-party cases, premium paid) e-reader software applications (apps) for the Mac and PC computers as well as for Android, Blackberry, iPad, iPhone, Windows Phone and Palm OS devices to allow the reading of e-books and other documents independently of dedicated e-book devices. Examples are apps for the Amazon Kindle, Barnes & Noble Nook, iBooks, Kobo eReader and Sony Reader. Timeline Before the 1980s Ángela Ruiz Robles patents the idea of the electronic book, called the Mechanical Encyclopedia, in Galicia, Spain. Roberto Busa begins planning the Index Thomisticus. Douglas Engelbart starts the NLS (and later Augment) projects. c. 1965 Andries van Dam starts the HES (and later FRESS) projects, with assistance from Ted Nelson, to develop and use electronic textbooks for humanities and in pedagogy. 1971 Michael S. Hart types the US Declaration of Independence into a computer to create the first e-book available on the Internet and launches Project Gutenberg in order to create electronic copies of more books. c. 1979 Roberto Busa finishes the Index Thomisticus, a complete lemmatisation of the 56 printed volumes of Saint Thomas Aquinas and of a few related authors. 1980s and 1990s 1986 Judy Malloy writes and programmes the first online hypertext fiction, Uncle Roger, with links that take the narrative in different directions depending on the reader's choice. 1989 Franklin Computer releases an electronic edition of the Bible that can only be read with a stand-alone device. 1990 Eastgate Systems publishes the first hypertext fiction released on floppy disk, afternoon, a story, by Michael Joyce. Electronic Book Technologies releases DynaText, the first SGML-based system for delivering large-scale books such as aircraft technical manuals. It was later tested on a US aircraft carrier as replacement for paper manuals. Sony launches the Data Discman e-book player. 1991 Voyager Company develops Expanded Books, which are books on CD-ROM in a digital format. 1992 F. Crugnola and I. Rigamonti design and create the first e-reader, called Incipit, as a thesis project at the Polytechnic University of Milan. Apple starts using its Doc Viewer format "to distribute documentation to developers in an electronic form", which effectively meant Inside Macintosh books. 1993 Peter James publishes his novel Host on two floppy disks, which at the time was called the "world's first electronic novel", a copy of it is stored at the Science Museum. Hugo Award and Nebula Award nominee works are included on a CD-ROM by Brad Templeton. Launch of Bibliobytes, a website for obtaining e-books, both for free and for sale on the Internet. 
Paul Baim releases the EBook 1.0 HyperCard stack that allows the user to easily convert any text file into a HyperCard based pageable book. 1994 C & M Online is founded in Raleigh, North Carolina and begins publishing e-books through its imprint, Boson Books; authors include Fred Chappell, Kelly Cherry, Leon Katz, Richard Popkin, and Robert Rodman. More than two dozen volumes of Inside Macintosh are published together on a single CD-ROM in Apple Doc Viewer format. Apple subsequently switches to using Adobe Acrobat. The popular format for publishing e-books changes from plain text to HTML. 1995 Online poet Alexis Kirke discusses the need for wireless internet electronic paper readers in his article "The Emuse". 1996 Project Gutenberg reaches 1,000 titles. Joseph Jacobson works at MIT to create electronic ink, a high-contrast, low-cost, read/write/erase medium to display e-books. 1997 E Ink Corporation is co-founded by MIT undergraduates J.D. Albert, Barrett Comiskey, MIT professor Joseph Jacobson, as well as Jeremy Rubin and Russ Wilcox to create an electronic printing technology. This technology is later used on the displays of the Sony Reader, Barnes & Noble Nook, and Amazon Kindle. 1998 Nuvo Media releases the first handheld e-reader, the Rocket eBook. SoftBook launches its SoftBook reader. This e-reader, with expandable storage, could store up to 100,000 pages of content, including text, graphics and pictures. The Cybook is sold and manufactured at first by Cytale (1998–2003) and later by Bookeen. 1999 The NIST releases the Open eBook format based on XML to the public domain; most future e-book formats derive from Open eBook. Publisher Simon & Schuster creates a new imprint called iBooks and becomes the first trade publisher to simultaneously publish some of its titles in e-book and print format. Oxford University Press makes a selection of its books available as e-books through netLibrary. Publisher Baen Books opens up the Baen Free Library to make available Baen titles as free e-books. Kim Blagg, via her company Books OnScreen, begins selling multimedia-enhanced e-books on CDs through retailers including Amazon, Barnes & Noble and Borders. 2000s 2000 Joseph Jacobson, Barrett O. Comiskey and Jonathan D. Albert are granted US patents related to displaying electronic books, these patents are later used in the displays for most e-readers. Stephen King releases his novella Riding the Bullet exclusively online and it became the first mass-market e-book, selling 500,000 copies in 48 hours. Microsoft releases the Microsoft Reader with ClearType for increased readability on PCs and handheld devices. Microsoft and Amazon work together to sell e-books that can be purchased on Amazon, and using Microsoft software downloaded to PCs and handhelds. A digitized version of the Gutenberg Bible is made available online at the British Library. 2001 Adobe releases Adobe Acrobat Reader 5.0 allowing users to underline, take notes and bookmark. 2002 Palm, Inc and OverDrive, Inc make Palm Reader e-books available worldwide, offering over 5,000 e-books in several languages; these could be read on Palm PDAs or using a computer application. Random House and HarperCollins start to sell digital versions of their titles in English. 2004 Sony Librie, the first e-reader using an E Ink display is released; it has a six-inch screen. Google announces plans to digitize the holdings of several major libraries, as part of what would later be called the Google Books Library Project. 
2005 Amazon buys Mobipocket, the creator of the mobi e-book file format and e-reader software. Google is sued for copyright infringement by the Authors Guild for scanning books still in copyright. 2006 Sony Reader PRS-500, with an E Ink screen and two weeks of battery life, is released. LibreDigital launches BookBrowse as an online reader for publisher content. 2007 The International Digital Publishing Forum releases EPUB to replace Open eBook. In November, Amazon.com releases the Kindle e-reader with 6-inch E Ink screen in the US and it sells outs in 5.5 hours. Simultaneously, the Kindle Store opens, with initially more than 88,000 e-books available. Bookeen launches Cybook Gen3 in Europe; it can display e-books and play audiobooks. 2008 Adobe and Sony agree to share their technologies (Adobe Reader and DRM) with each other. Sony sells the Sony Reader PRS-505 in UK and France. 2009 Bookeen releases the Cybook Opus in the US and Europe. Sony releases the Reader Pocket Edition and Reader Touch Edition. Amazon releases the Kindle 2 that includes a text-to-speech feature. Amazon releases the Kindle DX that has a 9.7-inch screen in the U.S. Barnes & Noble releases the Nook e-reader in the US. Amazon releases the Kindle for PC application in late 2009, making the Kindle Store library available for the first time outside Kindle hardware. 2010s 2010 January – Amazon releases the Kindle DX International Edition worldwide. April – Apple releases the iPad bundled with an e-book app called iBooks. May – Kobo Inc. releases its Kobo eReader to be sold at Indigo/Chapters in Canada and Borders in the United States. July – Amazon reports that its e-book sales outnumbered sales of hardcover books for the first time during the second quarter of 2010. August – PocketBook expands its line with an Android e-reader. August – Amazon releases the third generation Kindle, available in Wi-Fi and 3G & Wi-Fi versions. October – Bookeen reveals the Cybook Orizon at CES. October – Kobo Inc. releases an updated Kobo eReader, which includes Wi-Fi capability. November – The Sentimentalists wins the prestigious national Giller Prize in Canada; due to the small scale of the novel's publisher, the book is not widely available in printed form, so the e-book edition becomes the top-selling title on Kobo devices for 2010. November – Barnes & Noble releases the Nook Color, a color LCD tablet. December – Google launches Google eBooks offering over three million titles, becoming the world's largest e-book store to date. 2011 May – Amazon.com announces that its e-book sales in the US now exceed all of its printed book sales. June – Barnes & Noble releases the Nook Simple Touch e-reader and Nook Tablet. August – Bookeen launches its own e-books store, BookeenStore.com, and starts to sell digital versions of titles in French. September – Nature Publishing releases the pilot version of Principles of Biology, a customizable, modular textbook, with no corresponding paper edition. June/November – As the e-reader market grows in Spain, companies like Telefónica, Fnac, and Casa del Libro launch their e-readers with the Spanish brand "bq readers". November – Amazon launches the Kindle Fire and Kindle Touch, both devices designed for e-reading. 2012 E-book sales in the US market collect over three billion in revenue. January – Apple releases iBooks Author, software for creating iPad e-books to be directly published in its iBooks bookstore or to be shared as PDF files. January – Apple opens a textbook section in its iBooks bookstore. 
February – Nature Publishing announces the worldwide release of Principles of Biology, following the success of the pilot version some months earlier. February – Library.nu (previously called ebooksclub.org and gigapedia.com, a popular linking website for downloading e-books) is accused of copyright infringement and closed down by court order. March – The publishing companies Random House, Holtzbrinck, and arvato bring to market an e-book library called Skoobe. March – US Department of Justice prepares anti-trust lawsuit against Apple, Simon & Schuster, Hachette Book Group, Penguin Group, Macmillan, and HarperCollins, alleging collusion to increase the price of books sold on Amazon. March – PocketBook releases the PocketBook Touch, an E Ink Pearl e-reader, winning awards from German magazines Tablet PC and Computer Bild. June – Kbuuk releases the cloud-based e-book self-publishing SaaS platform on the Pubsoft digital publishing engine. September – Amazon releases the Kindle Paperwhite, its first e-reader with built-in front LED lights. 2013 April – Kobo releases the Kobo Aura HD with a 6.8-inch screen, which is larger than the current models produced by its US competitors. May – Mofibo launches the first Scandinavian unlimited access e-book subscription service. June – Association of American Publishers announces that e-books now account for about 20% of book sales. Barnes & Noble estimates it has a 27% share of the US e-book market. June – Barnes & Noble announces its intention to discontinue manufacturing Nook tablets, but to continue producing black-and-white e-readers such as the Nook Simple Touch. June – Apple executive Keith Moerer testifies in the e-book price fixing trial that the iBookstore held approximately 20% of the e-book market share in the United States within the months after launch – a figure that Publishers Weekly reports is roughly double many of the previous estimates made by third parties. Moerer further testified that iBookstore acquired about an additional 20% by adding Random House in 2011. Five major US e-book publishers, as part of their settlement of a price-fixing suit, are ordered to refund about $3 for every electronic copy of a New York Times best-seller that they sold from April 2010 to May 2012. This could equal $160 million in settlement charges. Barnes & Noble releases the Nook Glowlight, which has a 6-inch touchscreen using E Ink Pearl and Regal, with built-in front LED lights. July – US District Court Judge Denise Cote finds Apple guilty of conspiring to raise the retail price of e-books and schedules a trial in 2014 to determine damages. August – Kobo releases the Kobo Aura, a baseline touchscreen six-inch e-reader. September – Oyster launches its unlimited access e-book subscription service. November – US District Judge Chin sides with Google in Authors Guild v. Google, citing fair use. The authors said they would appeal. December – Scribd launches the first public unlimited access subscription service for e-books. 2014 April – Kobo releases the Aura H₂0, the world's first waterproof commercially produced e-reader. June – US District Court Judge Cote grants class action certification to plaintiffs in a lawsuit over Apple's alleged e-book price conspiracy; the plaintiffs are seeking $840 million in damages. Apple appeals the decision. June – Apple settles the e-book antitrust case that alleged Apple conspired to e-book price fixing out of court with the States; however if Judge Cote's ruling is overturned in appeal the settlement would be reversed. 
July – Amazon launches Kindle Unlimited, an unlimited-access e-book and audiobook subscription service. 2015 June – The 2nd US Circuit Court of Appeals with a 2:1 vote concurs with Judge Cote that Apple conspired to e-book price fixing and violated federal antitrust law. Apple appealed the decision. June – Amazon releases the Kindle Paperwhite (3rd generation) that is the first e-reader to feature Bookerly, a font exclusively designed for e-readers. September – Oyster announces its unlimited access e-book subscription service would be shut down in early 2016 and that it would be acquired by Google. September – Malaysian e-book company, e-Sentral, introduces for the first time geo-location distribution technology for e-books via bluetooth beacon. It was first demonstrated in a large scale at Kuala Lumpur International Airport. October – Amazon releases the Kindle Voyage that has a 6-inch, 300 ppi E Ink Carta HD display, which was the highest resolution and contrast available in e-readers as of 2014. It also features adaptive LED lights and page turn sensors on the sides of the device. October – Barnes & Noble releases the Glowlight Plus, its first waterproof e-reader. October – The US appeals court sides with Google instead of the Authors' Guild, declaring that Google did not violate copyright law in its book scanning project. December – Playster launches an unlimited-access subscription service including e-books and audiobooks. By the end of 2015, Google Books scanned more than 25 million books. By 2015, over 70 million e-readers had been shipped worldwide. 2016 March – The Supreme Court of the United States declines to hear Apple's appeal against the court's decision of July 2013 that the company conspired to e-book price fixing, hence the previous court decision stands, obliging Apple to pay $450 million. April – The Supreme Court declines to hear the Authors Guild's appeal of its book scanning case, so the lower court's decision stands; the result means that Google can scan library books and display snippets in search results without violating US copyright law. April – Amazon releases the Kindle Oasis, its first e-reader in five years to have physical page turn buttons and, as a premium product, it includes a leather case with a battery inside; without including the case, it is the lightest e-reader on the market to date. August – Kobo releases the Aura One, the first commercial e-reader with a 7.8-inch E Ink Carta HD display. By the end of the year, smartphones and tablets have both individually overtaken e-readers as methods for reading an e-book, and paperback book sales are now higher than e-book sales. 2017 February – The Association of American Publishers releases data showing that the US adult e-book market declined 16.9% in the first nine months of 2016 over the same period in 2015, and Nielsen Book determines that the e-book market had an overall total decline of 16% in 2016 over 2015, including all age groups. This decline is partly due to widespread e-book price increases by major publishers, which has increased the average e-book price from $6 to almost $10. February – The US version of Kindle Unlimited comprises more than 1.5 million titles, including over 290,000 foreign language titles. March – The Guardian reports that sales of physical books are outperforming digital titles in the UK, since it can be cheaper to buy the physical version of a book when compared to the digital version due to Amazon's deal with publishers that allows agency pricing. 
April – The Los Angeles Times reports that, in 2016, sales of hardcover books were higher than e-books for the first time in five years. October – Amazon releases the Oasis 2, the first Kindle to be IPX8 rated, meaning that it is water resistant up to 2 meters for up to 60 minutes; it is also the first Kindle to enable white text on a black background, a feature that may be helpful for nighttime reading.
2018
January – U.S. public libraries report record-breaking borrowing of OverDrive e-books over the course of the year, with more than 274 million e-books loaned to card holders, a 22% increase over the 2017 figure. October – The EU allowed its member countries to charge the same VAT for ebooks as for paper books.
2019
May – Barnes & Noble releases the GlowLight Plus e-reader, the largest Nook e-reader to date with a 7.8-inch E Ink screen.
Formats
Writers and publishers have many formats to choose from when publishing e-books. Each format has advantages and disadvantages. The most popular e-readers and their natively supported formats are shown below:
Digital rights management
Most e-book publishers do not warn their customers about the possible implications of the digital rights management tied to their products. Generally, they claim that digital rights management is meant to prevent illegal copying of the e-book. However, in many cases, it is also possible that digital rights management will result in the complete denial of access by the purchaser to the e-book. The e-books sold by most major publishers and electronic retailers, which are Amazon.com, Google, Barnes & Noble, Kobo Inc. and Apple Inc., are DRM-protected and tied to the publisher's e-reader software or hardware. The first major publisher to omit DRM was Tor Books, one of the largest publishers of science fiction and fantasy, in 2012. Smaller e-book publishers such as O'Reilly Media, Carina Press and Baen Books had already forgone DRM previously.
Production
Some e-books are produced simultaneously with the production of a printed format, as described in electronic publishing, though in many instances they may not be put on sale until later. Often, e-books are produced from pre-existing hard-copy books, generally by document scanning, sometimes with the use of robotic book scanners that have the technology to quickly scan books without damaging the original print edition. Scanning a book produces a set of image files, which may additionally be converted into text format by an OCR program. Occasionally, as in some projects, an e-book may be produced by re-entering the text from a keyboard. Sometimes only the electronic version of a book is produced by the publisher. It is possible to release an e-book chapter by chapter as each chapter is written. This is useful in fields such as information technology where topics can change quickly in the months that it takes to write a typical book. It is also possible to convert an electronic book to a printed book by print on demand. However, these are exceptions, as tradition dictates that a book be launched in print format first, with an electronic version produced later if the author wishes. The New York Times keeps a list of best-selling e-books, for both fiction and non-fiction.
Reading data
All of the e-readers and reading apps are capable of tracking e-book reading data; the data can include which e-books users open, how long users spend reading each e-book, and how much of each e-book is finished.
In December 2014, Kobo released e-book reading data collected from over 21 million of its users worldwide. Some of the results were that only 44.4% of UK readers finished the bestselling e-book The Goldfinch and the 2014 top selling e-book in the UK, "One Cold Night", was finished by 69% of readers. This is evidence that while popular e-books are being completely read, some e-books are only sampled. Comparison to printed books Advantages In the space that a comparably sized physical book takes up, an e-reader can contain thousands of e-books, limited only by its memory capacity. Depending on the device, an e-book may be readable in low light or even total darkness. Many e-readers have a built-in light source, can enlarge or change fonts, use text-to-speech software to read the text aloud for visually impaired, elderly or dyslexic people or just for convenience. Additionally, e-readers allow readers to look up words or find more information about the topic immediately using an online dictionary. Amazon reports that 85% of its e-book readers look up a word while reading. A 2017 study found that even when accounting for the emissions created in manufacturing the e-reader device, substituting more than 4.7 print books a year resulted in less greenhouse gas emissions than print. While an e-reader costs more than most individual books, e-books may have a lower cost than paper books. E-books may be made available for less than the price of traditional books using on-demand book printers. Moreover, numerous e-books are available online free of charge on sites such as Project Gutenberg. For example, all books printed before 1928 are in the public domain in the United States, which enables websites to host ebook versions of such titles for free. Depending on possible digital rights management, e-books (unlike physical books) can be backed up and recovered in the case of loss or damage to the device on which they are stored, a new copy can be downloaded without incurring an additional cost from the distributor. Readers can synchronize their reading location, highlights and bookmarks across several devices. Disadvantages There may be a lack of privacy for the user's e-book reading activities. For example, Amazon knows the user's identity, what the user is reading, whether the user has finished the book, what page the user is on, how long the user has spent on each page, and which passages the user may have highlighted. One obstacle to wide adoption of the e-book is that a large portion of people value the printed book as an object itself, including aspects such as the texture, smell, weight and appearance on the shelf. Print books are also considered valuable cultural items, and symbols of liberal education and the humanities. Kobo found that 60% of e-books that are purchased from their e-book store are never opened and found that the more expensive the book is, the more likely the reader would at least open the e-book. Joe Queenan has written about the pros and cons of e-books: Apart from all the emotional and habitual aspects, there are also some readability and usability issues that need to be addressed by publishers and software developers. Many e-book readers who complain about eyestrain, lack of overview and distractions could be helped if they could use a more suitable device or a more user-friendly reading application, but when they buy or borrow a DRM-protected e-book, they often have to read the book on the default device or application, even if it has insufficient functionality. 
While a paper book is vulnerable to various threats, including water damage, mold and theft, e-books files may be corrupted, deleted or otherwise lost as well as pirated. Where the ownership of a paper book is fairly straightforward (albeit subject to restrictions on renting or copying pages, depending on the book), the purchaser of an e-book's digital file has conditional access with the possible loss of access to the e-book due to digital rights management provisions, copyright issues, the provider's business failing or possibly if the user's credit card expired. Market share United States According to the Association of American Publishers 2018 annual report, ebooks accounted for 12.4% of the total trade revenue. Publishers of books in all formats made $22.6 billion in print form and $2.04 billion in e-books, according to the Association of American Publishers' annual report 2019. Canada Spain In 2013, Carrenho estimates that e-books would have a 15% market share in Spain in 2015. UK According to Nielsen Book Research, e-book share went up from 20% to 33% between 2012 and 2014, but down to 29% in the first quarter of 2015. Amazon-published and self-published titles accounted for 17 million of those books (worth £58m) in 2014, representing 5% of the overall book market and 15% of the digital market. The volume and value sales, although similar to 2013, had seen a 70% increase since 2012. Germany The Wischenbart Report 2015 estimates the e-book market share to be 4.3%. Brazil The Brazilian e-book market is only emerging. Brazilians are technology savvy, and that attitude is shared by the government. In 2013, around 2.5% of all trade titles sold were in digital format. This was a 400% growth over 2012 when only 0.5% of trade titles were digital. In 2014, the growth was slower, and Brazil had 3.5% of its trade titles being sold as e-books. China The Wischenbart Report 2015 estimates the e-book market share to be around 1%. Public domain books Public domain books are those whose copyrights have expired, meaning they can be copied, edited, and sold freely without restrictions. Many of these books can be downloaded for free from websites like the Internet Archive, in formats that many e-readers support, such as PDF, TXT, and EPUB. Books in other formats may be converted to an e-reader-compatible format using e-book writing software, for example Calibre. vBook A vBook is an eBook that is digital first media with embedded video, images, graphs, tables, text, and other useful media.
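As a concrete illustration of the format-conversion workflow mentioned above, the sketch below shells out to Calibre's ebook-convert command-line tool from Python. It assumes Calibre is installed and on the PATH; the file names are placeholders, and options beyond the basic input/output pair are deliberately omitted since they vary between Calibre versions.

```python
import shutil
import subprocess
from pathlib import Path

def convert_ebook(src: Path, dst: Path) -> None:
    """Convert an e-book from one format to another using Calibre's
    ebook-convert tool (the output format is inferred from dst's extension)."""
    if shutil.which("ebook-convert") is None:
        raise RuntimeError("Calibre's ebook-convert was not found on PATH")
    subprocess.run(["ebook-convert", str(src), str(dst)], check=True)

if __name__ == "__main__":
    # Placeholder file names; any public-domain EPUB would do as input.
    convert_ebook(Path("pride_and_prejudice.epub"),
                  Path("pride_and_prejudice.mobi"))
```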
Technology
Printing
null
23921051
https://en.wikipedia.org/wiki/Kilometres%20per%20hour
Kilometres per hour
The kilometre per hour (SI symbol: km/h; non-SI abbreviations: kph, km/hr) is a unit of speed, expressing the number of kilometres travelled in one hour. History Although the metre was formally defined in 1799, the term "kilometres per hour" did not come into immediate use – the myriametre (10,000 metres) and myriametre per hour were preferred to kilometres and kilometres per hour. In 1802 the term "myriamètres par heure" appeared in French literature. The Dutch on the other hand adopted the kilometre in 1817 but gave it the local name of the mijl (Dutch mile). Notation history The SI representations, classified as symbols, are "km/h", "km h⁻¹" and "km·h⁻¹". Several other abbreviations of "kilometres per hour" have been used since the term was introduced and many are still in use today; for example, dictionaries list "kph", "kmph" and "km/hr" as English abbreviations. While these forms remain widely used, the International Bureau of Weights and Measures uses "km/h" in describing the definition and use of the International System of Units. The entries for "kph" and "kmph" in the Oxford Advanced Learner's Dictionary state that "the correct scientific unit is km/h and this is the generally preferred form". Abbreviations Abbreviations for "kilometres per hour" did not appear in the English language until the late nineteenth century. The kilometre, a unit of length, first appeared in English in 1810, and the compound unit of speed "kilometers per hour" was in use in the US by 1866. "Kilometres per hour" did not begin to be abbreviated in print until many years later, with several different abbreviations existing near-contemporaneously. With no central authority to dictate the rules for abbreviations, various publishing houses and standards bodies have their own rules that dictate whether to use upper-case letters, lower-case letters, periods and so on, reflecting both changes in fashion and the image of the publishing house concerned. In contrast to the "symbols" designated for use with the SI system, news organisations such as Reuters and The Economist require "kph". In informal Australian usage, km/h is more commonly pronounced "kays" or "kays an hour". In military usage, "klicks" is used, though written as km/h. Unit symbols In 1879, four years after the signing of the Treaty of the Metre, the International Committee for Weights and Measures (CIPM) proposed a range of symbols for the various metric units then under the auspices of the General Conference on Weights and Measures (CGPM). Among these was the use of the symbol "km" for "kilometre". In 1948, as part of its preparatory work for the SI, the CGPM adopted symbols for many units of measure that did not have universally agreed symbols, one of which was the symbol "h" for "hours". At the same time the CGPM formalised the rules for combining units: quotients could be written in one of three formats, resulting in "km/h", "km h⁻¹" and "km·h⁻¹" being valid representations of "kilometres per hour". The SI standards, which were MKS-based rather than CGS-based, were published in 1960 and have since been adopted by many authorities around the globe, including academic publishers and legal authorities. The SI explicitly states that unit symbols are not abbreviations and are to be written using a very specific set of rules. M. Danloux-Dumesnils provides a justification for this distinction. SI, and hence the use of "km/h" (or "km·h⁻¹" or "km h⁻¹"), has now been adopted around the world in many areas related to health and safety and in metrology, in addition to the SI unit of speed, metres per second ("m/s", "m·s⁻¹" or "m s⁻¹").
SI is also the preferred system of measure in academia and in education. Non-SI abbreviations in official use km/j or km/jam (Indonesia and Malaysia) km/t or km/tim (Norway, Denmark and Sweden; also use km/h) kmph (Sri Lanka and India) กม./ชม. (Thailand; also uses km/hr) كم/س or كم/ساعة (Arabic-speaking countries, also use km/h) קמ"ש (Israel) км/ч (Russia and Belarus in a Russian-language context) км/г (Belarus in a Belarusian-language context) км/год (Ukraine) km/st (Azerbaijan) km/godz (Poland) Regulatory use During the early years of the motor car, each country developed its own system of road signs. In 1968 the Vienna Convention on Road Signs and Signals was drawn up under the auspices of the United Nations Economic and Social Council to harmonise road signs across the world. Many countries have since signed the convention and adopted its proposals. Speed limit signs in those countries are either directly authorised by the convention or have been influenced by it. In 1972 the EU published a directive (overhauled in 1979 to take British and Irish interests into account) that required member states to abandon CGS-based units in favour of SI. The use of SI implicitly required that member states use "km/h" as the shorthand for "kilometres per hour" on official documents. Another EU directive, published in 1975, regulates the layout of speedometers within the European Union, and requires the text "km/h" in all languages, even where that is not the natural abbreviation for the local version of "kilometres per hour". Examples include Dutch ("hour" is "uur", which does not start with "h"), Portuguese ("kilometre" is "quilómetro", which does not start with "k"), Irish, and Greek (which uses a different script). In 1988 the United States National Highway Traffic Safety Administration promulgated a rule stating that "MPH and/or km/h" were to be used in speedometer displays. On May 15, 2000, this was clarified to read "MPH, or MPH and km/h". However, the Federal Motor Vehicle Safety Standard number 101 ("Controls and Displays") allows "any combination of upper- and lowercase letters" to represent the units. Conversions 1 km/h ≡ 1000/3600 m/s ≈ 0.2778 m/s, expressed in the SI unit of speed, the metre per second; 1 km/h ≈ 0.6214 mph; 1 km/h ≈ 0.9113 ft/s; 1 km/h ≈ 0.5400 knots; 1 m/s ≡ 3.6 km/h (exactly); 1 mph ≡ 1.609344 km/h (exactly); 1 knot ≡ 1.852 km/h (exactly).
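The conversion factors above are all fixed by definition, so they can be applied mechanically. The following short sketch (Python is used here purely for illustration; the function names are this example's own, not part of any standard) encodes the exact factors 1 km/h = 1000/3600 m/s, 1 mph = 1.609344 km/h and 1 knot = 1.852 km/h:

# Exact conversion factors involving the kilometre per hour (by definition):
#   1 km/h  = 1000 m / 3600 s
#   1 mph   = 1.609344 km/h  (international mile = 1.609344 km)
#   1 knot  = 1.852 km/h     (nautical mile = 1.852 km)
KMH_PER_MPH = 1.609344
KMH_PER_KNOT = 1.852

def kmh_to_ms(kmh):
    # kilometres per hour -> metres per second (exact factor 1000/3600)
    return kmh * 1000.0 / 3600.0

def kmh_to_mph(kmh):
    # kilometres per hour -> miles per hour
    return kmh / KMH_PER_MPH

def kmh_to_knots(kmh):
    # kilometres per hour -> knots
    return kmh / KMH_PER_KNOT

print(round(kmh_to_ms(1.0), 4))     # 0.2778 m/s
print(round(kmh_to_mph(100.0), 2))  # 62.14 mph
print(round(kmh_to_knots(1.0), 4))  # 0.54 knots

Running the three print statements reproduces, to rounding, the values listed in the conversions section.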
Physical sciences
Speed
Basics and measurement
23923686
https://en.wikipedia.org/wiki/Mudcrack
Mudcrack
Mudcracks (also known as mud cracks, desiccation cracks or cracked mud) are sedimentary structures formed as muddy sediment dries and contracts. Crack formation also occurs in clay-bearing soils as a result of a reduction in water content. Formation of mudcrack Naturally forming mudcracks start as wet, muddy sediment dries up and contracts. A strain develops because the top layer shrinks while the material below stays the same size. When this strain becomes large enough, channel cracks form in the dried-up surface to relieve the strain. Individual cracks spread and join up, forming a polygonal, interconnected network of forms called "tessellations". If the strain continues to build, the polygons start to curl upwards. This characteristic can be used in geology to determine the original orientation of a rock. Cracks may later be filled with sediment and form casts over the base. Typically, the initial crack pattern is dominated by T-shaped junctions. If a mudfield is repeatedly wetted and dried, it can be annealed to a pattern dominated by Y-shaped junctions, as this is thermodynamically favored, as with columnar jointing and polygonal patterned ground. Syneresis cracks are broadly similar features that form from underwater shrinkage of muddy sediment caused by differences in salinity or chemical conditions, rather than subaerial exposure and desiccation. Syneresis cracks can be distinguished from mudcracks because they tend to be discontinuous, sinuous, and trilete or spindle-shaped. Morphology and classification of mudcrack Mudcracks are generally polygonal when seen from above and v-shaped in cross section. The "v" opens towards the top of the bed and the crack tapers downward. Allen (1982) proposed a classification scheme for mudcracks based on their completeness, orientation, shape, and type of infill. Completeness of mudcrack Complete mudcracks form an interconnected tessellating network. The connection of cracks often occurs when individual cracks join together, forming a larger continuous crack. Incomplete mudcracks are not connected to each other but still form in the same region or location as the other cracks. Plan-view geometry Orthogonal intersections can have a preferred orientation or may be random. In oriented orthogonal cracks, the cracks are usually complete and bond to one another, forming irregular polygonal shapes and often rows of irregular polygons. In random orthogonal cracks, the cracks are incomplete and unoriented, and therefore do not connect or make any general shapes. Although they do not make general shapes, they are not perfectly geometric. Non-orthogonal mudcracks have a geometric pattern. Incomplete non-orthogonal cracks form as a single three-point star shape composed of three cracks. They can also form with more than three cracks, but three cracks is commonly considered the minimum. Complete non-orthogonal cracks form a very geometric pattern that resembles small polygon-shaped tiles repeated across the surface. Mud curls Mud curls form during one of the final stages in desiccation. Mud curls commonly occur on the exposed top layer of very thinly bedded mud rocks. When mud curls form, the water that is inside the sediment begins to evaporate, causing the stratified layers to separate. The individual top layer is much weaker than multiple layers and is therefore able to contract and form curls as desiccation occurs. If transported by later currents, mud curls may be preserved as mud-chip rip-up clasts.
Environments and substrates Naturally occurring mudcracks form in sediment that was once saturated with water. Abandoned river channels, floodplain muds, and dried ponds are localities that form mudcracks. Mudcracks can also be indicative of a predominantly sunny or shady environment of formation. Rapid drying, which occurs in sunny environments, results in widely spaced, irregular mudcracks, while more closely spaced, more regular mudcracks indicate that they were formed in a shady place. Similar features also occur in frozen ground, lava flows (as columnar basalt), and igneous dykes and sills. In technology Polygonal crack networks similar to mudcracks can form in human-made materials such as ceramic glazes, paint film, and poorly made concrete. Mudcrack patterning at smaller scales can also be observed and studied in thin films deposited using micro- and nanotechnologies. Preservation of mudcrack Mudcracks can be preserved as v-shaped cracks on the top of a bed of muddy sediment or as casts on the base of the overlying bed. When they are preserved on the top of a bed, the cracks look as they did at the time of formation. When they are preserved on the bottom of the overlying bed, the cracks are filled in with younger, overlying sediment. In most bottom-of-bed examples, the cracks are the part that sticks out most. Bottom-of-bed preservation occurs when mudcracks that have already formed and are completely dried are covered with fresh, wet sediment and are buried. Through burial and pressure, the new wet sediment is pushed further into the cracks, where it dries and hardens. The mudcracked rock is then later exposed to erosion. In these cases, the original mud cracks will erode faster than the newer material that fills the spaces. This type of mudcrack is used by geologists to determine the vertical orientation of rock samples that have been altered through folding or faulting. Gallery
Physical sciences
Sedimentology
Earth science
23924014
https://en.wikipedia.org/wiki/Coral%20reef%20fish
Coral reef fish
Coral reef fish are fish which live amongst or in close relation to coral reefs. Coral reefs form complex ecosystems with tremendous biodiversity. Among the myriad inhabitants, the fish stand out as colourful and interesting to watch. Hundreds of species can exist in a small area of a healthy reef, many of them hidden or well camouflaged. Reef fish have developed many ingenious specialisations adapted to survival on the reefs. Coral reefs occupy less than 1% of the surface area of the world oceans, but provide a home for 25% of all marine fish species. Reef habitats are a sharp contrast to the open water habitats that make up the other 99% of the world oceans. However, loss and degradation of coral reef habitat, increasing pollution, and overfishing including the use of destructive fishing practices, are threatening the survival of the coral reefs and the associated reef fish. Overview Coral reefs are the result of millions of years of coevolution among algae, invertebrates and fish. They have become crowded and complex environments, and the fish have evolved many ingenious ways of surviving. Most fishes found on coral reefs are ray-finned fishes, known for the characteristic sharp, bony rays and spines in their fins. These spines provide formidable defences, and when erected they can usually be locked in place or are venomous. Many reef fish have also evolved cryptic coloration to confuse predators. Reef fish have also evolved complex adaptive behaviours. Small reef fish get protection from predators by hiding in reef crevices or by shoaling and schooling. Many reef fish confine themselves to one small neighbourhood where every hiding place is known and can be immediately accessed. Others cruise the reefs for food in shoals, but return to a known area to hide when they are inactive. Resting small fish are still vulnerable to attack by crevice predators, so many fish, such as triggerfish, squeeze into a small hiding place and wedge themselves by erecting their spines. As an example of the adaptations made by reef fish, the yellow tang is a herbivore which feeds on benthic turf algae. They also provide cleaner services to marine turtles, by removing algal growth from their shells. They do not tolerate other fish with the same colour or shape. When alarmed, the usually placid yellow tang can erect spines in its tail and slash at its opponent with rapid sideways movements. Diversity and distribution Coral reefs contain the most diverse fish assemblages to be found anywhere on earth, with perhaps as many as 6,000–8,000 species dwelling within coral reef ecosystems of the world's oceans. The mechanisms that first led to, and continue to maintain, such concentrations of fish species on coral reefs has been widely debated over the last 50 years. While many reasons have been proposed, there is no general scientific consensus on which of these is the most influential, but it seems likely that a number of factors contribute. These include the rich habitat complexity and diversity inherent in coral reef ecosystems, the wide variety and temporal availability of food resources available to coral reef fishes, a host of pre and post-larval settlement processes, and as yet unresolved interactions between all these factors. The wealth of fishes on reefs is filled by tiny, bottom-dwelling reef fishes. 
There are two major regions of coral reef development recognized; the Indo-Pacific (which includes the Pacific and Indian Oceans as well as the Red Sea), and the tropical western Atlantic (also known as the "wider" or "greater" Caribbean). Each of these two regions contains its own unique coral reef fish fauna with no natural overlap in species. Of the two regions, the richest by far in terms of reef fish diversity is the Indo-Pacific where there are an estimated 4,000–5,000 species of fishes associated with coral reef habitats. Another 500–700 species can be found in the greater Caribbean region. Reef fish adaptations Body shape Most reef fishes have body shapes that are different from open water fishes. Open water fish are usually built for speed in the open sea, streamlined like torpedoes to minimise friction as they move through the water. Reef fish are operating in the relatively confined spaces and complex underwater landscapes of coral reefs. For this manoeuvrability is more important than straight line speed, so coral reef fish have developed bodies which optimize their ability to dart and change direction. They outwit predators by dodging into fissures in the reef or playing hide and seek around coral heads. Many reef fish, such as butterflyfish and angelfishes, have evolved bodies which are deep and laterally compressed like a pancake. Their pelvic and pectoral fins are designed differently, so they act together with the flattened body to optimise manoeuvrability. Colouration Coral reef fishes exhibit a huge variety of dazzling and sometimes bizarre colours and patterns. This is in marked contrasts to open water fishes which are usually countershaded with silvery colours. The patterns have different functions. Sometimes they camouflage the fish when the fish rests in places with the right background. Colouration can also be used to help species recognition during mating. Some unmistakable contrasting patterns are used to warn predators that the fish has venomous spines or poisonous flesh. The foureye butterflyfish gets its name from a large dark spot on the rear portion of each side of the body. This spot is surrounded by a brilliant white ring, resembling an eyespot. A black vertical bar on the head runs through the true eye, making it hard to see. This can result in a predator thinking the fish is bigger than it is, and confusing the back end with the front end. The butterflyfish's first instinct when threatened is to flee, putting the false eyespot closer to the predator than the head. Most predators aim for the eyes, and this false eyespot tricks the predator into believing that the fish will flee tail first. When escape is not possible, the butterflyfish will sometimes turn to face its aggressor, head lowered and spines fully erect, like a bull about to charge. This may serve to intimidate the other animal or may remind the predator that the butterflyfish is too spiny to make a comfortable meal. The psychedelic Synchiropus splendidus (right) is not easily seen due to its bottom-feeding habit and its small size, reaching only about 6 cm. It feeds primarily on small crustaceans and other invertebrates, and is popular in the aquarium trade. Just as some prey species evolved cryptic colouration and patterns to help avoid predators, some ambush predators evolved camouflage that lets them ambush their prey. The tasseled scorpionfish is an ambush predator that looks like part of a sea floor encrusted with coral and algae. 
It lies in wait on the sea floor for crustaceans and small fish, such as gobies, to pass by. Another ambush predator is the striated frogfish (right). They lie on the bottom and wave a conspicuous worm-like lure strategically attached above their mouth. Normally about 10 cm (4 in) long, they can also inflate themselves like puffers. Gobies avoid predators by tucking themselves into coral crevices or partly burying themselves in sand. They continually scan for predators with eyes that swivel independently. The camouflage of the tasseled scorpionfish can prevent gobies from seeing them until it's too late. The clown triggerfish has strong jaws for crushing and eating sea urchins, crustaceans and hard-shell molluscs. Its ventral (lower) surface has large, white spots on a dark background, and its dorsal (upper) surface has black spots on yellow. This is a form of countershading: from below, the white spots look like the lighted surface of the water above; and from above, the fish blends more with the coral reef below. The brightly painted yellow mouth may deter potential predators. Feeding strategies Many reef fish species have evolved different feeding strategies accompanied by specialized mouths, jaws and teeth particularly suited to deal with their primary food sources found in coral reef ecosystems. Some species even shift their dietary habits and distributions as they mature. This is not surprising, given the huge variety in the types of prey on offer around coral reefs. For example, the primary food source of butterflyfishes are the coral polyps themselves or the appendages of polychaetes and other small invertebrate animals. Their mouths protrude like forceps, and are equipped with fine teeth that allow them to nip off such exposed body parts of their prey. Parrotfishes eat algae growing on reef surfaces, utilizing mouths like beaks well adapted to scrape off their food. Other fish, like snapper, are generalized feeders with more standard jaw and mouth structures that allow them to forage on a wide range of animal prey types, including small fishes and invertebrates. Generalized carnivores Carnivores are the most diverse of feeding types among coral reef fishes. There are many more carnivore species on the reefs than herbivores. Competition among carnivores is intense, resulting in a treacherous environment for their prey. Hungry predators lurk in ambush or patrol every part of the reef, night and day. Some fishes associated with reefs are generalized carnivores that feed on a variety of animal prey. These typically have large mouths that can be rapidly expanded, thereby drawing in nearby water and any unfortunate animals contained within the inhaled water mass. The water is then expelled through the gills with the mouth closed, thereby trapping the helpless prey For example, the bluestripe snapper has a varied diet, feeding on fishes, shrimps, crabs, stomatopods, cephalopods and planktonic crustaceans, as well as plant and algae material. Diet varies with age, location and the prevalent prey items locally. Goatfish are tireless benthic feeders, using a pair of long chemosensory barbels (whiskers) protruding from their chins to rifle through the sediments in search of a meal. Like goats, they seek anything edible: worms, crustaceans, molluscs and other small invertebrates are staples. The yellowfin goatfish (Mulloidichthys vanicolensis) often schools with the blue-striped snapper. The yellowfins change their colouration to match that of the snapper. 
Presumably this is for predator protection, since goatfish are a more preferred prey than bluestripe snapper. By night the schools disperse and individual goatfish head their separate ways to loot the sands. Other nocturnal feeders shadow the active goatfish, waiting patiently for overlooked morsels. Moray eels and coral groupers (Plectropomus pessuliferus) are known to cooperate with each other when hunting. Grouper are protogynous hermaphrodites, who school in harems that can vary greatly in size according to the population size and reef habitat. When no male is available, in each school the largest female shifts sex to male. If the final male disappears, changes to the largest female occur, with male behavior occurring within several hours and sperm production occurring within ten days. Specialised carnivores Large schools of forage fish, such as surgeonfish and cardinalfish, move around the reef feeding on tiny zooplankton. The forage fish are, in turn, eaten by larger fish, such as the bigeye trevally. Fish receive many benefits from schooling behaviour, including defence against predators through better predator detection, since each fish is on the lookout. Schooling fish have developed remarkable displays of precise choreography which confuse and evade predators. For this they have evolved special pressure sensors along their sides, called lateral lines, that let them feel each other's movements and stay synchronized. Bigeye trevally also form schools. They are swift predators who patrol the reef in hunting packs. When they find a school of forage fish, such as cardinalfish, they surround them and herd them close to the reef. This panics the prey fish, and their schooling becomes chaotic, leaving them open to attack by the trevally. The titan triggerfish can move relatively large rocks when feeding and is often followed by smaller fishes that feed on leftovers. They also use a jet of water to uncover sand dollars buried in sand. Barracuda are ferocious predators on other fishes, with razor-sharp conical teeth which make it easy for them to rip their prey to shreds. Barracuda patrol the outer reef in large schools, and are extremely fast swimmers with streamlined, torpedo-shaped bodies. Porcupinefish are medium to large sized, and are usually found swimming among or near coral reefs. They inflate their body by swallowing water, reducing potential predators to those with much bigger mouths. Fish can not groom themselves. Some fish specialise as cleaner fish, and establish cleaning stations where other fish can come to have their parasites nibbled away. The "resident fish doctor and dentist on the reef is the bluestreak cleaner wrasse". The bluestreak is marked with a conspicuous bright blue stripe and behaves in a stereotypical way which attracts larger fish to its cleaning station. As the bluestreak snacks on the parasites it gently tickles its client. This seems to bring the larger fish back again for regular servicing. The reef lizardfish secretes a mucus coating which reduces drag when they swim and also protects it from some parasites. But other parasites find the mucus itself good to eat. So lizardfish visit the cleaner wrasse, which clean the parasites from the skin, gills and mouth. Herbivores Herbivores feed on plants. The four largest groups of coral reef fishes that feed on plants are the parrotfishes, damselfishes, rabbitfishes, and surgeonfishes. All feed primarily on microscopic and macroscopic algae growing on or near coral reefs. 
Algae can drape reefs in kaleidoscopes of colours and shapes. Algae are primary producers, which means they are plants synthesising food directly from solar energy and carbon dioxide and other simple nutrient molecules. Without algae, everything on the reef would die. One important algal group, the bottom dwelling (benthic) algae, grows over dead coral and other inert surfaces, and provides grazing fields for herbivores such as parrotfish. Parrotfish are named for their parrot-like beaks and bright colours. They are large herbivores that graze on the algae that grows on hard dead corals. Equipped with two pairs of crushing jaws and their beaks, they pulverize chunks of algae-coated coral, digesting the algae and excreting the coral as fine sand. Smaller parrotfish are relatively defenceless herbivores, poorly defended against predators like barracuda. They have evolved to find protection by schooling, sometimes with other species like shoaling rabbitfish. Spinefoot rabbitfish are named for their defensive venomous spines, and they are seldom attacked by predators. Spines are a last-ditch defence. It is better to avoid predator detection in the first place, and avoid being thrust into risky spine-to-fang battles. So rabbitfish have also evolved skilful colour changing abilities. Damselfish are a group of species that feed on zooplankton and algae, and are an important reef forage fish for larger predators. They are small, typically five centimetres (two inches) long. Many species are aggressive towards other fishes which also graze on algae, such as surgeonfish. Surgeonfish sometimes use schooling as a countermeasure to defensive attacks by solitary damselfish. Symbiosis Symbiosis refers to two species that have a close relationship with each other. The relationship can be mutualistic, when both species benefit from the relationship, commensalistic, when one species benefits and the other is unaffected, and parasitistic, when one species benefits, and the other is harmed. An example of commensalism occurs between the hawkfish and fire coral. Thanks to their large, skinless pectoral fins, hawkfish can perch on fire corals without harm. Fire corals are not true corals, but are hydrozoans possessing stinging cells called nematocysts which would normally prevent close contact. The protection fire corals offer hawkfish means the hawkfish has the high ground of the reef, and can safely survey its surroundings like a hawk. Hawkfish usually stay motionless, but dart out and grab crustaceans and other small invertebrates as they pass by. They are mostly solitary, although some species form pairs and share a head of coral. A more bizarre example of commensalism occurs between the slim, eel-shaped pinhead pearlfish and a particular species of sea cucumber. The pearlfish enters the sea cucumber through its anus, and spends the day safely protected inside the sea cucumber's alimentary tract. At night it emerges the same way and feeds on small crustaceans. Sea anemones are common on reefs. The tentacles of sea anemones bristle with tiny harpoons (nematocysts) primed with toxins, and are an effective deterrent against most predators. However, saddle butterflyfish, which are up to 30 cm (12 in) long, have developed a resistance to these toxins. Saddle butterflyfish usually flutter gently rather than swim. However, in the presence of their preferred food, sea anemones, this gentleness disappears, and the butterflyfish dash in and out, ripping off the anemone tentacles. 
There is a mutualistic relationship between sea anemones and clownfish. This gives the sea anemones a second line of defence. They are guarded by fiercely territorial clownfish, who are also immune to the anemone toxins. To get their meal, butterflyfish must get past these protective clownfish who, although smaller, are not intimidated. An anemone without its clownfish will quickly be eaten by butterflyfish. In return, the anemones provide the clownfish protection from their predators, who are not immune to anemone stings. As a further benefit to the anemone, waste ammonia from the clownfish feeds symbiotic algae found in the anemone's tentacles. As with all fish, coral reef fish harbour parasites. Since coral reef fish are characterized by high biodiversity, parasites of coral reef fish show tremendous variety. Parasites of coral reef fish include nematodes, Platyhelminthes (cestodes, digeneans, and monogeneans), leeches, parasitic crustaceans such as isopods and copepods, and various microorganisms such as myxosporidia and microsporidia. Some of these fish parasites have heteroxenous life cycles (i.e. they have several hosts), among which are sharks (for certain cestodes) or molluscs (for digeneans). The high biodiversity of coral reefs increases the complexity of the interactions between parasites and their various and numerous hosts. Numerical estimates of parasite biodiversity have shown that certain coral fish species have up to 30 species of parasites. The mean number of parasites per fish species is about ten. This has consequences in terms of co-extinction. Results obtained for the coral reef fish of New Caledonia suggest that extinction of a coral reef fish species of average size would eventually result in the co-extinction of at least ten species of parasites. Toxicity Many reef fish are toxic. Toxic fish are fish which contain strong toxins in their bodies. There is a distinction between poisonous fish and venomous fish. Both types of fish contain strong toxins, but the difference is in the way the toxin is delivered. Venomous fish deliver their toxins (called venom) by biting, stinging, or stabbing, causing an envenomation. Venomous fish do not necessarily cause poisoning if they are eaten, since the venom is often destroyed in the digestive system. By contrast, poisonous fish contain strong toxins which are not destroyed by the digestive system. This makes them poisonous to eat. Venomous fish carry their venom in venom glands and use various delivery systems, such as spines or sharp fins, barbs or spikes, and fangs. Venomous fish tend to be either very visible, using flamboyant colours to warn enemies, or skilfully camouflaged and perhaps buried in the sand. Apart from its value in defence or hunting, venom might also help bottom-dwelling fish by killing the bacteria that try to invade their skin. Few of these venoms have been studied. They are an as-yet untapped resource for bioprospecting to find drugs with medical uses. The most venomous known fish is the reef stonefish. It has a remarkable ability to camouflage itself amongst rocks. It is an ambush predator that sits on the bottom waiting for prey to come close. It does not swim away if disturbed, but erects 13 venomous spines along its back. For defence, it can shoot venom from each or all of these spines. Each spine is like a hypodermic needle, delivering the venom from two sacs attached to the spine. The stonefish has control over whether to shoot its venom, and does so when provoked or frightened.
The venom results in severe pain, paralysis and tissue death, and can be fatal if not treated. Despite its formidable defence, the stonefish does have predators. Some bottom feeding rays and sharks with crushing teeth feed on them, as does the Stokes' seasnake Unlike the stonefish which can shoot venom, the lionfish can only release venom when something strikes its spines. Although not native to the US coast, lionfish have appeared around Florida and have spread up the coast to New York. They are attractive aquarium fish, sometimes used to stock ponds, and may have been washed into the sea during a hurricane. Lionfish can aggressively dart at scuba divers and attempt to puncture the facemask with their venomous spines. The spotted trunkfish is a reef fish which secretes a colourless ciguatera toxin from glands on its skin when touched. The toxin is only dangerous when ingested, so there's no immediate harm to divers. However, predators as large as nurse sharks can die as a result of eating a trunkfish. Ciguatera toxins appear to accumulate in top predators of coral reefs. Many of the Caribbean groupers and the barracuda for example may contain enough of this toxin to cause severe symptoms in humans who eat them. What makes the situation particularly dangerous is that such species may be toxic only at certain sizes or locations, making it difficult to know whether or when they are or are not safe to eat. In some locations this leads to many cases of ciguatera poisoning among tropical islanders. The stargazer buries itself in sand and can deliver electric shocks as well as venom. It is a delicacy in some cultures (the venom is destroyed when it is cooked), and can be found for sale in some fish markets with the electric organ removed. They have been called "the meanest things in creation". The giant moray is a reef fish at the top of the food chain. Like many other apex reef fish, it is likely to cause ciguatera poisoning if eaten. Outbreaks of ciguatera poisoning in the 11th to 15th centuries from large, carnivorous reef fish, caused by harmful algal blooms, could be a reason why Polynesians migrated to Easter Island, New Zealand, and possibly Hawaii. Reef sharks and rays Whitetip, blacktip and grey reef sharks dominate the ecosystems of coral reefs in the Indo-Pacific. Coral reefs in the western Atlantic Ocean are dominated by the Caribbean reef shark. These sharks, all species of requiem shark, all have the robust, streamlined bodies typical of the requiem shark. As fast-swimming, agile predators, they feed primarily on free-swimming bony fishes and cephalopods. Other species of reef sharks include the Galapagos shark, the tawny nurse shark and hammerheads. The whitetip reef shark is a small shark usually less than in length. It is found almost exclusively around coral reefs where it can be encountered around coral heads and ledges with high vertical relief, or over sandy flats, in lagoons, or near drop-offs to deeper water. Whitetips prefer very clear water and rarely swim far from the bottom. They spend most of the daytime resting inside caves. Unlike other requiem sharks, which usually rely on ram ventilation and must constantly swim to breathe, these sharks can pump water over their gills and lie still on the bottom. They have slender, lithe bodies, which allow them to wriggle into crevices and holes and extract prey inaccessible to other reef sharks. On the other hand, they are rather clumsy when attempting to take food suspended in open water. 
Whitetip reef sharks do not frequent very shallow water like the blacktip reef shark, nor the outer reef like the grey reef shark. They generally remain within a highly localized area. An individual shark may use the same cave for months to years. The daytime home range of a whitetip reef shark is limited to about ; at night this range increases to . The whitetip reef shark is highly responsive to olfactory, acoustic, and electrical cues given off by potential prey. Its visual system is attuned more to movement and/or contrast than to object details. It is especially sensitive to natural and artificial low-frequency sounds in the 25–100 Hz range, which evoke struggling fish. Whitetips hunt primarily at night, when many fishes are asleep and easily taken. After dusk, a group of sharks may target the same prey item, covering every exit route from a particular coral head. Each shark hunts for itself and in competition with the others in its group. They feed mainly on bony fishes, including eels, squirrelfishes, snappers, damselfishes, parrotfishes, surgeonfishes, triggerfishes, and goatfishes, as well as octopus, spiny lobsters, and crabs. Important predators of the whitetip reef shark include tiger sharks and Galapagos sharks. The blacktip reef shark is typically about long. It is usually found over reef ledges and sandy flats, though it can also enter brackish and freshwater environments. This species likes shallow water, while the whitetip and the grey reef shark are prefer deeper water. Younger sharks favour shallow sandy flats, and older sharks spend more time around reef ledges and near reef drop-offs. Blacktip reef sharks are strongly attached to their own area, where they may remain for up to several years. A tracking study off Palmyra Atoll in the central Pacific has found that the blacktip reef shark had a home range of about , among the smallest of any shark species. The size and location of the range does not change with time of day. The blacktip reef shark swims alone or in small groups. Large social aggregations have also been observed. They are active predators of small bony fishes, cephalopods, and crustaceans, and also feed on sea snakes and seabirds. Blacktip reef sharks are preyed on by groupers, grey reef sharks, tiger sharks, and members of their own species. At Palmyra Atoll, adult blacktip reef sharks avoid patrolling tiger sharks by staying out of the central, deeper lagoon. Grey reef sharks are usually less than 1.9 metres (6.2 ft) long. Despite their moderate size, grey reef sharks actively expel most other shark species from favored habitats. In areas where this species co-exists with the blacktip reef shark, the latter species occupy the shallow flats while the grey reef sharks stay in deeper water. Many grey reef sharks have a home range on a specific area of the reef, to which they continually return. However, they are social rather than territorial. During the day, these sharks often form groups of 5–20 individuals near coral-reef drop-offs, splitting up in the evening as the sharks begin to hunt. They are found over continental and insular shelves, preferring the leeward (away from the direction of the current) sides of coral reefs with clear water and rugged topography. They are frequently found near the drop-offs at the outer edges of the reef, and less commonly within lagoons. On occasion, this shark may venture several kilometers out into the open ocean. Shark researcher Leonard Compagno comments on the relationship between the three species. 
"[The grey reef shark] ...shows microhabitat separation from the blacktip reef sharks; around islands where both species occur, the blacktip occupies shallow flats, while the grey reef shark is usually found in deeper areas, but where the blacktip is absent, the grey reef shark is commonly found on the flats... [The grey reef shark] complements the whitetip shark as it is far more adapt at catching off-bottom fish than the whitetip, but the later is far more competent in extracting prey from crevices and holes in reefs." The Caribbean reef shark is up to 3 metres (10 ft) long, one of the largest apex predators in the reef ecosystem. Like the whitetip reef shark, they have been documented resting motionless on the sea bottom or inside caves - unusual behaviour for requiem sharks. Caribbean reef sharks play a major role in shaping Caribbean reef communities. They are more active at night, with no evidence of seasonal changes in activity or migration. Juveniles tend to remain in a localized area throughout the year, while adults range over a wider area. The Caribbean reef shark feeds on a wide variety of reef-dwelling bony fishes and cephalopods, as well as some elasmobranchs such as eagle rays and yellow stingrays . Young sharks feed on small fishes, shrimps, and crabs. In turn, young sharks are preyed on by larger sharks such as the tiger shark and the bull shark.
Biology and health sciences
Fishes
null
809539
https://en.wikipedia.org/wiki/Taproot
Taproot
A taproot is a large, central, and dominant root from which other roots sprout laterally. Typically a taproot is somewhat straight and very thick, tapering in shape, and grows directly downward. In some plants, such as the carrot, the taproot is a storage organ so well developed that it has been cultivated as a vegetable. The taproot system contrasts with the adventitious- or fibrous-root system of plants with many branched roots, but many plants that grow a taproot during germination go on to develop branching root structures, although some that rely on the main root for storage may retain the dominant taproot for centuries—for example, Welwitschia. Description Dicots, one of the two divisions of flowering plants (angiosperms), start with a taproot, which is one main root forming from the enlarging radicle of the seed. The taproot can be persistent throughout the life of the plant but is most often replaced later in the plant's development by a fibrous root system. A persistent taproot system forms when the radicle keeps growing and smaller lateral roots form along the taproot. The shape of taproots can vary, but the typical shapes include: Conical root: this type of root is conical in shape, i.e. widest at the top and tapering steadily towards the bottom: e.g. carrot. Fusiform root: this root is widest in the middle and tapers towards the top and the bottom: e.g. radish. Napiform root: the root has a top-like appearance. It is very broad at the top and tapers suddenly like a tail at the bottom: e.g. turnip. Many taproots are modified into storage organs. Some plants with taproots: Beetroot Burdock Carrot Sugar beet Dandelion Parsley Parsnip Poppy mallow Radish Sagebrush Turnip Common milkweed, and trees such as oaks, elms, pines and firs Development Taproots develop from the radicle of a seed, forming the primary root. It branches off to secondary roots, which in turn branch to form tertiary roots. These may further branch to form rootlets. For most plant species the radicle dies some time after seed germination, causing the development of a fibrous root system, which lacks a main downward-growing root. Most trees begin life with a taproot, but after one to a few years the main root system changes to a wide-spreading fibrous root system with mainly horizontal-growing surface roots and only a few vertical, deep-anchoring roots. A typical mature tree 30–50 m tall has a root system that extends horizontally in all directions as far as the tree is tall or more, but as much as 100% of the roots are in the top 50 cm of soil. Soil characteristics strongly influence the architecture of taproots; for example, deep and rich soils favour the development of vertical taproots in many oak species such as Quercus kelloggii, while clay soils promote the growth of multiple taproots. Horticultural considerations Many plants with taproots are difficult to transplant, or even to grow in containers, because the root tends to grow deep rapidly and, in many species, comparatively slight obstacles or damage to the taproot will stunt or kill the plant. Among weeds with taproots, dandelions are typical; being deep-rooted, they are hard to uproot, and if the taproot breaks off near the top, the part that stays in the ground often resprouts, such that, for effective control, the taproot needs to be severed at least several centimetres below ground level. Gallery
Biology and health sciences
Plant anatomy and morphology: General
Biology
810077
https://en.wikipedia.org/wiki/X-ray%20pulsar
X-ray pulsar
X-ray pulsars or accretion-powered pulsars are a class of astronomical objects that are X-ray sources displaying strict periodic variations in X-ray intensity. The X-ray periods range from as little as a fraction of a second to as much as several minutes. Characteristics An X-ray pulsar is a type of binary star system consisting of a typical star (stellar companion) in orbit around a magnetized neutron star. The magnetic field strength at the surface of the neutron star is typically about 10⁸ teslas, over a trillion times stronger than the magnetic field measured at the surface of the Earth (60 μT). Gas is accreted from the stellar companion and is channeled by the neutron star's magnetic field onto the magnetic poles, producing two or more localized X-ray hot spots, similar to the two auroral zones on Earth, but far hotter. At these hotspots the infalling gas can reach half the speed of light before it impacts the neutron star surface. So much gravitational potential energy is released by the infalling gas that the hotspots, which are estimated to be about one square kilometer in area, can be ten thousand times, or more, as luminous as the Sun. Temperatures of millions of degrees are produced, so the hotspots emit mostly X-rays. As the neutron star rotates, pulses of X-rays are observed as the hotspots move in and out of view if the magnetic axis is tilted with respect to the spin axis. Gas supply The gas that supplies the X-ray pulsar can reach the neutron star in a variety of ways that depend on the size and shape of the neutron star's orbital path and the nature of the companion star. Some companion stars of X-ray pulsars are very massive young stars, usually OB supergiants (see stellar classification), that emit a radiation-driven stellar wind from their surface. The neutron star is immersed in the wind and continuously captures gas that flows nearby. Vela X-1 is an example of this kind of system. In other systems, the neutron star orbits so close to its companion that its strong gravitational force can pull material from the companion's atmosphere into an orbit around itself, a mass transfer process known as Roche lobe overflow. The captured material forms a gaseous accretion disc and spirals inwards to ultimately fall onto the neutron star, as in the binary system Cen X-3. For still other types of X-ray pulsars, the companion star is a Be star that rotates very rapidly and apparently sheds a disk of gas around its equator. The orbits of the neutron star with these companions are usually large and very elliptical in shape. When the neutron star passes nearby or through the Be circumstellar disk, it will capture material and temporarily become an X-ray pulsar. The circumstellar disk around the Be star expands and contracts for unknown reasons, so these are transient X-ray pulsars that are observed only intermittently, often with months to years between episodes of observable X-ray pulsation. Spin behaviors Radio pulsars (rotation-powered pulsars) and X-ray pulsars exhibit very different spin behaviors and have different mechanisms producing their characteristic pulses, although it is accepted that both kinds of pulsar are manifestations of a rotating magnetized neutron star. The rotation cycle of the neutron star in both cases is identified with the pulse period. The major differences are that radio pulsars have periods on the order of milliseconds to seconds, and all radio pulsars are losing angular momentum and slowing down.
In contrast, the X-ray pulsars exhibit a variety of spin behaviors. Some X-ray pulsars are observed to be continuously spinning faster and faster or slower and slower (with occasional reversals in these trends) while others show either little change in pulse period or display erratic spin-down and spin-up behavior. The explanation of this difference can be found in the physical nature of the two pulsar classes. Over 99% of radio pulsars are single objects that radiate away their rotational energy in the form of relativistic particles and magnetic dipole radiation, lighting up any nearby nebulae that surround them. In contrast, X-ray pulsars are members of binary star systems and accrete matter from either stellar winds or accretion disks. The accreted matter transfers angular momentum to (or from) the neutron star causing the spin rate to increase or decrease at rates that are often hundreds of times faster than the typical spin down rate in radio pulsars. Exactly why the X-ray pulsars show such varied spin behavior is still not clearly understood. Observations X-ray pulsars are observed using X-ray telescopes that are satellites in low Earth orbit although some observations have been made, mostly in the early years of X-ray astronomy, using detectors carried by balloons or sounding rockets. The first X-ray pulsar to be discovered was Centaurus X-3, in 1971 with the Uhuru X-ray satellite. Anomalous X-ray pulsars Magnetars, isolated and highly-magnetised neutron stars, can be observed as relatively slow x-ray pulsars with periods of a few seconds. These are referred to as anomalous X-ray pulsars, but are unrelated to binary X-ray pulsars.
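The order-of-magnitude statements in the characteristics section can be checked with a short back-of-the-envelope calculation. The sketch below (Python, illustrative only) assumes a typical neutron star mass of 1.4 solar masses and a radius of 10 km — values not given in the text above — compares the quoted surface field of about 10⁸ teslas with the 60 μT field at the Earth's surface, and estimates the free-fall speed of accreted gas from v = √(2GM/R):

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

# Assumed "typical" neutron star parameters (not stated in the article text):
m_ns = 1.4 * M_SUN
r_ns = 1.0e4       # 10 km radius, in metres

# Ratio of the quoted surface field (~1e8 T) to Earth's surface field (60 microtesla)
field_ratio = 1e8 / 60e-6
print(f"field ratio ~ {field_ratio:.1e}")     # ~1.7e12, i.e. over a trillion

# Free-fall speed of gas arriving at the neutron star surface
v_ff = math.sqrt(2 * G * m_ns / r_ns)
print(f"free-fall speed ~ {v_ff / C:.2f} c")  # ~0.64 c, of order half the speed of light

Both results are consistent with the figures quoted above: a surface field over a trillion times stronger than Earth's, and infall speeds of roughly half the speed of light.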
Physical sciences
Stellar astronomy
Astronomy
810120
https://en.wikipedia.org/wiki/Ring%20galaxy
Ring galaxy
A ring galaxy is a galaxy with a circle-like appearance. Hoag's Object, discovered by Arthur Hoag in 1950, is an example of a ring galaxy. The ring contains many massive, relatively young blue stars, which are extremely bright. The central region contains relatively little luminous matter. Some astronomers believe that ring galaxies are formed when a smaller galaxy passes through the center of a larger galaxy. Because most of a galaxy consists of empty space, this "collision" rarely results in any actual collisions between stars. However, the gravitational disruptions caused by such an event could cause a wave of star formation to move through the larger galaxy. Other astronomers think that rings are formed around some galaxies when external accretion takes place. Star formation would then take place in the accreted material because of the shocks and compressions of the accreted material. Formation theories Ring galaxies are theorized to be formed through various methods including, but not limited to, the following scenarios: Bar instability A phenomenon where the rotational velocity of the bar in a barred spiral galaxy increases to the point of spiral spin-out. Under typical conditions, gravitational density waves would favor the creation of spiral arms. When the bar is unstable, these density waves are instead migrated out into a ring-structure by the pressure, force, and gravitational influence of the baryonic and dark matter furiously orbiting about the bar. This migration forces the stars, gas and dust found within the former arms into a torus-like region, forming a ring, and often igniting star formation. Galaxies with this structure have been found where the bar dominates, and essentially "carves out" the ring of the disc as it rotates. Oppositely, ring galaxies have been found where the bar has collapsed or disintegrated into a highly-flattened bulge. Despite this, observations suggest that bars, rings and spiral arms have the ability to fall apart and reform over the span of hundreds of millions of years, particularly in dense intergalactic environments, such as galaxy groups and clusters, where gravitational influences are more likely to play a role in the morphological and physical evolution of a galaxy without the influence of collisions and mergers. Galactic collisions Another observed way that ring galaxies can form is through the process of two or more galaxies colliding. The Cartwheel Galaxy, galaxy pair AM 2026-424, and Arp 147 are all examples of ring galaxies thought to be formed by this process. In pass-through galactic collisions, or bullseye collisions, an often smaller galaxy will pass directly through the disc of an often larger spiral, causing an outward push of the arms from the gravity of the smaller galaxy, as if dropping a rock into a pond of still water. These collisions can either launch the bulge and core away from the main disk, creating an almost empty ring appearance as the shockwave pushes the spiral arms out, or shove the core out towards the disk, often creating an oval-shaped ring with the bulge still somewhat intact. In side-swipe and head-on collisions, the appearance of a perfect ring are less likely, with chaotic and warped appearances dominating. 
Rings formed through collision processes are believed to be transient features of the affected galaxies, lasting only a few tens to hundreds of millions of years (a relatively short timeframe considering some mergers can take over a billion years to complete) before disintegrating, reforming into spiral arms, or succumbing to further disturbance from gravitational influence. Intergalactic medium accretion This method has been inferred through the existence of Hoag's Object, along with UV observations of several other large and ultra-large super spiral galaxies and current formation theories of spiral galaxies. UV-light observations show several cases of faint, ring-like and spiral structures of hot young stars that have formed along the network of cooled inflowing gas, extending far from the visible luminous galactic disc. If conditions are favorable, a ring can form in place of a spiral structure. Since some spiral galaxies are theorized to have formed from massive clouds of intergalactic gas collapsing and then rotationally forming into a disc structure, one could assume that a ring disc could form in place of a spiral disc if, as mentioned before, conditions are favorable. This holds true for protogalaxies, or galaxies just thought to be forming, and for old galaxies that have migrated into a region of space with a higher gas content than their previous locations. Gallery
Physical sciences
Galaxy classification
Astronomy
810266
https://en.wikipedia.org/wiki/Bonito
Bonito
Bonitos are a tribe of medium-sized, ray-finned predatory fish in the family Scombridae – a family it shares with the mackerel, tuna, and Spanish mackerel tribes, and also the butterfly kingfish. Also called the tribe Sardini, it consists of eight species across four genera; three of those four genera are monotypic, having a single species each. Bonitos closely resemble the skipjack tuna, which is often called a bonito, especially in Japanese contexts. Etymology The fish's name comes from the Portuguese and Spanish bonito (there's no evidence of the origin of the name), identical to the adjective meaning 'pretty'. However, the noun referring to the fish seems to come from the low and medieval Latin form boniton, a word with a strange structure and an obscure origin, related to the word byza, a possible borrowing from the Greek βῦζα, 'owl'. Species Genus Sarda (Cuvier, 1832) Australian bonito, S. australis (Macleay, 1881) Sarda chiliensis (Cuvier, 1832) Eastern Pacific bonito, S. c. chiliensis (Cuvier, 1832) Pacific bonito, S. c. lineolata (Girard, 1858) Striped bonito, S. orientalis (Temminck & Schlegel, 1844) Atlantic bonito, S. sarda (Bloch, 1793) Genus Cybiosarda (Whitley, 1935) Leaping bonito, C. elegans (Whitley, 1935) Genus Gymnosarda Gill, 1862 Dogtooth tuna, G. unicolor (Rüppell, 1836) Genus Orcynopsis Gill, 1862 Plain bonito, O. unicolor (Geoffroy Saint-Hilaire, 1817) As food Pacific and Atlantic bonito meat has a firm texture and a darkish color, as well as a moderate fat content. The meat of young or small bonito can be of light color, close to that of skipjack tuna, and is sometimes used as a cheap substitute for skipjack, especially for canning purposes, and occasionally in the production of cheap varieties of katsuobushi that are sold as bonito flakes. Bonito may not, however, be marketed as tuna in all countries. The Atlantic bonito is also found in the Mediterranean and the Black Sea, where it is a popular food fish, eaten grilled, pickled (lakerda), or baked.
Biology and health sciences
Acanthomorpha
Animals
811349
https://en.wikipedia.org/wiki/Japanese%20spider%20crab
Japanese spider crab
The Japanese spider crab (Macrocheira kaempferi) is a species of marine crab and is the biggest one that lives in the waters around Japan. At around 3.7 meters, it has the largest leg-span of any arthropod. The Japanese name for this species is taka-ashi-gani (Japanese: タカアシガニ), literally translating to "tall legs crab". It goes through three main larval stages along with a prezoeal stage to grow to its great size. The genus Macrocheira contains multiple species. Two fossil species of this genus have been found: M. ginzanensis and M. yabei, both from the Miocene of Japan. Its diverse taxonomic history is an important part of what these creatures are and how they evolved to be what they are today. They are sought by crab fisheries, and are considered a delicacy in Japan. To prevent overexploitation from harming the species, conservation efforts have been put in place to protect them and their population from overfishing. The Japanese spider crab is similar in appearance to the much smaller European spider crab (Maja squinado), though the latter, while within the same superfamily, belongs to a different family: the Majidae. Description The Japanese spider crab has the greatest leg span of any known arthropod, reaching up to about 3.7 m from claw to claw. The body may grow to in carapace width and the whole crab can weigh up to —second in mass only to the American lobster among all living arthropod species. The males have the longer chelipeds; females have much shorter chelipeds, which are shorter than the following pair of legs. Apart from its large size, the Japanese spider crab differs from other crabs in a number of ways. The first pleopods of males are unusually twisted, and the larvae appear primitive. The crab is orange with white spots along the legs. It is reported to have a gentle disposition despite its ferocious appearance. The Japanese spider crab also has a distinctive molting behavior that lasts about 100 minutes, during which the crab loses its mobility; molting begins at the rear of the carapace and ends with the walking legs. The Japanese spider crab has an armored exoskeleton that helps protect it from larger predators such as octopuses, but it also uses camouflage. The crab's bumpy carapace blends into the rocky ocean floor. To further the deception, a spider crab adorns its shell with sponges and other animals. The way in which a spider crab is able to pick up and cover itself with such organisms follows a specific routine behavior. The crab picks up an object with its slender chelipeds, using the chelae to twist and tear off the organism, such as a worm tube or sponge, from the substrate on which it resides. Once the material is picked up, it is brought to the crab's mouthparts to be oriented and shaped before it is attached to the exoskeleton. Then, through mechanical adhesion and secretions, the materials attach to the crab and are able to regenerate and colonize there. Unlike other species of crab, such as the Chilean crab Acanthonyx petiveri, the Japanese spider crab does not specifically look for matching colors to blend into its environment; it simply camouflages in a way that disguises its entire structure. This is most likely because Japanese spider crabs are nocturnally active, so instead of trying to disguise themselves when catching prey, they are actually just trying to avoid predators at night.
Distribution and habitat
Japanese spider crabs are mostly found off the southern coasts of the Japanese island of Honshū, from Tokyo Bay to Kagoshima Prefecture. Outlying populations have been found in Iwate Prefecture and off Su-ao in Taiwan. Adults are found at depths between . They like to inhabit vents and holes in the deeper parts of the ocean. The temperature preference of adults is unknown, but the species is regular at a depth of in Suruga Bay, where the water generally is about . Based on results from public aquaria, Japanese spider crabs tolerate temperatures between , but are typically maintained at . The Japanese spider crab is an omnivore, consuming both plant-matter and animals. It also sometimes acts as a scavenger, consuming dead animals. Some have been known to scrape the ocean floor for plants and algae, while others pry open the shells of mollusks.
Lifecycle
Female crabs carry the fertilized eggs attached to their abdominal appendages until they hatch into tiny planktonic larvae. They can lay up to 1.5 million eggs per season, and these eggs hatch in 10 days on average. Once hatched, these larvae undergo four stages of development before they mature into adulthood. The first, or prezoeal, stage lasts only a matter of minutes, with most molting within 15 minutes to enter the first zoeal stage. It looks very different from its parents at this stage, with a small, transparent body. M. kaempferi undergoes two zoeal stages and a megalopa stage before it reaches adulthood. Each of these stages is influenced greatly by temperature, both in terms of survival and stage length. The optimum rearing temperature for all larval stages is thought to be between 15 and 18 °C, with survival temperatures ranging from . At these temperatures, the zoeal stages can last 7 to 18 days, with the megalopa stage lasting 25 to 45 days. Colder water is associated with longer durations in each stage. During the larval stages, M. kaempferi is found near the surface, as the planktonic forms drift with ocean currents. This surface water ranges between 12 and 15 °C during the hatching season (January to March). This is much warmer than the waters at depths below , where adults are found, with waters steadily around . At optimal temperatures, survival through the first zoeal stage is about 70%, which falls to about 30% in the second zoeal and megalopa stages.
Taxonomic history
The Japanese spider crab was originally described by Western science in 1836 by Coenraad Jacob Temminck under the name Maja kaempferi, based on material from Philipp Franz von Siebold collected near the artificial island Dejima. The specific epithet commemorates Engelbert Kaempfer, a German naturalist who lived in Japan from 1690 to 1692 and wrote about the country's natural history. It was moved to the genus Inachus by Wilhelm de Haan in 1839, but placed in a new subgenus, Macrocheira. That subgenus was raised to the rank of genus in 1886 by Edward J. Miers. Placed in the family Inachidae, M. kaempferi does not fit cleanly into that group, and it may be necessary to erect a new family just for the genus Macrocheira. Four species of the genus Macrocheira are known from fossils:
Macrocheira sp. – Pliocene Takanabe Formation, Japan
M. ginzanensis – Miocene Ginzan Formation, Japan
M. yabei – Miocene Yonekawa Formation, Japan
M. teglandi – Oligocene, east of Twin River, Washington, United States
However, anatomical evidence indicates that the genus Macrocheira does belong within this family in some way. This genus is similar in anatomical arrangement to the genus Oncinopus, seeming to preserve the earliest stage of anatomical evolution in the family Inachidae. The genus Oncinopus has a semi-hardened body, which allows the basal segment of the antennae, which articulates with the head capsule, to move. The antennulae, which are segmented appendages between and below the eye stalks, are connected. Like Oncinopus, the genus Macrocheira also has a seven-segmented abdomen and a basal segment of antennae that is mobile. Macrocheira also has orbital parts, the eye socket and features around it, that are similar to differentiated genera. Another differentiating feature is the supraorbital eave. It is part of the orbital region above the eyestalks. It projects laterally and becomes part of the spine. From the anatomical observations of this genus and others in the family Inachidae, Macrocheira was placed in the family Inachidae, descending from the genus Oncinopus, from which the genera Oreconia, Parapleisticantha, and Pleistincantha also descend.
Anatomy
M. kaempferi is a giant crab with a pear-shaped carapace that is 350 mm (13.7 in) when measured on the median line. Its surface is covered in small spike-like projections or tubercles. The spine of an adult giant crab is short and curves outward at the tip. The spines in young giant crabs, though, are long compared to their carapaces, and uncurved. This proportionality shows that, as in other decapod crustaceans, relative spine size decreases as specimens grow older. As mentioned in the taxonomic section, this genus contains the family's primitive feature of a movable antenna at the basal segment, but "the development of a spine at the posterior angle of the supraocular eave, and the presence of intercalated spine and antennulary septum seem to attribute a rather high position to this genus." Lastly, differences are seen between the sexes. Adult males have very long front legs where the claws are located, but these are still shorter than the ambulatory legs of females, located in the back of the carapace.
Fishery and conservation
Temminck, in his original description, noted that the crab was known to the Japanese for the serious injuries it can cause with its strong claws. The Japanese spider crab is "occasionally collected for food", and even considered a delicacy in many parts of Japan and other areas in the region. In total, were collected in 1976, but fell to only in 1985. The fishery is centred on Suruga Bay. The crabs are typically caught using small trawling nets. The population has decreased in number due to overfishing, forcing fishermen into exploring deeper waters to catch them. The average size caught by fishermen is a legspan of . Populations of this species of crab have diminished over recent years and many efforts are being made to protect them. One of the primary recovery methods being used is restocking fisheries with artificially cultured juvenile crabs. Additionally, laws have been put into place in Japan that prohibit fishermen from harvesting spider crabs from January through April, during their typical mating season when they are in shallower waters and more vulnerable to being caught.
This protection method seeks to keep natural populations growing, and allows juvenile spider crabs time to pass through the early stages of their lifecycle.
Biology and health sciences
Crabs and hermit crabs
Animals
811693
https://en.wikipedia.org/wiki/Laboratory%20mouse
Laboratory mouse
The laboratory mouse or lab mouse is a small mammal of the order Rodentia which is bred and used for scientific research or as feeders for certain pets. Mice used in laboratories are usually of the species Mus musculus. They are the most commonly used mammalian research model and are used for research in genetics, physiology, psychology, medicine and other scientific disciplines. Mice belong to the Euarchontoglires clade, which includes humans. This close relationship, the associated high homology with humans, their ease of maintenance and handling, and their high reproduction rate, make mice particularly suitable models for human-oriented research. The laboratory mouse genome has been sequenced and many mouse genes have human homologues. Lab mice are sold at pet stores for snake food and can also be kept as pets. Other mouse species sometimes used in laboratory research include two American species, the white-footed mouse (Peromyscus leucopus) and the North American deer mouse (Peromyscus maniculatus).
History as a biological model
Mice have been used in biomedical research since the 17th century when William Harvey used them for his studies on reproduction and blood circulation and Robert Hooke used them to investigate the biological consequences of an increase in air pressure. During the 18th century Joseph Priestley and Antoine Lavoisier both used mice to study respiration. In the 19th century Gregor Mendel carried out his early investigations of inheritance on mouse coat color but was asked by his superior to stop breeding in his cell "smelly creatures that, in addition, copulated and had sex". He then switched his investigations to peas but, as his observations were published in a somewhat obscure botanical journal, they were virtually ignored for over 35 years until they were rediscovered in the early 20th century. In 1902 Lucien Cuénot published the results of his experiments using mice which showed that Mendel's laws of inheritance were also valid for animals — results that were soon confirmed and extended to other species. In the early part of the 20th century, Harvard undergraduate Clarence Cook Little was conducting studies on mouse genetics in the laboratory of William Ernest Castle. Little and Castle collaborated closely with Abbie Lathrop who was a breeder of fancy mice and rats which she marketed to rodent hobbyists and keepers of exotic pets, and later began selling in large numbers to scientific researchers. Together they generated the DBA (Dilute, Brown and non-Agouti) inbred mouse strain and initiated the systematic generation of inbred strains. The mouse has since been used extensively as a model organism and is associated with many important biological discoveries of the 20th and 21st Centuries. The Jackson Laboratory in Bar Harbor, Maine is currently one of the world's largest suppliers of laboratory mice, at around 3 million mice a year. The laboratory is also the world's source for more than 8,000 strains of genetically defined mice and is home of the Mouse Genome Informatics database.
Reproduction
Breeding onset occurs at about 50 days of age in both females and males, although females may have their first estrus at 25–40 days. Mice are polyestrous and breed year round; ovulation is spontaneous. The duration of the estrous cycle is 4–5 days; estrus itself lasts about 12 hours and occurs in the evening. Vaginal smears are useful in timed matings to determine the stage of the estrous cycle.
Mating can be confirmed by the presence of a copulatory plug in the vagina up to 24 hours post-copulation. The presence of sperm on a vaginal smear is also a reliable indicator of mating. The average gestation period is 20 days. A fertile postpartum estrus occurs 14–24 hours following parturition, and simultaneous lactation and gestation prolongs gestation by 3–10 days owing to delayed implantation. The average litter size is 10–12 during optimum production, but is highly strain-dependent. As a general rule, inbred mice tend to have longer gestation periods and smaller litters than outbred and hybrid mice. The young are called pups and weigh at birth, are hairless, and have closed eyelids and ears. Pups are weaned at 3 weeks of age when they weigh about . If the female does not mate during the postpartum estrus, she resumes cycling 2–5 days post-weaning. Newborn males are distinguished from newborn females by noting the greater anogenital distance and larger genital papilla in the male. This is best accomplished by lifting the tails of littermates and comparing perinea.
Genetics and strains
Mice are mammals of the clade (a group consisting of an ancestor and all its descendants) Euarchontoglires, which means they are amongst the closest non-primate relatives of humans along with lagomorphs, treeshrews, and flying lemurs. Laboratory mice are the same species as the house mouse; however, they are often very different in behaviour and physiology. There are hundreds of established inbred, outbred, and transgenic strains. A strain, in reference to rodents, is a group in which all members are as nearly as possible genetically identical. In laboratory mice, this is accomplished through inbreeding. By having this type of population, it is possible to conduct experiments on the roles of genes, or conduct experiments that exclude genetic variation as a factor. In contrast, outbred populations are used when identical genotypes are unnecessary or a population with genetic variation is required, and are usually referred to as stocks rather than strains. Over 400 standardized, inbred strains have been developed. Most laboratory mice are hybrids of different subspecies, most commonly of Mus musculus domesticus and Mus musculus musculus. Laboratory mice can have a variety of coat colours, including agouti, black and albino. Many (but not all) laboratory strains are inbred. The different strains are identified with specific letter-digit combinations; for example C57BL/6 and BALB/c. The first such inbred strains were produced in 1909 by Clarence Cook Little, who was influential in promoting the mouse as a laboratory organism. In 2011, an estimated 83% of laboratory rodents supplied in the U.S. were C57BL/6 laboratory mice.
Genome
Sequencing of the laboratory mouse genome was completed in late 2002 using the C57BL/6 strain. This was only the second mammalian genome to be sequenced after humans. The haploid genome is about three billion base pairs long (3,000 Mb distributed over 19 autosomal chromosomes plus 1 or 2 sex chromosomes), and is therefore roughly equal in size to the human genome. Estimating the number of genes contained in the mouse genome is difficult, in part because the definition of a gene is still being debated and extended. The current count of primary coding genes in the laboratory mouse is 23,139, compared to an estimated 20,774 in humans.
Mutant and transgenic strains
Various mutant strains of mice have been created by a number of methods.
A small selection from the many available strains includes:
Mice resulting from ordinary breeding and inbreeding:
Non-obese diabetic (NOD) mice, which develop diabetes mellitus type 1
Murphy Roths large (MRL) mice, with unusual regenerative capacities
Japanese waltzing mice, which walk in a circular pattern due to a mutation adversely affecting their inner ears
Immunodeficient nude mice, lacking hair and a thymus: these mice do not produce T lymphocytes; therefore, they do not mount cellular immune responses. They are used for research in immunology and transplantation.
Severe combined immunodeficiency (SCID) mice, with an almost completely defective immune system
FVB mice, whose large litter sizes and large oocyte pronuclei expedite use in genetic research
Mice carrying the autosomal recessive tx mutation, which fail to recruit nutrient copper into milk, causing pup death. The mutation arose in an inbred strain; Theophilos et al. 1996 found this to be genetic and localized to chromosome 8, near the centromere.
Transgenic mice, with foreign genes inserted into their genome:
Abnormally large mice, with an inserted rat growth hormone gene
Oncomice, with an activated oncogene, so as to significantly increase the incidence of cancer
Doogie mice, with enhanced NMDA receptor function, resulting in improved memory and learning
Knockout mice, where a specific gene was made inoperable by a technique known as gene knockout: the purpose is to study the function of the gene's product or to simulate a human disease
Fat mice, prone to obesity due to a carboxypeptidase E deficiency
Strong muscular mice, with a disabled myostatin gene, nicknamed "mighty mice"
Since 1998, it has been possible to clone mice from cells derived from adult animals.
Commonly used inbred strains
There are many strains of mice used in research; however, inbred strains are usually the animals of choice for most fields. Inbred mice are defined as being the product of at least 20 generations of brother X sister mating, with all individuals being derived from a single breeding pair. Inbred mice have several traits that make them ideal for research purposes. They are isogenic, meaning that all animals are nearly genetically identical. Approximately 98.7% of the genetic loci in the genome are homozygous, so there are probably no "hidden" recessive traits that could cause problems. They also have very unified phenotypes due to this stability. Many inbred strains have well documented traits that make them ideal for specific types of research. The following table shows the top 10 most popular strains according to Jackson Laboratories.
Jackson Labs DO project
The Jackson Labs DO (Diversity Outbred) project is a mouse breeding program using multiple inbred founder strains to create a genetically diverse population of mice for use in scientific research. These mice are designed for fine genetic mapping, and capture a large portion of the genetic diversity of the mouse genome. This project has resulted in over 1,000 genetically diverse mice which have been used to identify genetic factors for diseases such as obesity, cancer, diabetes, and alcohol use disorder.
Appearance and behaviour
Laboratory mice have retained many of the physical and behavioural characteristics of house mice; however, due to many generations of artificial selection, some of these characteristics now vary markedly.
Due to the large number of strains of laboratory mice, it is impractical to comprehensively describe the appearance and behaviour of all of them; however, they are described below for two of the most commonly used strains.
C57BL/6
C57BL/6 mice have a dark brown, nearly black coat. They are more sensitive to noise and odours and are more likely to bite than the more docile laboratory strains such as BALB/c. Group-housed C57BL/6 mice (and other strains) display barbering behaviour, which used to be seen as a sign of dominance. However, it is now known that this is more of a stereotypical behaviour triggered by stress, comparable to trichotillomania in humans or feather plucking in parrots. Mice that have been barbered extensively can have large bald patches on their bodies, commonly around the head, snout, and shoulders, although barbering may appear anywhere on the body. Self-barbering can also occur. Both hair and vibrissae may be removed. Barbering is more frequently seen in female mice; male mice are more likely to display dominance through fighting. C57BL/6 has several unusual characteristics which make it useful for some research studies but inappropriate for others: It is unusually sensitive to pain and to cold, and analgesic medications are less effective in this strain. Unlike most laboratory mouse strains, the C57BL/6 drinks alcoholic beverages voluntarily. It is more susceptible than average to morphine addiction, atherosclerosis, and age-related hearing loss. When compared directly to BALB/c mice, C57BL/6 mice also express both a robust response to social rewards and empathy.
BALB/c
BALB/c is an albino laboratory-bred strain from which a number of common substrains are derived. With over 200 generations bred since 1920, BALB/c mice are distributed globally and are among the most widely used inbred strains in animal experimentation. BALB/c are noted for displaying high levels of anxiety and for being relatively resistant to diet-induced atherosclerosis, making them a useful model for cardiovascular research. Male BALB/c mice are aggressive and will fight other males if housed together. However, the BALB/Lac substrain is much more docile. Most BALB/c substrains have a long reproductive life-span. There are noted differences between different BALB/c substrains, though these are thought to be due to mutation rather than genetic contamination. The BALB/cWt is unusual in that 3% of progeny display true hermaphroditism.
Tg2576
A useful model for Alzheimer's disease (AD) in the lab is the Tg2576 strain of mice. This strain expresses the K670N and M671L double mutations seen in the human 695 splice-variant of the amyloid precursor protein (APP). Expression is driven by a hamster prion protein gene promoter, predominantly in neurons. When compared to non-transgenic littermates, Tg2576 mice show a five-fold rise in Aβ40 and a 10- to 15-fold increase in Aβ42/43. These mice develop senile plaques linked to cellular inflammatory responses because their brains have approximately five times as much transgenic mutant human APP as endogenous mouse APP. The mice exhibit main characteristics of Alzheimer's disease (AD), such as increased generation of amyloid fibrils with aging, plaque formation, and impaired hippocampal learning and memory. Tg2576 mice are a good model for early-stage AD because they show amyloidogenesis and working memory impairments linked to age but do not show neuronal degeneration.
The absence of cell death suggests that changes in typical cellular signaling cascades involved in learning and synaptic plasticity are probably linked to the memory phenotype. Associative learning impairments are exacerbated when Tg2576 mice are crossed with PS1 transgenic animals that possess the A246E FAD mutation. This cross promotes the build-up of amyloid and plaque development in the CNS. This lends credence to the theory that AD pathogenesis is influenced by the interplay between APP and PS-1 gene products. Although Tg2576 mice do not perfectly replicate late-stage AD with cell death, they do offer a platform for researching the physiology and biochemistry of the illness. With the help of transgenic mouse models, researchers can make progress in AD research by understanding the intricate relationships between gene products that are involved in the production of Aβ peptide.
Husbandry
Handling
Traditionally, laboratory mice have been picked up by the base of the tail. However, recent research has shown that this type of handling increases anxiety and aversive behaviour. Instead, handling mice using a tunnel or cupped hands is advocated. In behavioural tests, tail-handled mice show less willingness to explore and to investigate test stimuli, as opposed to tunnel-handled mice which readily explore and show robust responses to test stimuli.
Nutrition
In nature, mice are usually herbivores, consuming a wide range of fruit or grain. However, in laboratory studies it is usually necessary to avoid biological variation and to achieve this, laboratory mice are almost always fed only commercial pelleted mouse feed. Food intake is approximately per of body weight per day; water intake is approximately per 100 g of body weight per day.
Injection procedures
Routes of administration of injections in laboratory mice are mainly subcutaneous, intraperitoneal and intravenous. Intramuscular administration is not recommended due to small muscle mass. Intracerebral administration is also possible. Each route has a recommended injection site, approximate needle gauge and recommended maximum injected volume at a single time at one site, as given in the table below: To facilitate intravenous injection into the tail, laboratory mice can be carefully warmed under heat lamps to vasodilate the vessels.
Anaesthesia
A common regimen for general anesthesia for the house mouse is ketamine (in the dose of 100 mg per kg body weight) plus xylazine (in the dose of 5–10 mg per kg), injected by the intraperitoneal route; a rough, illustrative example of scaling this per-kilogram regimen to an individual animal is sketched further below. It has a duration of effect of about 30 minutes.
Euthanasia
Approved procedures for euthanasia of laboratory mice include compressed gas, injectable barbiturate anesthetics, inhalable anesthetics, such as halothane, and physical methods, such as cervical dislocation and decapitation. In 2013, the American Veterinary Medical Association issued new guidelines for induction, stating that a gas flow rate of 10% to 30% of the chamber volume per minute is optimal for euthanizing laboratory mice.
Pathogen susceptibility
A recent study detected a murine astrovirus in laboratory mice held at more than half of the US and Japanese institutes investigated. Murine astrovirus was found in nine mice strains, including NSG, NOD-SCID, NSG-3GS, C57BL6-Timp-3−/−, uPA-NOG, B6J, ICR, Bash2, and BALB/C, with various degrees of prevalence. The pathogenicity of the murine astrovirus was not known.
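The ketamine–xylazine regimen quoted under Anaesthesia above is specified per kilogram of body weight, so the actual dose scales with the individual animal. The short Python sketch below is a minimal illustration of that arithmetic only: it converts the quoted 100 mg/kg ketamine and 5–10 mg/kg xylazine doses into per-mouse amounts and an approximate injection volume. The stock concentrations, the xylazine midpoint, and the 25 g example body weight are assumptions chosen for the example rather than values from this article, and the sketch is illustrative, not veterinary guidance.

def anesthesia_dose(body_weight_g,
                    ketamine_mg_per_kg=100.0,      # regimen quoted above
                    xylazine_mg_per_kg=7.5,        # assumed midpoint of the 5-10 mg/kg range
                    ketamine_stock_mg_per_ml=10.0, # assumed diluted working stock
                    xylazine_stock_mg_per_ml=1.0): # assumed diluted working stock
    """Return (ketamine_mg, xylazine_mg, total_injection_volume_ml) for one mouse."""
    kg = body_weight_g / 1000.0
    ketamine_mg = ketamine_mg_per_kg * kg
    xylazine_mg = xylazine_mg_per_kg * kg
    volume_ml = (ketamine_mg / ketamine_stock_mg_per_ml
                 + xylazine_mg / xylazine_stock_mg_per_ml)
    return ketamine_mg, xylazine_mg, volume_ml

# Example: under these assumptions a 25 g mouse receives 2.5 mg ketamine,
# about 0.19 mg xylazine, and a total injection volume of roughly 0.44 ml.
print(anesthesia_dose(25.0))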
Legislation in research United Kingdom In the U.K., as with all other vertebrates and some invertebrates, any scientific procedure which is likely to cause "pain, suffering, distress or lasting harm" is regulated by the Home Office under the Animals (Scientific Procedures) Act 1986. U.K. regulations are considered amongst the most comprehensive and rigorous in the world. Detailed data on the use of laboratory mice (and other species) in research in the U.K. are published each year. In the U.K. in 2013, there were a total of 3,077,115 regulated procedures undertaken on mice in scientific procedure establishments, licensed under the Act. United States In the U.S., laboratory mice are not regulated under the Animal Welfare Act administered by the USDA APHIS. However, the Public Health Service Act (PHS) as administered by the National Institutes of Health does offer a standard for their care and use. Compliance with the PHS is required for a research project to receive federal funding. PHS policy is administered by the Office of Laboratory Animal Welfare. Many academic research institutes seek accreditation voluntarily, often through the Association for Assessment and Accreditation of Laboratory Animal Care, which maintains the standards of care found within The Guide for the Care and Use of Laboratory Animals and the PHS policy. This accreditation is, however, not a prerequisite for federal funding, unlike the actual compliance. Limitations While mice are by far the most widely used animals in biomedical research, recent studies have highlighted their limitations. For example, the utility of rodents in testing for sepsis, burns, inflammation, stroke, ALS, Alzheimer's disease, diabetes, cancer, multiple sclerosis, Parkinson's disease, and other illnesses has been called into question by a number of researchers. Regarding experiments on mice, some researchers have complained that "years and billions of dollars have been wasted following false leads" as a result of a preoccupation with the use of these animals in studies. Mice differ from humans in several immune properties: mice are more resistant to some toxins than humans; have a lower total neutrophil fraction in the blood, a lower neutrophil enzymatic capacity, lower activity of the complement system, and a different set of pentraxins involved in the inflammatory process; and lack genes for important components of the immune system, such as IL-8, IL-37, TLR10, ICAM-3, etc. Laboratory mice reared in specific-pathogen-free (SPF) conditions usually have a rather immature immune system with a deficit of memory T cells. These mice may have limited diversity of the microbiota, which directly affects the immune system and the development of pathological conditions. Moreover, persistent virus infections (for example, herpesviruses) are activated in humans, but not in SPF mice with septic complications and may change the resistance to bacterial coinfections. "Dirty" mice are possibly better suitable for mimicking human pathologies. In addition, inbred mouse strains are used in the overwhelming majority of studies, while the human population is heterogeneous, pointing to the importance of studies in interstrain hybrid, outbred, and nonlinear mice. 
An article in The Scientist notes, "The difficulties associated with using animal models for human disease result from the metabolic, anatomic, and cellular differences between humans and other creatures, but the problems go even deeper than that", including issues with the design and execution of the tests themselves. In addition, the caging of laboratory animals may render them irrelevant models of human health because these animals lack day-to-day variations in experiences, agency, and challenges that they can overcome. The impoverished environments inside small mouse cages can have deleterious influences on biomedical results, especially with respect to studies of mental health and of systems that depend upon healthy psychological states. For example, researchers have found that many mice in laboratories are obese from excess food and minimal exercise, which alters their physiology and drug metabolism. Many laboratory animals, including mice, are chronically stressed, which can also negatively affect research outcomes and the ability to accurately extrapolate findings to humans. Researchers have also noted that many studies involving mice are poorly designed, leading to questionable findings. Some studies suggest that inadequate published data in animal testing may result in irreproducible research, with details about how experiments are done omitted from published papers, or with differences in testing that may introduce bias. Examples of hidden bias include a 2014 study from McGill University which suggests that mice handled by men rather than women showed higher stress levels. Another study in 2016 suggested that gut microbiomes in mice may have an impact upon scientific research.
Market size
The worldwide market for gene-altered mice is predicted to reach $1.59 billion by 2022, growing at a rate of 7.5 percent per year.
Biology and health sciences
Rodents
Animals
812186
https://en.wikipedia.org/wiki/Poison%20dart%20frog
Poison dart frog
Poison dart frog (also known as dart-poison frog, poison frog or formerly known as poison arrow frog) is the common name of a group of frogs in the family Dendrobatidae which are native to tropical Central and South America. These species are diurnal and often have brightly colored bodies. This bright coloration is correlated with the toxicity of the species, making them aposematic. Some species of the family Dendrobatidae exhibit extremely bright coloration along with high toxicity — a feature derived from their diet of ants, mites and termites— while species which eat a much larger variety of prey have cryptic coloration with minimal to no amount of observed toxicity. Many species of this family are threatened due to human infrastructure encroaching on their habitats. These amphibians are often called "dart frogs" due to the aboriginal South Americans' use of their toxic secretions to poison the tips of blowdarts. However, out of over 170 species, only four have been documented as being used for this purpose (curare plants are more commonly used for aboriginal South American darts) all of which come from the genus Phyllobates, which is characterized by the relatively large size and high levels of toxicity of its members. Characteristics Most species of poison dart frogs are small, sometimes less than in adult length, although a few grow up to in length. They weigh 1 oz. on average. Most poison dart frogs are brightly colored, displaying aposematic patterns to warn potential predators. Their bright coloration is associated with their toxicity and levels of alkaloids. For example, frogs of the genus Dendrobates have high levels of alkaloids, whereas Colostethus species are cryptically colored and are not toxic. Poison dart frogs are an example of an aposematic organism. Their bright coloration advertises unpalatability to potential predators. Aposematism is currently thought to have originated at least four times within the poison dart family according to phylogenetic trees, and dendrobatid frogs have since undergone dramatic divergences – both interspecific and intraspecific – in their aposematic coloration. This is surprising given the frequency-dependent nature of this type of defense mechanism. Adult frogs lay their eggs in moist places, including on leaves, in plants, among exposed roots, and elsewhere. Once the eggs hatch, the adult piggybacks the tadpoles, one at a time, to suitable water: either a pool, or the water gathered in the throat of bromeliads or other plants. The tadpoles remain there until they metamorphose, in some species fed by unfertilized eggs laid at regular intervals by the mother. Habitat Poison dart frogs are endemic to humid, tropical environments of Central and South America. These frogs are generally found in tropical rainforests, including in Bolivia, Costa Rica, Brazil, Colombia, Ecuador, Venezuela, Suriname, French Guiana, Peru, Panama, Guyana, Nicaragua, and Hawaii (introduced). Natural habitats include moist, lowland forests (subtropical and tropical), high-altitude shrubland (subtropical and tropical), moist montanes and rivers (subtropical and tropical), freshwater marshes, intermittent freshwater marshes, lakes and swamps. Other species can be found in seasonally wet or flooded lowland grassland, arable land, pastureland, rural gardens, plantations, moist savanna and heavily degraded former forest. Premontane forests and rocky areas have also been known to hold frogs. 
Dendrobatids tend to live on or close to the ground, but also in trees as much as from the ground. Taxonomy Dart frogs are the focus of major phylogenetic studies, and undergo taxonomic changes frequently. The family Dendrobatidae currently contains 16 genera, with about 200 species. Color morphs Some poison dart frogs species include a number of conspecific color morphs that emerged as recently as 6,000 years ago. Therefore, species such as Dendrobates tinctorius, Oophaga pumilio, and Oophaga granulifera can include color pattern morphs that can be interbred (colors are under polygenic control, while the actual patterns are probably controlled by a single locus). Differing coloration has historically misidentified single species as separate, and there is still controversy among taxonomists over classification. Variation in predation regimens may have influenced the evolution of polymorphism in Oophaga granulifera, while sexual selection appears to have contributed to differentiation among the Bocas del Toro populations of Oophaga pumilio. Toxicity and medicine The chemical defense mechanisms of the Dendrobates family are the result of exogenous means. Essentially, this means that their ability to defend has come through the consumption of a particular diet – in this case, toxic arthropods – from which they absorb and reuse the consumed toxins. The secretion of these chemicals is released by the granular glands of the frog. The chemicals secreted by the Dendrobatid family of frogs are alkaloids that differ in chemical structure and toxicity. Many poison dart frogs secrete lipophilic alkaloid toxins such as allopumiliotoxin 267A, batrachotoxin, epibatidine, histrionicotoxin, and pumiliotoxin 251D through their skin. Alkaloids in the skin glands of poison dart frogs serve as a chemical defense against predation, and they are therefore able to be active alongside potential predators during the day. About 28 structural classes of alkaloids are known in poison dart frogs. The most toxic of poison dart frog species is Phyllobates terribilis. It is believed that dart frogs do not synthesize their poisons, but sequester the chemicals from arthropod prey items, such as ants, centipedes and mites – the diet-toxicity hypothesis. Because of this, captive-bred animals do not possess significant levels of toxins as they are reared on diets that do not contain the alkaloids sequestered by wild populations. Nonetheless, the captive-bred frogs retain the ability to accumulate alkaloids when they are once again provided an alkaloidal diet. Despite the toxins used by some poison dart frogs, some predators have developed the ability to withstand them. One is the snake Erythrolamprus epinephalus, which has developed immunity to the poison. Chemicals extracted from the skin of Epipedobates tricolor may have medicinal value. Scientists use this poison to make a painkiller. One such chemical is a painkiller 200 times as potent as morphine, called epibatidine; however, the therapeutic dose is very close to the fatal dose. A derivative, ABT-594, developed by Abbott Laboratories, was named as Tebanicline and got as far as Phase II trials in humans, but was dropped from further development due to dangerous gastrointestinal side effects. Secretions from dendrobatids are also showing promise as muscle relaxants, heart stimulants and appetite suppressants. 
The most poisonous of these frogs, the golden poison frog (Phyllobates terribilis), has enough toxin on average to kill ten to twenty men or about twenty thousand mice. Most other dendrobatids, while colorful and toxic enough to discourage predation, pose far less risk to humans or other large animals. Conspicuousness Conspicuous coloration in these frogs is further associated with diet specialization, body mass, aerobic capacity, and chemical defense. Conspicuousness and toxicity may be inversely related, as polymorphic poison dart frogs that are less conspicuous are more toxic than the brightest and most conspicuous species. Energetic costs of producing toxins and bright color pigments lead to potential trade-offs between toxicity and bright coloration, and prey with strong secondary defenses have less to gain from costly signaling. Therefore, prey populations that are more toxic are predicted to manifest less bright signals, opposing the classical view that increased conspicuousness always evolves with increased toxicity. Aposematism Skin toxicity evolved alongside bright coloration, perhaps preceding it. Toxicity may have relied on a shift in diet to alkaloid-rich arthropods, which likely occurred at least four times among the dendrobatids. Either aposematism and aerobic capacity preceded greater resource gathering, making it easier for frogs to go out and gather the ants and mites required for diet specialization, contrary to classical aposematic theory, which assumes that toxicity from diet arises before signaling. Alternatively, diet specialization preceded higher aerobic capacity, and aposematism evolved to allow dendrobatids to gather resources without predation. Prey mobility could also explain the initial development of aposematic signaling. If prey have characteristics that make them more exposed to predators, such as when some dendrobatids shifted from nocturnal to diurnal behavior, then they have more reason to develop aposematism. After the switch, the frogs had greater ecological opportunities, causing dietary specialization to arise. Thus, aposematism is not merely a signaling system, but a way for organisms to gain greater access to resources and increase their reproductive success. Other factors Dietary conservatism (long-term neophobia) in predators could facilitate the evolution of warning coloration, if predators avoid novel morphs for a long enough period of time. Another possibility is genetic drift, the so-called gradual-change hypothesis, which could strengthen weak pre-existing aposematism. Sexual selection may have played a role in the diversification of skin color and pattern in poison frogs. With female preferences in play, male coloration could evolve rapidly. Sexual selection is influenced by many things. The parental investment may shed some light on the evolution of coloration in relation to female choice. In Oophaga pumilio, the female provides care for the offspring for several weeks whereas the males provides care for a few days, implying a strong female preference. Sexual selection increases phenotypic variation drastically. In populations of O. pumilio that participated in sexual selection, the phenotypic polymorphism was evident. The lack of sexual dimorphism in some dendrobatid populations however suggests that sexual selection is not a valid explanation. Functional trade-offs are seen in poison frog defense mechanisms relating to toxin resistance. 
Poison dart frogs containing epibatidine have undergone a three-amino-acid mutation in their nicotinic acetylcholine receptors, allowing the frog to be resistant to its own poison. Epibatidine-producing frogs have evolved poison resistance of body receptors independently three times. This target-site insensitivity to the potent toxin epibatidine on nicotinic acetylcholine receptors provides a toxin resistance while reducing the affinity of acetylcholine binding.
Diet
The diet of Dendrobatidae is what gives them the alkaloids/toxins that are found in their skin. The diet that is responsible for these characteristics consists primarily of small and leaf-litter arthropods found in its general habitat, typically ants. Their diet, however, is typically separated into two distinct categories. The first is the primary portion of Dendrobatidae's diet, which includes prey that are slow-moving, large in number, and small in size. This typically consists of ants, while also including mites, small beetles, and minor litter-dwelling taxa. The second category of prey consists of much rarer finds that are much larger in body size and tend to have high palatability and mobility. These typically consist of the orthopteroids, lepidopteran larvae, and spiders. The natural diet of an individual dendrobatid depends on its species and prey abundance in its location, amongst other factors.
Behavior
Aggressive behavior and territoriality
The Dendrobatidae are a family of species very well known for their territorial and aggressive behavior not only as tadpoles, but as adults too. These aggressive behaviors are not limited to males, as many female Dendrobatidae are also known to defend their own native territory very aggressively. Dendrobatidae are especially aggressive in defending regions that serve as male calling sites. Males wrestle with intruders of their territory in order to defend their calling sites as well as their vegetation. While vocalization and various behavioral displays serve as a way of exhibiting one's strength or fitness, territorial disputes and fights often escalate to physical combat and aggression. Physical violence and aggression are particularly common at times of calling. If an intruder is detected making calls in the territory of a Dendrobatidae frog, the resident frog attempts to eliminate the competition to claim the territory and the females in it for himself. The resident frog initially makes its presence known by means of vocalization and various behavioral displays as a way to exert dominance, but if this does not scare away the intruder, then the resident frog moves towards the intruder and strikes them. These encounters immediately escalate into a full-on fight where both strike each other and grasp each other's limbs. Similarly, the females also often get into fights and display aggressive behaviors in disputes over territory or a mating conflict. It has also been observed that females who are going after the same male, after hearing his call, chase each other down and wrestle to fight for the male. After a female courts with a male, she is also very likely to exhibit aggressive behavior towards any females that approach that male. Both males and females fight members of their own sex over mates in a fairly similar fashion.
Reproduction
Many species of poison dart frogs are dedicated parents. Many poison dart frogs in the genera Oophaga and Ranitomeya carry their newly hatched tadpoles into the canopy; the tadpoles stick to the mucus on the backs of their parents.
Once in the upper reaches of the rainforest trees, the parents deposit their young in the pools of water that accumulate in epiphytic plants, such as bromeliads. The tadpoles feed on invertebrates in their nursery, and their mother will even supplement their diet by depositing eggs into the water. Other poison frogs lay their eggs on the forest floor, hidden beneath the leaf litter. Poison frogs fertilize their eggs externally; the female lays a cluster of eggs and a male fertilizes them afterward, in the same manner as most fish. Poison frogs can often be observed clutching each other, similar to the manner most frogs copulate. However, these demonstrations are actually territorial wrestling matches. Both males and females frequently engage in disputes over territory. A male will fight for the most prominent roosts from which to broadcast his mating call; females fight over desirable nests, and even invade the nests of other females to devour competitors' eggs. The operational sex ratio in the poison dart frog family is mostly female biased. This leads to a few characteristic behaviors and traits found in organisms with an uneven sex ratio. In general, females have a choice of mate. In turn, males show brighter coloration, are territorial, and are aggressive toward other males. Females select mates based on coloration (mainly dorsal), calling perch location, and territory.
Mating behavior
Observations of the Dendrobatidae family suggest that males of the species typically make their mating call in the morning, between 6:30 am and 11:30 am. The males are usually on average one meter above the ground on limbs, trunks, stems, or logs of trees, so that their call travels further and so they can be seen by potential mates. The calls are signaled towards the stream where females are located. After the call is received, the female makes her way to the male and fertilization occurs. This observed fertilization is not accomplished through amplexus. Upon meeting, courtship is generally initiated by the female. The female strokes, climbs, and jumps on the male in tactile courtship and is by far the more active sex. The duration of courtship in poison frogs is long and females may occasionally reject males, even after an entire day of active pursuit. In the majority of cases, the males choose the oviposition site and lead the females there. In some Dendrobatidae species, such as the strawberry poison frog, visual cues under high light intensity are also used to identify individuals from the same population. Different species use different cues to identify individuals from their same population during mating and courtship.
Post-mating behavior
Typically in many species the larger portion of parental investment falls on the shoulders of the female sex, whereas the male sex has a much smaller portion. However, in the family Dendrobatidae many species have been observed to exhibit sex role reversal, in which the females compete for a limited number of males, the males are the choosers, and male parental investment is much larger than that of the females. This theory also holds that the female typically produces eggs at such a fast rate that the males cannot possibly take full care of them all, which then leads to some of the males becoming unreceptive. Dendrobatidae also exhibit the parental quality hypothesis.
This is where females that have mated with a male try to ensure that he mates with as few other individuals as possible, so that the number of offspring is limited and each individual offspring receives a larger portion of care, attention, and resources. However, this creates an interesting balance: with only a limited number of males available and many females competing for them in courtship, it is difficult to limit the number of individuals a male mates with. Whereas in many species competition is most prominent among the males, among the Dendrobatidae it is the opposite, as the females seem to have a great deal of competition among themselves for males. Females will even take drastic measures, resorting to destroying other females' eggs in order to make sure that the male they mated with remains receptive and is scared away from mating with other females.
Behavior as tadpoles
The poison dart frog is known for its aggressive and predatory behavior. As tadpoles, the individuals of the genus Dendrobates exhibit some unique cannibalistic tendencies, along with many other forms of predatory behavior. Dendrobates tadpoles that consumed three or more conspecific tadpoles and/or relatively large larvae of the mosquito Trichoprosopon digitatum, common in their environment, had a much higher growth rate and typically lived much longer lives. Predation and aggression could have been selected for and favored for a few reasons: one is that it eliminates predators, and a second is that it serves as a source of food in habitats that are low in resources. This predation could have evolved over time and led to cannibalism as another form of predatory behavior that benefited individuals' survival fitness. However, one general characteristic noted in Dendrobates tadpoles, including D. arboreus, D. granuliferus, D. lehmanni, D. occultator, D. pumilio, D. speciosus, and many other Dendrobates species, is that they have reduced mouthparts as young tadpoles, which typically limits their consumption to unfertilized eggs only. Thus, it can be assumed that the cannibalistic tendencies of Dendrobates are limited to their lifetime as tadpoles and do not cross over into their adult life.
Captive care
All species of poison dart frogs are Neotropical in origin. Wild-caught specimens can maintain toxicity for some time (which they obtain through a form of bioaccumulation), so appropriate care should be taken when handling them. While scientific study on the lifespan of poison dart frogs is scant, retagging frequencies indicate it can range from one to three years in the wild. However, these frogs typically live for much longer in captivity, having been reported to live as long as 25 years. These claims also seem to be questionable, since many of the larger species take a year or more to mature, and Phyllobates species can take more than two years. In captivity, most species thrive where the humidity is kept constant at 80 to 100% and where the temperature is around to during the day and no lower than to at night. Some species tolerate lower temperatures better than others.
Conservation status
Many species of poison dart frogs have recently experienced habitat loss, chytrid diseases, and collection for the pet trade. Some are listed as threatened or endangered as a result.
Zoos have tried to counteract this disease by treating captive frogs with an antifungal agent that is used to cure athlete's foot in humans. Threats Parasites Poison dart frogs suffer from parasites ranging from helminths to protozoans. Diseases Poison dart frogs suffer from chytridiomycosis, which is a deadly disease that is caused by the fungus Batrachochytrium dendrobatidis (Bd). This infection has been found in frogs from Colostethus and Dendrobates.
Biology and health sciences
Frogs and toads
Animals
816867
https://en.wikipedia.org/wiki/Fluconazole
Fluconazole
Fluconazole is an antifungal medication used for a number of fungal infections. This includes candidiasis, blastomycosis, coccidioidomycosis, cryptococcosis, histoplasmosis, dermatophytosis, and tinea versicolor. It is also used to prevent candidiasis in those who are at high risk, such as following organ transplantation, low birth weight babies, and those with low blood neutrophil counts. It is given either by mouth or by injection into a vein. Common side effects include vomiting, diarrhea, rash, and increased liver enzymes. Serious side effects may include liver problems, QT prolongation, and seizures. During pregnancy it may increase the risk of miscarriage, while large doses may cause birth defects. Fluconazole is in the azole antifungal family of medications. It is believed to work by affecting the fungal cellular membrane. Fluconazole was patented in 1981 and came into commercial use in 1988. It is on the World Health Organization's List of Essential Medicines. Fluconazole is available as a generic medication. In 2022, it was the 160th most commonly prescribed medication in the United States, with more than 3 million prescriptions.
Medical uses
Fluconazole is a first-generation triazole antifungal medication. It differs from earlier azole antifungals (such as ketoconazole) in that its structure contains a triazole ring instead of an imidazole ring. While the imidazole antifungals are mainly used topically, fluconazole and certain other triazole antifungals are preferred when systemic treatment is required because of their improved safety and predictable absorption when administered orally. Fluconazole's spectrum of activity includes most Candida species (but not Candida krusei or Candida glabrata), Cryptococcus neoformans, some dimorphic fungi, and dermatophytes, among others. Common uses include:
The treatment of non-systemic Candida infections of the vagina ("yeast infections"), throat, and mouth.
Certain systemic Candida infections in people with healthy immune systems, including infections of the bloodstream, kidney, or joints. Other antifungals are usually preferred when the infection is in the heart or central nervous system, and for the treatment of active infections in people with weak immune systems.
The prevention of Candida infections in people with weak immune systems, such as those neutropenic due to cancer chemotherapy, those with advanced HIV infections, transplant patients, and premature infants.
As a second-line agent for the treatment of cryptococcal meningoencephalitis, a fungal infection of the central nervous system.
Resistance
Antifungal resistance to drugs in the azole class tends to occur gradually over the course of prolonged drug therapy, resulting in clinical failure in immunocompromised patients (e.g., patients with advanced HIV receiving treatment for thrush or esophageal Candida infection). In C. albicans, resistance occurs by way of mutations in the ERG11 gene, which codes for 14α-demethylase. These mutations prevent the azole drug from binding, while still allowing binding of the enzyme's natural substrate, lanosterol. Development of resistance to one azole in this way will confer resistance to all drugs in the class. Another resistance mechanism employed by both C. albicans and C. glabrata is increasing the rate of efflux of the azole drug from the cell, by both ATP-binding cassette and major facilitator superfamily transporters. Other gene mutations are also known to contribute to development of resistance.
C. glabrata develops resistance by upregulating CDR genes, and resistance in C. krusei is mediated by reduced sensitivity of the target enzyme to inhibition by the agent. The full spectrum of fungal susceptibility and resistance to fluconazole can be found in the product data sheet. According to the US Centers for Disease Control and Prevention, fluconazole resistance among Candida strains in the US is about 7%.
Combating Resistance
The rising fungal resistance to fluconazole and related azole drugs spurs the need to find effective combative solutions swiftly. Rising resistance raises concerns since fluconazole is commonly used due to its inexpensiveness and ease of administration, according to the World Health Organization. One possible solution to counter the increasing prevalence of Candida infections (fungal infections caused by the yeast Candida) is combination antifungal therapy, combining natural components with commercial antifungal drugs to combat resistance. Research shows that natural substances can have specified interactions with cell components, increasing the intracellular concentration of associated antifungal drugs and their effectiveness. For example, Brazilian red propolis, an organic bee liquid, synergizes with fluconazole to combat common yeast infections such as C. parapsilosis and C. glabrata. The essential oil from Nectandra lanceolata, a tree species native to wet tropical biomes, plays a similar role with ciclopirox, another common antifungal. While combination therapy offers the benefits of faster and more extensive fungal eradication, including a reduced risk of resistance or tolerance, it also presents challenges. These include potential increases in toxicity, costs, and the need for standardized practices to test the efficacy of the combination. Therefore, it is crucial to critically evaluate the role of combination therapy. An alternative to combination therapy for those who have had prior exposure to azoles is the echinocandin class of antifungal drugs, recommended as the first method of treatment against invasive candidiasis. The three echinocandins currently licensed for medical use, namely anidulafungin, caspofungin, and micafungin, are potent against candidiasis that has grown resistant to fluconazole, because of the difference in their mechanism of action. However, resistance to echinocandins can still develop through point mutations within highly conserved regions of the FKS1 and FKS2 genes upon exposure to members of this class. These genes encode an enzyme called β-1,3-glucan synthase, responsible for building the yeast’s cell wall. Mutations in this enzyme reduce its sensitivity to antifungal medications that target it and affect the yeast’s ability to construct its cell wall. Another promising avenue is the integration of phage therapy, which has shown successful results in fungal therapies. Phages, viruses that infect microbes including fungi, exhibit potent antimicrobial effects against various resistant fungal strains, demonstrating remarkable specificity and efficacy. These viruses are integral components of diverse ecosystems, including the human microbiome. Their unique attributes, such as specificity, potency, compatibility with biological systems, and ability to kill fungi, make them attractive candidates for therapeutic interventions. However, challenges remain in terms of their production scalability, formulation, stability, and the emergence of fungal resistance, which hinders their widespread adoption.
Prior to clinical use, phages intended for therapy require thorough purification, characterization, and validation of their virulence. While further research is needed, phage therapy holds promise in the fight against antifungal resistance that other therapies struggle to address. Contraindications Fluconazole is contraindicated in patients who: drink alcohol; have known hypersensitivity to other azole medicines such as ketoconazole; are taking terfenadine, if 400 mg per day multidose of fluconazole is administered; are receiving concomitant quinidine, especially when fluconazole is administered in high dosages; or take SSRIs such as fluoxetine or sertraline. Side effects Adverse drug reactions associated with fluconazole therapy include: Common (≥1% of patients): rash, headache, dizziness, nausea, vomiting, abdominal pain, diarrhea, and/or elevated liver enzymes. Infrequent (0.1–1% of patients): anorexia, fatigue, constipation. Rare (<0.1% of patients): oliguria, hypokalaemia, paraesthesia, seizures, alopecia, Stevens–Johnson syndrome, thrombocytopenia, other blood dyscrasias, serious hepatotoxicity including liver failure, anaphylactic/anaphylactoid reactions. Very rare: prolonged QT interval, torsades de pointes. In 2011, the US FDA reported that treatment with chronic, high doses of fluconazole during the first trimester of pregnancy may be associated with a rare and distinct set of birth defects in infants. If taken during pregnancy it may result in harm. These cases of harm, however, were only in women who took large doses for most of the first trimester. Fluconazole is secreted in human milk at concentrations similar to plasma. Fluconazole therapy has been associated with QT interval prolongation, which may lead to serious cardiac arrhythmias. Thus, it is used with caution in patients with risk factors for prolonged QT interval, such as electrolyte imbalance or use of other drugs that may prolong the QT interval (particularly cisapride and pimozide). Some people are allergic to azoles, so those allergic to other azole drugs might be allergic to fluconazole as well; like other azole drugs, it can cause adverse side effects. Some azole drugs may disrupt estrogen production in pregnancy, affecting pregnancy outcome. Oral fluconazole is not associated with a significantly increased risk of birth defects overall; although it does increase the odds ratio of tetralogy of Fallot, the absolute risk is still low. Women using fluconazole during pregnancy have a 50% higher risk of spontaneous abortion. Fluconazole should not be taken with cisapride (Propulsid) due to the possibility of serious, even fatal, heart problems. In rare cases, severe allergic reactions including anaphylaxis may occur. Powder for oral suspension contains sucrose and should not be used in patients with hereditary fructose intolerance, glucose/galactose malabsorption, or sucrase-isomaltase deficiency. Capsules contain lactose and should not be given to patients with rare hereditary problems of galactose intolerance, Lapp lactase deficiency, or glucose-galactose malabsorption. Interactions Fluconazole is an inhibitor of the human cytochrome P450 system, particularly the isozyme CYP2C19 (CYP3A4 and CYP2C9 to a lesser extent). In theory, therefore, fluconazole decreases the metabolism and increases the concentration of any drug metabolised by these enzymes. In addition, its potential effect on the QT interval increases the risk of cardiac arrhythmia if used concurrently with other drugs that prolong the QT interval.
Berberine has been shown to exert synergistic effects with fluconazole even in drug-resistant Candida albicans infections. Fluconazole may increase the serum concentration of Erythromycin (Risk X: avoid combination). Pharmacology Pharmacodynamics Like other imidazole- and triazole-class antifungals, fluconazole inhibits the fungal cytochrome P450 enzyme 14α-demethylase. Mammalian demethylase activity is much less sensitive to fluconazole than fungal demethylase. This inhibition prevents the conversion of lanosterol to ergosterol, an essential component of the fungal cytoplasmic membrane, and subsequent accumulation of 14α-methyl sterols. Fluconazole is primarily fungistatic; however, it may be fungicidal against certain organisms in a dose-dependent manner, specifically Cryptococcus. Pharmacokinetics Following oral dosing, fluconazole is almost completely absorbed within two hours. Bioavailability is not significantly affected by the absence of stomach acid. Concentrations measured in the urine, tears, and skin are approximately 10 times the plasma concentration, whereas saliva, sputum, and vaginal fluid concentrations are approximately equal to the plasma concentration, following a standard dose range of between 100 mg and 400 mg per day. The elimination half-life of fluconazole follows zero order, and only 10% of elimination is due to metabolism, the remainder being excreted in urine and sweat. Patients with impaired renal function will be at risk of overdose. In a bulk powder form, it appears as a white crystalline powder, and it is very slightly soluble in water and soluble in alcohol. History Fluconazole was patented by Pfizer in 1981 in the United Kingdom and came into commercial use in 1988. Patent expirations occurred in 2004 and 2005.
Biology and health sciences
Antifungals
Health
817175
https://en.wikipedia.org/wiki/Biological%20dispersal
Biological dispersal
Biological dispersal refers to both the movement of individuals (animals, plants, fungi, bacteria, etc.) from their birth site to their breeding site ('natal dispersal') and the movement from one breeding site to another ('breeding dispersal'). Dispersal is also used to describe the movement of propagules such as seeds and spores. Technically, dispersal is defined as any movement that has the potential to lead to gene flow. The act of dispersal involves three phases: departure, transfer, and settlement. There are different fitness costs and benefits associated with each of these phases. Through simply moving from one habitat patch to another, the dispersal of an individual has consequences not only for individual fitness, but also for population dynamics, population genetics, and species distribution. Understanding dispersal and the consequences, both for evolutionary strategies at a species level and for processes at an ecosystem level, requires understanding of the type of dispersal, the dispersal range of a given species, and the dispersal mechanisms involved. Biological dispersal can be correlated with population density. The range of variation in a species' locations determines its expansion range. Biological dispersal may be contrasted with geodispersal, which is the mixing of previously isolated populations (or whole biotas) following the erosion of geographic barriers to dispersal or gene flow. Dispersal can be distinguished from animal migration (typically round-trip seasonal movement), although within population genetics, the terms 'migration' and 'dispersal' are often used interchangeably. Furthermore, biological dispersal is impacted and limited by different environmental and individual conditions. This leads to a wide range of consequences for the organisms present in the environment and their ability to adapt their dispersal methods to that environment. Types of dispersal Some organisms are motile throughout their lives, but others are adapted to move or be moved at precise, limited phases of their life cycles. This is commonly called the dispersive phase of the life cycle. The strategies of organisms' entire life cycles often are predicated on the nature and circumstances of their dispersive phases. In general, there are two basic types: Passive Dispersal (Density-Independent Dispersal) In passive dispersal, the organisms cannot move on their own but use other means to achieve successful reproduction or movement into new habitats. Organisms have evolved adaptations for dispersal that take advantage of various forms of kinetic energy occurring naturally in the environment. This can be done by taking advantage of water, wind, or an animal that is able to perform active dispersal itself. Some organisms are capable of movement while in their larval phase. This is common amongst some invertebrates, fish, insects and sessile organisms (such as plants) that depend on animal vectors, wind, gravity or current for dispersal. Invertebrates, like sea sponges and corals, pass gametes through water. In this way they are able to reproduce successfully because the sperm move around, while the eggs are moved by currents. Plants act in similar ways as they can also use water currents, winds, or moving animals to transport their gametes. Seeds, spores, and fruits can have certain adaptations that facilitate movement. Active Dispersal (Density-Dependent Dispersal) In active dispersal, an organism will move locations by its own inherent capabilities.
Age is not a restriction, as location change is common in both young and adult animals. The extent of dispersion is dependent on multiple factors, such as local population, resource competition, habitat quality, and habitat size. Due to this, many consider active dispersal to also be density-dependent, as the density of the community plays a major role in the movement of animals. However, the effect is observed in age groups differently, which results in diverse levels of dispersion. When it comes to active dispersal, animals capable of moving freely over large distances are at an advantage, whether by flying, swimming, or walking. Nonetheless, there are restrictions imposed by geographical location and habitat. Walking animals are at the biggest disadvantage when it comes to this, as they can be prone to being stopped by potential barriers. Although some terrestrial animals traveling by foot can travel great distances, walking uses more energy in comparison to flying or swimming, especially when passing through adverse conditions. Due to population density, dispersal may relieve pressure for resources in an ecosystem, and competition for these resources may be a selection factor for dispersal mechanisms. Dispersal of organisms is a critical process for understanding both geographic isolation in evolution through gene flow and the broad patterns of current geographic distributions (biogeography). A distinction is often made between natal dispersal where an individual (often a juvenile) moves away from the place it was born, and breeding dispersal where an individual (often an adult) moves away from one breeding location to breed elsewhere. Costs and benefits In the broadest sense, dispersal occurs when the fitness benefits of moving outweigh the costs. There are a number of benefits to dispersal such as locating new resources, escaping unfavorable conditions, avoiding competing with siblings, and avoiding breeding with closely related individuals, which could lead to inbreeding depression. There are also a number of costs associated with dispersal, which can be thought of in terms of four main currencies: energy, risk, time, and opportunity. Energetic costs include the extra energy required to move as well as energetic investment in movement machinery (e.g. wings). Risks include increased injury and mortality during dispersal and the possibility of settling in an unfavorable environment. Time spent dispersing is time that often cannot be spent on other activities such as growth and reproduction. Finally, dispersal can also lead to outbreeding depression if an individual is better adapted to its natal environment than the one it ends up in. In social animals (such as many birds and mammals) a dispersing individual must find and join a new group, which can lead to loss of social rank. Dispersal range "Dispersal range" refers to the distance a species can move from an existing population or the parent organism. An ecosystem depends critically on the ability of individuals and populations to disperse from one habitat patch to another. Therefore, biological dispersal is critical to the stability of ecosystems. Urban Environments and Dispersal Range Urban areas can be seen to have their own unique effects on the dispersal range and dispersal abilities of different organisms. For plant species, urban environments largely provide novel dispersal vectors. While animals and physical factors (i.e. wind, water, etc.)
have played a role in dispersal for centuries, motor vehicles have recently been considered as major dispersal vectors. Tunnels that connect rural and urban environments have been shown to expedite a large amount of and diverse set of seeds from urban to rural environments. This could lead to possible sources of invasive species on the urban-rural gradient. Another example of the effects of urbanization could be seen next to rivers. Urbanization has led to the introduction of different invasive species through direct planting or wind dispersal. In turn, rivers next to these invasive plant species have become vital dispersal vectors. Rivers could be seen to connect urban centers to rural and natural environments. Seeds from the invasive species were shown to be transported by the rivers to natural areas located downstream, thus building upon the already established dispersal distance of the plant. In contrast, urban environments can also provide limitations for certain dispersal strategies. Human influence through urbanization greatly affects the layout of landscapes, which leads to the limitation of dispersal strategies for many organisms. These changes have largely been exhibited through pollinator-flowering plant relationships. As the pollinator's optimal range of survival is limited, it leads to a limited supply of pollination sites. Subsequently, this leads to less gene flow between distantly separated populations, in turn decreasing the genetic diversity of each of the areas. Likewise, urbanization has been shown to impact the gene flow of distinctly different species (ex. mice and bats) in similar ways. While these two species may have different ecological niches and living strategies, urbanization limits the dispersal strategies of both species. This leads to genetic isolation of both populations, resulting in limited gene flow. While the urbanization did have a greater effect on mice dispersal, it also led to a slight increase in inbreeding among bat populations. Environmental constraints Few species are ever evenly or randomly distributed within or across landscapes. In general, species significantly vary across the landscape in association with environmental features that influence their reproductive success and population persistence. Spatial patterns in environmental features (e.g. resources) permit individuals to escape unfavorable conditions and seek out new locations. This allows the organism to "test" new environments for their suitability, provided they are within animal's geographic range. In addition, the ability of a species to disperse over a gradually changing environment could enable a population to survive extreme conditions. (i.e. climate change). As the climate changes, prey and predators have to adapt to survive. This poses a problem for many animals, for example, the Southern Rockhopper Penguins. These penguins are able to live and thrive in a variety of climates due to the penguins' phenotypic plasticity. However, they are predicted to respond by dispersal, not adaptation this time. This is explained due to their long life spans and slow microevolution. Penguins in the subantarctic have very different foraging behavior from those of subtropical waters; it would be very hard to survive by keeping up with the fast-changing climate because these behaviors took years to shape. Dispersal barriers A dispersal barrier may result in a dispersal range of a species much smaller than the species distribution. 
An artificial example is habitat fragmentation due to human land use. By contrast, natural barriers to dispersal that limit species distribution include mountain ranges and rivers. An example is the separation of the ranges of the two species of chimpanzee by the Congo River. On the other hand, human activities may also expand the dispersal range of a species by providing new dispersal methods (e.g., ballast water from ships). Many such dispersed species become invasive, like rats or stinkbugs, but some species also have a slightly positive effect to human settlers like honeybees and earthworms. Dispersal mechanisms Most animals are capable of locomotion and the basic mechanism of dispersal is movement from one place to another. Locomotion allows the organism to "test" new environments for their suitability, provided they are within the animal's range. Movements are usually guided by inherited behaviors. The formation of barriers to dispersal or gene flow between adjacent areas can isolate populations on either side of the emerging divide. The geographic separation and subsequent genetic isolation of portions of an ancestral population can result in allopatric speciation. Plant dispersal mechanisms Seed dispersal is the movement or transport of seeds away from the parent plant. Plants are limited by vegetative reproduction and consequently rely upon a variety of dispersal vectors to transport their propagules, including both abiotic and biotic vectors. Seeds can be dispersed away from the parent plant individually or collectively, as well as dispersed in both space and time. The patterns of seed dispersal are determined in large part by the specific dispersal mechanism, and this has important implications for the demographic and genetic structure of plant populations, as well as migration patterns and species interactions. There are five main modes of seed dispersal: gravity, wind, ballistic, water, and by animals. Animal dispersal mechanisms Non-motile animals There are numerous animal forms that are non-motile, such as sponges, bryozoans, tunicates, sea anemones, corals, and oysters. In common, they are all either marine or aquatic. It may seem curious that plants have been so successful at stationary life on land, while animals have not, but the answer lies in the food supply. Plants produce their own food from sunlight and carbon dioxide—both generally more abundant on land than in water. Animals fixed in place must rely on the surrounding medium to bring food at least close enough to grab, and this occurs in the three-dimensional water environment, but with much less abundance in the atmosphere. All of the marine and aquatic invertebrates whose lives are spent fixed to the bottom (more or less; anemones are capable of getting up and moving to a new location if conditions warrant) produce dispersal units. These may be specialized "buds", or motile sexual reproduction products, or even a sort of alteration of generations as in certain cnidaria. Corals provide a good example of how sedentary species achieve dispersion. Broadcast spawning corals reproduce by releasing sperm and eggs directly into the water. These release events are coordinated by the lunar phase in certain warm months, such that all corals of one or many species on a given reef will be released on the same single or several consecutive nights. The released eggs are fertilized, and the resulting zygote develops quickly into a multicellular planula. 
This motile stage then attempts to find a suitable substratum for settlement. Most are unsuccessful and die or are fed upon by zooplankton and bottom-dwelling predators such as anemones and other corals. However, untold millions are produced, and a few do succeed in locating spots of bare limestone, where they settle and transform by growth into a polyp. All things being favorable, the single polyp grows into a coral head by budding off new polyps to form a colony. Motile animals The majority of animals are motile. Motile animals can disperse themselves by their spontaneous and independent locomotive powers. For example, dispersal distances across bird species depend on their flight capabilities. On the other hand, small animals utilize the existing kinetic energies in the environment, resulting in passive movement. Dispersal by water currents is especially associated with the physically small inhabitants of marine waters known as zooplankton. The term plankton comes from the Greek, πλαγκτον, meaning "wanderer" or "drifter". Dispersal by dormant stages Many animal species, especially freshwater invertebrates, are able to disperse by wind or by transfer with an aid of larger animals (birds, mammals or fishes) as dormant eggs, dormant embryos or, in some cases, dormant adult stages. Tardigrades, some rotifers and some copepods are able to withstand desiccation as adult dormant stages. Many other taxa (Cladocera, Bryozoa, Hydra, Copepoda and so on) can disperse as dormant eggs or embryos. Freshwater sponges usually have special dormant propagules called gemmulae for such a dispersal. Many kinds of dispersal dormant stages are able to withstand not only desiccation and low and high temperature, but also action of digestive enzymes during their transfer through digestive tracts of birds and other animals, high concentration of salts, and many kinds of toxicants. Such dormant-resistant stages made possible the long-distance dispersal from one water body to another and broad distribution ranges of many freshwater animals. Quantifying dispersal Dispersal is most commonly quantified either in terms of rate or distance. Dispersal rate (also called migration rate in the population genetics literature) or probability describes the probability that any individual leaves an area or, equivalently, the expected proportion of individual to leave an area. The dispersal distance is usually described by a dispersal kernel which gives the probability distribution of the distance traveled by any individual. A number of different functions are used for dispersal kernels in theoretical models of dispersal including the negative exponential distribution, extended negative exponential distribution, normal distribution, exponential power distribution, inverse power distribution, and the two-sided power distribution. The inverse power distribution and distributions with 'fat tails' representing long-distance dispersal events (called leptokurtic distributions) are thought to best match empirical dispersal data. Consequences of dispersal Dispersal not only has costs and benefits to the dispersing individual (as mentioned above), it also has consequences at the level of the population and species on both ecological and evolutionary timescales. Organisms can be dispersed through multiple methods. Carrying through animals is especially effective as it allows traveling of far distances. Many plants depend on this to be able to go to new locations, preferably with conditions ideal for precreation and germination. 
With this, dispersal has a major influence in determining the population and spread of plant species. Many populations have patchy spatial distributions where separate yet interacting sub-populations occupy discrete habitat patches (see metapopulations). Dispersing individuals move between different sub-populations which increases the overall connectivity of the metapopulation and can lower the risk of stochastic extinction. If a sub-population goes extinct by chance, it is more likely to be recolonized if the dispersal rate is high. Increased connectivity can also decrease the degree of local adaptation. Human interference with the environment has been seen to have an effect on dispersal. Some of these occurrences have been accidents, like in the case of zebra mussels, which are indigenous to Southeast Russia. A ship had accidentally released them into the North American Great Lakes and they became a major nuisance in the area, as they began to clog water treatment and power plants. Another case of this was seen in Chinese bighead and silver carp, which were brought in with the purpose of algae control in many catfish ponds across the U.S. Unfortunately, some had managed to escape into the neighboring Mississippi, Missouri, Illinois, and Ohio rivers, eventually causing a negative impact for the surrounding ecosystems. However, human-created habitats such as urban environments have allowed certain migrated species to become urbanophiles or synanthropes. Dispersal has caused changes to many species on a genetic level. A positive correlation has been seen for differentiation and diversification of certain species of spiders in the Canary Islands. These spiders reside on archipelagos and islands. Dispersion was identified as a key factor in the rate of both occurrences. Human-Mediated Dispersal Human impact has had a major influence on the movement of animals through time. An environmental response occurs due to this, as dispersal patterns are important for species to survive major changes. There are two forms of human-mediated dispersal: Human-Vectored Dispersal (HVD) In Human-Vectored Dispersal, humans directly move the organism. This can occur deliberately, such as for use of the animal in an agricultural setting, for hunting, or for other purposes. However, it can also occur accidentally, if the organism attaches itself to a person or vehicle. For this process, the organism first has to come in contact with a human and then movement can start. This has become more common as the human population all over the world has increased and movement around the world has become more prevalent. Dispersal via humans can carry an organism many times farther than movement by wild or other environmental means. Human-Altered Dispersal (HAD) Human-Altered Dispersal signifies the effects that have occurred due to human interference with landscapes and animals. Many of these interferences have caused negative consequences in the environment. For example, many areas have suffered habitat loss, which in turn can have a negative effect on dispersal. Researchers have found that due to this, animals have been reported to move further distances in an attempt to find isolated places. This can especially be seen through construction of roads and other infrastructures in remote areas. Long-distance dispersals are observed when seeds are carried through human vectors.
A study was conducted to test the effects of human-mediated dispersal of seeds over long distances in two species of Brassica in England. The main methods of dispersal compared were movement by wind versus movement by attachment to outerwear. It was concluded that shoes were able to transport seeds to further distances than would be achievable through wind alone. It was noted that some seeds were able to stay on the shoes for long periods of time, about 8 hours of walking, but eventually came off. Due to this, the seeds were able to travel far distances and settle into new areas which they had not previously inhabited. However, it is also important that the seeds land in places where they are able to stick and grow. Specific shoe size did not seem to have an effect on prevalence. Dispersal observation methods Biological dispersal can be observed using different methods. To study the effects of dispersal, observers use the methods of landscape genetics. This allows scientists to relate population variation to climate as well as to the size and shape of the landscape. An example of the use of landscape genetics to study seed dispersal involves studying the effects of traffic using motorway tunnels between inner cities and suburban areas. Genome-wide SNP datasets and species distribution modelling are examples of computational methods used to examine different dispersal modes. A genome-wide SNP dataset can be used to determine the genomic and demographic history within the range of collection or observation [Reference needed]. Species distribution models are used when scientists wish to determine which region is best suited for the species under observation [Reference needed]. Methods such as these are used to understand the criteria the environment provides when migration and settlement occur, such as in cases of biological invasion. Human-aided dispersal, an example of an anthropogenic effect, can contribute to biological dispersal ranges and variations. Informed dispersal is a way to observe the cues of biological dispersal, suggesting the reasoning behind where organisms settle. This concept implies that the movement of species also involves information transfer. Methods such as GPS location are used to monitor the social cues and mobility of species regarding habitat selection. GPS radio-collars can be used when collecting data on social animals such as meerkats. Consensus data such as detailed trip records and point of interest (POI) data, which can be used to predict the movement of humans from rural to urban areas, are examples of informed dispersal [Reference needed]. Direct tracking or visual tracking allows scientists to monitor seed dispersal by color coding. Scientists and observers can track the migration of individuals through the landscape. The pattern of transportation can then be visualized to reflect the range in which the organism expands.
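The "Quantifying dispersal" discussion above describes dispersal kernels such as the negative exponential distribution, which give the probability of an individual travelling a given distance. The sketch below illustrates that idea in Python; the mean dispersal distance and the distance to a neighbouring patch are illustrative values assumed here, not figures taken from any study cited in this article.

```python
# Minimal sketch: sampling from a negative-exponential dispersal kernel and
# estimating how often dispersers travel at least as far as a neighbouring
# habitat patch. All parameter values below are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(seed=42)

mean_dist = 120.0      # assumed mean dispersal distance (metres)
patch_distance = 300.0 # assumed distance to the nearest neighbouring patch (metres)
n = 100_000            # number of simulated dispersal events

# Negative exponential kernel: dispersal distances are exponentially distributed.
distances = rng.exponential(scale=mean_dist, size=n)

# Empirical probability of dispersing at least as far as the next patch.
p_reach = np.mean(distances >= patch_distance)

# Analytical tail probability for comparison: P(D >= d) = exp(-d / mean).
p_reach_exact = np.exp(-patch_distance / mean_dist)

print(f"simulated  P(reach next patch) = {p_reach:.4f}")
print(f"analytical P(reach next patch) = {p_reach_exact:.4f}")
```

A fat-tailed (leptokurtic) kernel such as an inverse power distribution would assign a much higher probability to such long-distance events, which is why those kernels are thought to better match empirical dispersal data.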
Biology and health sciences
Ecology
Biology
818377
https://en.wikipedia.org/wiki/Moons%20of%20Uranus
Moons of Uranus
Uranus, the seventh planet of the Solar System, has 28 confirmed moons. The 27 with names are named after characters that appear in, or are mentioned in, William Shakespeare's plays and Alexander Pope's poem The Rape of the Lock. Uranus's moons are divided into three groups: thirteen inner moons, five major moons, and ten irregular moons. The inner and major moons all have prograde orbits and are cumulatively classified as regular moons. In contrast, the orbits of the irregular moons are distant, highly inclined, and mostly retrograde. The inner moons are small dark bodies that share common properties and origins with Uranus's rings. The five major moons are ellipsoidal, indicating that they reached hydrostatic equilibrium at some point in their past (and may still be in equilibrium), and four of them show signs of internally driven processes such as canyon formation and volcanism on their surfaces. The largest of these five, Titania, is 1,578 km in diameter and the eighth-largest moon in the Solar System, about one-twentieth the mass of the Earth's Moon. The orbits of the regular moons are nearly coplanar with Uranus's equator, which is tilted 97.77° to its orbit. Uranus's irregular moons have elliptical and strongly inclined (mostly retrograde) orbits at large distances from the planet. William Herschel discovered the first two moons, Titania and Oberon, in 1787. The other three ellipsoidal moons were discovered in 1851 by William Lassell (Ariel and Umbriel) and in 1948 by Gerard Kuiper (Miranda). These five may be in hydrostatic equilibrium. The remaining moons were discovered after 1985, either during the Voyager 2 flyby mission or with the aid of advanced Earth-based telescopes. Discovery The first two moons to be discovered were Titania and Oberon, which were spotted by Sir William Herschel on January 11, 1787, six years after he had discovered the planet itself. Later, Herschel thought he had discovered up to six moons (see below) and perhaps even a ring. For nearly 50 years, Herschel's instrument was the only one with which the moons had been seen. In the 1840s, better instruments and a more favorable position of Uranus in the sky led to sporadic indications of satellites additional to Titania and Oberon. Eventually, the next two moons, Ariel and Umbriel, were discovered by William Lassell in 1851. The Roman numbering scheme of Uranus's moons was in a state of flux for a considerable time, and publications hesitated between Herschel's designations (where Titania and Oberon are Uranus II and IV) and William Lassell's (where they are sometimes I and II). With the confirmation of Ariel and Umbriel, Lassell numbered the moons I through IV from Uranus outward, and this finally stuck. In 1852, Herschel's son John Herschel gave the four then-known moons their names. No other discoveries were made for almost another century. In 1948, Gerard Kuiper at the McDonald Observatory discovered the smallest and the last of the five large, spherical moons, Miranda. Decades later, the flyby of the Voyager 2 space probe in January 1986 led to the discovery of ten further inner moons. Another satellite, Perdita, was discovered in 1999 by Erich Karkoschka after studying old Voyager photographs. Uranus was the last giant planet without any known irregular moons until 1997, when astronomers using ground-based telescopes discovered Sycorax and Caliban. 
From 1999 to 2003, astronomers continued searching for irregular moons of Uranus using more powerful ground-based telescopes, resulting in the discovery of seven more Uranian irregular moons. In addition, two small inner moons, Cupid and Mab, were discovered using the Hubble Space Telescope in 2003. No other discoveries were made until 2021 and 2023, when Scott Sheppard and colleagues discovered one more irregular moon of Uranus (and five more candidates waiting to be announced) using the Subaru Telescope at Mauna Kea, Hawaii. Spurious moons After Herschel discovered Titania and Oberon on 11 January 1787, he subsequently believed that he had observed four other moons: two on 18 January and 9 February 1790, and two more on 28 February and 26 March 1794. It was thus believed for many decades thereafter that Uranus had a system of six satellites, though the four latter moons were never confirmed by any other astronomer. Lassell's observations of 1851, in which he discovered Ariel and Umbriel, however, failed to support Herschel's observations; Ariel and Umbriel, which Herschel certainly ought to have seen if he had seen any satellites besides Titania and Oberon, did not correspond to any of Herschel's four additional satellites in orbital characteristics. Herschel's four spurious satellites were thought to have sidereal periods of 5.89 days (interior to Titania), 10.96 days (between Titania and Oberon), 38.08 days, and 107.69 days (exterior to Oberon). It was therefore concluded that Herschel's four satellites were spurious, probably arising from the misidentification of faint stars in the vicinity of Uranus as satellites, and the credit for the discovery of Ariel and Umbriel was given to Lassell. Names Although the first two Uranian moons were discovered in 1787, they were not named until 1852, a year after two more moons had been discovered. The responsibility for naming was taken by John Herschel, son of the discoverer of Uranus. Herschel, instead of assigning names from Greek mythology, named the moons after magical spirits in English literature: the fairies Oberon and Titania from William Shakespeare's A Midsummer Night's Dream, and the sylph Ariel and gnome Umbriel from Alexander Pope's The Rape of the Lock (Ariel is also a sprite in Shakespeare's The Tempest). It is uncertain if John Herschel was the originator of the names, or if it was instead William Lassell (who discovered Ariel and Umbriel) who chose the names and asked Herschel for permission. Subsequent names, rather than continuing the airy spirits theme (only Puck and Mab continued the trend), have focused on Herschel's source material. In 1949, the fifth moon, Miranda, was named by its discoverer Gerard Kuiper after a thoroughly mortal character in Shakespeare's The Tempest. The current IAU practice is to name moons after characters from Shakespeare's plays and The Rape of the Lock (although at present only Ariel, Umbriel, and Belinda have names drawn from the latter; all the rest are from Shakespeare). The outer retrograde moons are all named after characters from one play, The Tempest; the sole known outer prograde moon, Margaret, is named from Much Ado About Nothing. Some asteroids, also named after the same Shakespearean characters, share names with moons of Uranus: 171 Ophelia, 218 Bianca, 593 Titania, 666 Desdemona, 763 Cupido, and 2758 Cordelia. Characteristics and groups The Uranian satellite system is the least massive among those of the giant planets. 
Indeed, the combined mass of the five major satellites is less than half that of Triton (the seventh-largest moon in the Solar System) alone. The largest of the satellites, Titania, has a radius of 788.9 km, or less than half that of the Moon, but slightly more than that of Rhea, the second-largest moon of Saturn, making Titania the eighth-largest moon in the Solar System. Uranus is about 10,000 times more massive than its moons. Inner moons Uranus is known to have 13 inner moons, whose orbits all lie inside that of Miranda. The inner moons are classified into two groups based on similar orbital distances: these are the Portia group, which includes the six moons Bianca, Cressida, Desdemona, Juliet, Portia, and Rosalind; and the Belinda group, which includes the three moons Cupid, Belinda, and Perdita. All of the inner moons are intimately connected with the rings of Uranus, which probably resulted from the fragmentation of one or several small inner moons. The two innermost moons, Cordelia and Ophelia, are shepherds of Uranus's ε ring, whereas the small moon Mab is a source of Uranus's outermost μ ring. There may be two additional small (2–7 km in radius) undiscovered shepherd moons located about 100 km exterior to Uranus's α and β rings. At 162 km, Puck is the largest of the inner moons of Uranus and the only one imaged by Voyager 2 in any detail. Puck and Mab are the two outermost inner satellites of Uranus. All inner moons are dark objects; their geometrical albedo is less than 10%. They are composed of water ice contaminated with a dark material, probably radiation-processed organics. The inner moons constantly perturb each other, especially within the closely-packed Portia and Belinda groups. The system is chaotic and apparently unstable. Simulations show that the moons may perturb each other into crossing orbits, which may eventually result in collisions between the moons. Desdemona may collide with Cressida within the next million years, and Cupid will likely collide with Belinda in the next 10 million years; Perdita and Juliet may be involved in later collisions. Because of this, the rings and inner moons may be under constant flux, with moons colliding and re-accreting on short timescales. Large moons Uranus has five major moons: Miranda, Ariel, Umbriel, Titania, and Oberon. They range in diameter from 472 km for Miranda to 1578 km for Titania. All these moons are relatively dark objects: their geometrical albedo varies between 30 and 50%, whereas their Bond albedo is between 10 and 23%. Umbriel is the darkest moon and Ariel the brightest. The masses of the moons range from 6.7×10¹⁹ kg (Miranda) to 3.5×10²¹ kg (Titania). For comparison, the Moon has a mass of 7.5×10²² kg. The major moons of Uranus are thought to have formed in the accretion disc, which existed around Uranus for some time after its formation or resulted from a large impact suffered by Uranus early in its history. This view is supported by their large thermal inertia, a surface property they share with dwarf planets like Pluto and Haumea. It differs strongly from the thermal behaviour of the Uranian irregular moons, which is comparable to that of classical trans-Neptunian objects. This suggests a separate origin. All major moons comprise approximately equal amounts of rock and ice, except Miranda, which is made primarily of ice. The ice component may include ammonia and carbon dioxide.
Their surfaces are heavily cratered, though all of them (except Umbriel) show signs of endogenic resurfacing in the form of lineaments (canyons) and, in the case of Miranda, ovoid, race-track-like structures called coronae. Extensional processes associated with upwelling diapirs are likely responsible for the origin of the coronae. Ariel appears to have the youngest surface with the fewest impact craters, while Umbriel's appears oldest. A past 3:1 orbital resonance between Miranda and Umbriel and a past 4:1 resonance between Ariel and Titania are thought to be responsible for the heating that caused substantial endogenic activity on Miranda and Ariel. One piece of evidence for such a past resonance is Miranda's unusually high orbital inclination (4.34°) for a body so close to the planet. The largest Uranian moons may be internally differentiated, with rocky cores at their centers surrounded by ice mantles. Titania and Oberon may harbor liquid water oceans at the core/mantle boundary. The major moons of Uranus are airless bodies. For instance, Titania was shown to possess no atmosphere at a pressure larger than 10–20 nanobars. The path of the Sun in the local sky over the course of a local day during Uranus's and its major moons' summer solstice is quite different from that seen on most other Solar System worlds. The major moons have almost exactly the same rotational axial tilt as Uranus (their axes are parallel to that of Uranus). The Sun would appear to follow a circular path around Uranus's celestial pole in the sky, at the closest about 7 degrees from it, during the hemispheric summer. Near the equator, it would be seen nearly due north or due south (depending on the season). At latitudes higher than 7°, the Sun would trace a circular path about 15 degrees in diameter in the sky, and never set during the hemispheric summer, moving to a position over the celestial equator during the Uranian equinox, and then remaining invisible below the horizon during the hemispheric winter. Irregular moons Uranus's irregular moons range in size from about 120–200 km (Sycorax) down to under 10 km (S/2023 U 1). Due to the small number of known Uranian irregular moons, it is not yet clear which of them belong to groups with similar orbital characteristics. The only known group among Uranus's irregular moons is the Caliban group, which is clustered at similar orbital distances and at inclinations between 141° and 144°. The Caliban group includes three retrograde moons, which are Caliban, S/2023 U 1, and Stephano. The intermediate inclinations 60° < i < 140° are devoid of known moons due to the Kozai instability. In this instability region, solar perturbations at apoapse cause the moons to acquire large eccentricities that lead to collisions with inner satellites or ejection. The lifetime of moons in the instability region is from 10 million to a billion years. Margaret is the only known irregular prograde moon of Uranus, and it has one of the most eccentric orbits of any moon in the Solar System. List The Uranian moons are listed here by orbital period, from shortest to longest. Moons massive enough for their surfaces to have collapsed into a spheroid are highlighted in light blue and bolded. The inner and major moons all have prograde orbits. Irregular moons with retrograde orbits are shown in dark grey. Margaret, the only known irregular moon of Uranus with a prograde orbit, is shown in light grey.
The orbits and mean distances of the irregular moons are variable over short timescales due to frequent planetary and solar perturbations; therefore, the listed orbital elements of all irregular moons are averaged over an 8,000-year numerical integration by Brozović and Jacobson (2009). These may differ from osculating orbital elements provided by other sources. The orbital elements of major moons listed here are based on the epoch of 1 January 2000, while orbital elements of irregular satellites are based on the epoch of 1 January 2020.
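The article states above that Uranus is about 10,000 times more massive than its moons and gives masses for Miranda and Titania. The short sketch below checks that order-of-magnitude figure; the masses of Ariel, Umbriel, and Oberon and the mass of Uranus itself are not given in the text and are rough literature values assumed here for illustration.

```python
# Minimal sketch: checking the "Uranus is about 10,000 times more massive than
# its moons" statement. Miranda and Titania masses are taken from the text;
# the remaining values are approximate, assumed figures.
major_moon_masses_kg = {
    "Miranda": 6.7e19,   # from the text
    "Ariel":   1.25e21,  # assumed
    "Umbriel": 1.28e21,  # assumed
    "Titania": 3.5e21,   # from the text
    "Oberon":  3.0e21,   # assumed
}
uranus_mass_kg = 8.68e25  # assumed

total_moon_mass = sum(major_moon_masses_kg.values())
ratio = uranus_mass_kg / total_moon_mass

print(f"combined major-moon mass ≈ {total_moon_mass:.2e} kg")
print(f"Uranus-to-moons mass ratio ≈ {ratio:,.0f}")  # comes out on the order of 10,000
```

The small inner and irregular moons contribute negligibly to the total, so including only the five major moons is sufficient for this rough check.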
Physical sciences
Solar System
Astronomy
818487
https://en.wikipedia.org/wiki/Moons%20of%20Neptune
Moons of Neptune
The planet Neptune has 16 known moons, which are named for minor water deities and a water creature in Greek mythology. By far the largest of them is Triton, discovered by William Lassell on 10 October 1846, 17 days after the discovery of Neptune itself. Over a century passed before the discovery of the second natural satellite, Nereid, in 1949, and another 40 years passed before Proteus, Neptune's second-largest moon, was discovered in 1989. Triton is unique among moons of planetary mass in that its orbit is retrograde to Neptune's rotation and inclined relative to Neptune's equator, which suggests that it did not form in orbit around Neptune but was instead gravitationally captured by it. The next-largest satellite in the Solar System suspected to be captured, Saturn's moon Phoebe, has only 0.03% of Triton's mass. The capture of Triton, probably occurring some time after Neptune formed a satellite system, was a catastrophic event for Neptune's original satellites, disrupting their orbits so that they collided to form a rubble disc. Triton is massive enough to have achieved hydrostatic equilibrium and to retain a thin atmosphere capable of forming clouds and hazes. Inward of Triton are seven small regular satellites, all of which have prograde orbits in planes that lie close to Neptune's equatorial plane; some of these orbit among Neptune's rings. The largest of them is Proteus. They were re-accreted from the rubble disc generated after Triton's capture after the Tritonian orbit became circular. Neptune also has eight more outer irregular satellites other than Triton, including Nereid, whose orbits are much farther from Neptune and at high inclination: three of these have prograde orbits, while the remainder have retrograde orbits. In particular, Nereid has an unusually close and eccentric orbit for an irregular satellite, suggesting that it may have once been a regular satellite that was significantly perturbed to its current position when Triton was captured. Neptune's outermost moon S/2021 N 1, which has an orbital period of about 27 Earth years, orbits farther from its planet than any other known moon in the Solar System. History Discovery Triton was discovered by William Lassell in 1846, just seventeen days after the discovery of Neptune. Nereid was discovered by Gerard P. Kuiper in 1949. The third moon, later named Larissa, was first observed by Harold J. Reitsema, William B. Hubbard, Larry A. Lebofsky and David J. Tholen on 24 May 1981. The astronomers were observing a star's close approach to Neptune, looking for rings similar to those discovered around Uranus four years earlier. If rings were present, the star's luminosity would decrease slightly just before the planet's closest approach. The star's luminosity dipped only for several seconds, which meant that it was due to a moon rather than a ring. No further moons were found until Voyager 2 flew by Neptune in 1989. Voyager 2 rediscovered Larissa and discovered five inner moons: Naiad, Thalassa, Despina, Galatea and Proteus. In 2001, two surveys using large ground-based telescopes found five additional outer irregular moons, bringing the total to thirteen. Follow-up surveys by two teams in 2002 and 2003 respectively re-observed all five of these moons, which are Halimede, Sao, Psamathe, Laomedeia, and Neso. The 2002 survey also found a sixth moon, but it could not be re-observed enough times to determine its orbit, and it thus became lost. In 2013 Mark R. 
Showalter discovered Hippocamp while examining Hubble Space Telescope images of Neptune's ring arcs from 2009. He used a technique similar to panning to compensate for orbital motion and allow stacking of multiple images to bring out faint details. After deciding on a whim to expand the search area to radii well beyond the rings, he found an unambiguous dot that represented the new moon. He then found it repeatedly in other archival HST images going back to 2004. Voyager 2, which had observed all of Neptune's other inner satellites, did not detect it during its 1989 flyby, due to its dimness. In 2021, Scott S. Sheppard and colleagues used the Subaru Telescope at Mauna Kea, Hawaii and discovered two more irregular moons of Neptune, which were announced in 2024. These two moons are provisionally designated S/2021 N 1 and S/2002 N 5. The latter turned out to be a recovery of the lost moon from 2002. Names Triton did not have an official name until the twentieth century. The name "Triton" was suggested by Camille Flammarion in his 1880 book Astronomie Populaire, but it did not come into common use until at least the 1930s. Until this time it was usually simply known as "the satellite of Neptune". Other moons of Neptune are also named for Greek and Roman water gods, in keeping with Neptune's position as god of the sea: either from Greek mythology, usually children of Poseidon, the Greek equivalent of Neptune (Triton, Proteus, Despina, Thalassa); lovers of Poseidon (Larissa); other mythological creatures related to Poseidon (Hippocamp); classes of minor Greek water deities (Naiad, Nereid); or specific Nereids (Halimede, Galatea, Neso, Sao, Laomedeia, Psamathe). For the "normal" irregular satellites, the general convention is to use names ending in "a" for prograde satellites, names ending in "e" for retrograde satellites, and names ending in "o" for exceptionally inclined satellites, exactly like the convention for the moons of Jupiter. Two asteroids share the same names as moons of Neptune: 74 Galatea and 1162 Larissa. Characteristics The moons of Neptune can be divided into two groups: regular and irregular. The first group includes the seven inner moons, which follow circular prograde orbits lying in the equatorial plane of Neptune. The second group consists of all nine other moons including Triton. They generally follow inclined eccentric and often retrograde orbits far from Neptune; the only exception is Triton, which orbits close to the planet following a circular orbit, though retrograde and inclined. Regular moons In order of distance from Neptune, the regular moons are Naiad, Thalassa, Despina, Galatea, Larissa, Hippocamp, and Proteus. All but the outer two are within Neptune-synchronous orbit (Neptune's rotational period is 0.6713 day or 16 hours) and thus are being tidally decelerated. Naiad, the closest regular moon, is also the second smallest among the inner moons (following the discovery of Hippocamp), whereas Proteus is the largest regular moon and the second largest moon of Neptune. The first five moons orbit much faster than Neptune's rotation itself ranging from 7 hours for Naiad and Thalassa, to 13 hours for Larissa. The inner moons are closely associated with Neptune's rings. The two innermost satellites, Naiad and Thalassa, orbit between the Galle and LeVerrier rings. Despina may be a shepherd moon of the LeVerrier ring, because its orbit lies just inside this ring. The next moon, Galatea, orbits just inside the most prominent of Neptune's rings, the Adams ring. 
This ring is very narrow, with a width not exceeding 50 km, and has five embedded bright arcs. The gravity of Galatea helps confine the ring particles within a limited region in the radial direction, maintaining the narrow ring. Various resonances between the ring particles and Galatea may also have a role in maintaining the arcs. Only the two largest regular moons have been imaged with a resolution sufficient to discern their shapes and surface features. Larissa, about 200 km in diameter, is elongated. Proteus is not significantly elongated, but not fully spherical either: it resembles an irregular polyhedron, with several flat or slightly concave facets 150 to 250 km in diameter. At about 400 km in diameter, it is larger than the Saturnian moon Mimas, which is fully ellipsoidal. This difference may be due to a past collisional disruption of Proteus. The surface of Proteus is heavily cratered and shows a number of linear features. Its largest crater, Pharos, is more than 150 km in diameter. All of Neptune's inner moons are dark objects: their geometric albedo ranges from 7 to 10%. Their spectra indicate that they are made from water ice contaminated by some very dark material, probably complex organic compounds. In this respect, the inner Neptunian moons are similar to the inner Uranian moons. Irregular moons In order of their distance from the planet, the irregular moons are Triton, Nereid, Halimede, Sao, S/2002 N 5, Laomedeia, Psamathe, Neso, and S/2021 N 1, a group that includes both prograde and retrograde objects. The seven outermost moons are similar to the irregular moons of other giant planets, and are thought to have been gravitationally captured by Neptune, unlike the regular satellites, which probably formed in situ. Triton and Nereid are unusual irregular satellites and are thus treated separately from the other seven irregular Neptunian moons, which are more like the outer irregular satellites of the other outer planets. Firstly, they are the largest two known irregular moons in the Solar System, with Triton being almost an order of magnitude larger than all other known irregular moons. Secondly, they both have atypically small semi-major axes, with Triton's being over an order of magnitude smaller than those of all other known irregular moons. Thirdly, they both have unusual orbital eccentricities: Nereid has one of the most eccentric orbits of any known irregular satellite, and Triton's orbit is a nearly perfect circle. Finally, Nereid also has the lowest inclination of any known irregular satellite. Triton Triton follows a retrograde and quasi-circular orbit, and is thought to be a gravitationally captured satellite. It was the second moon in the Solar System that was discovered to have a substantial atmosphere, which is primarily nitrogen with small amounts of methane and carbon monoxide. The pressure on Triton's surface is about 14 μbar. In 1989 the Voyager 2 spacecraft observed what appeared to be clouds and hazes in this thin atmosphere. Triton is one of the coldest bodies in the Solar System, with a surface temperature of about 38 K (−235 °C). Its surface is covered by nitrogen, methane, carbon dioxide and water ices and has a high geometric albedo of more than 70%. The Bond albedo is even higher, reaching up to 90%. Surface features include the large southern polar cap, older cratered plains cross-cut by graben and scarps, as well as youthful features probably formed by endogenic processes like cryovolcanism.
Voyager 2 observations revealed a number of active geysers within the polar cap heated by the Sun, which eject plumes to the height of up to 8 km. Triton has a relatively high density of about 2 g/cm3 indicating that rocks constitute about two thirds of its mass, and ices (mainly water ice) the remaining one third. There may be a layer of liquid water deep inside Triton, forming a subterranean ocean. Because of its retrograde orbit and relative proximity to Neptune (closer than the Moon is to Earth), tidal deceleration is causing Triton to spiral inward, which will lead to its destruction in about 3.6 billion years. Nereid Nereid is the third-largest moon of Neptune. It has a prograde but very eccentric orbit and is believed to be a former regular satellite that was scattered to its current orbit through gravitational interactions during Triton's capture. Water ice has been spectroscopically detected on its surface. Early measurements of Nereid showed large, irregular variations in its visible magnitude, which were speculated to be caused by forced precession or chaotic rotation combined with an elongated shape and bright or dark spots on the surface. This was disproved in 2016, when observations from the Kepler space telescope showed only minor variations. Thermal modeling based on infrared observations from the Spitzer and Herschel space telescopes suggest that Nereid is only moderately elongated which disfavours forced precession of the rotation. The thermal model also indicates that the surface roughness of Nereid is very high, likely similar to the Saturnian moon Hyperion. Nereid dominates the normal irregular satellites of Neptune, having about 98% of the mass of Neptune's entire irregular satellite system altogether (if Triton is not counted). This is similar to the situation of Phoebe at Saturn. If it is counted as a normal irregular satellite (but not Triton), then Nereid is also by far the largest normal irregular satellite known, having about two-thirds the mass of all normal irregular moons combined. Normal irregular moons Among the remaining irregular moons, Sao, S/2002 N 5, and Laomedeia follow prograde orbits, whereas Halimede, Psamathe, Neso and S/2021 N 1 follow retrograde orbits. There are at least two groups of moons that share similar orbits, with the prograde moons Sao, S/2002 N 5, and Laomedeia belonging to the Sao group and the retrograde moons Psamathe, Neso, and S/2021 N 1 belonging to the Neso group. The moons of the Neso group have the largest orbits of any natural satellites discovered in the Solar System to date, with average orbital distances over 125 times the distance between Earth and the Moon and orbital periods over 25 years. Neptune has the largest Hill sphere in the Solar System, owing primarily to its large distance from the Sun; this allows it to retain control of such distant moons. Nevertheless, the Jovian moons in the Carme and Pasiphae groups orbit at a greater percentage of their primary's Hill radius than the Neso group moons. Formation The mass distribution of the Neptunian moons is the most lopsided of the satellite systems of the giant planets in the Solar System. One moon, Triton, makes up nearly all of the mass of the system, with all other moons together comprising only one third of one percent. This is similar to the moon system of Saturn, where Titan makes up more than 95% of the total mass, but is different from the more balanced systems of Jupiter and Uranus. 
The reason for the lopsidedness of the present Neptunian system is that Triton was captured well after the formation of Neptune's original satellite system, and experts conjecture much of the system was destroyed in the process of capture. Triton's orbit upon capture would have been highly eccentric, and would have caused chaotic perturbations in the orbits of the original inner Neptunian satellites, causing them to collide and reduce to a disc of rubble. This means it is likely that Neptune's present inner satellites are not the original bodies that formed with Neptune. Only after Triton's orbit became circularised could some of the rubble re-accrete into the present-day regular moons. The mechanism of Triton's capture has been the subject of several theories over the years. One of them postulates that Triton was captured in a three-body encounter. In this scenario, Triton is the surviving member of a binary Kuiper belt object disrupted by its encounter with Neptune. Numerical simulations show that there is a 0.41 probability that the moon Halimede collided with Nereid at some time in the past. Although it is not known whether any collision has taken place, both moons appear to have similar ("grey") colors, implying that Halimede could be a fragment of Nereid. List The Neptunian moons are listed here by orbital period, from shortest to longest. Irregular (captured) moons are marked by color. The orbits and mean distances of the irregular moons are variable over short timescales due to frequent planetary and solar perturbations, therefore the listed orbital elements of all irregular moons are averaged over a 30,000-year period: these may differ from osculating orbital elements provided by other sources. Their orbital elements are all based on the epoch of 1 January 2020. Triton, the only Neptunian moon massive enough for its surface to have collapsed into a spheroid, is emboldened.
Physical sciences
Solar System
Astronomy
19773328
https://en.wikipedia.org/wiki/Mollusca
Mollusca
Mollusca is a phylum of protostomic invertebrate animals, whose members are known as molluscs or mollusks (). Around 76,000 extant species of molluscs are recognized, making it the second-largest animal phylum after Arthropoda. The number of additional fossil species is estimated between 60,000 and 100,000, and the proportion of undescribed species is very high. Many taxa remain poorly studied. Molluscs are the largest marine phylum, comprising about 23% of all the named marine organisms. They are highly diverse, not just in size and anatomical structure, but also in behaviour and habitat, as numerous groups are freshwater and even terrestrial species. The phylum is typically divided into 7 or 8 taxonomic classes, of which two are entirely extinct. Cephalopod molluscs, such as squid, cuttlefish, and octopuses, are among the most neurologically advanced of all invertebrates—and either the giant squid or the colossal squid is the largest known extant invertebrate species. The gastropods (snails, slugs and abalone) are by far the most diverse class and account for 80% of the total classified molluscan species. The four most universal features defining modern molluscs are a soft body composed almost entirely of muscle, a mantle with a significant cavity used for breathing and excretion, the presence of a radula (except for bivalves), and the structure of the nervous system. Other than these common elements, molluscs express great morphological diversity, so many textbooks base their descriptions on a "hypothetical ancestral mollusc" (see image below). This has a single, "limpet-like" shell on top, which is made of proteins and chitin reinforced with calcium carbonate, and is secreted by a mantle covering the whole upper surface. The underside of the animal consists of a single muscular "foot". Although molluscs are coelomates, the coelom tends to be small. The main body cavity is a hemocoel through which blood circulates; as such, their circulatory systems are mainly open. The "generalized" mollusc's feeding system consists of a rasping "tongue", the radula, and a complex digestive system in which exuded mucus and microscopic, muscle-powered "hairs" called cilia play various important roles. The generalized mollusc has two paired nerve cords, or three in bivalves. The brain, in species that have one, encircles the esophagus. Most molluscs have eyes, and all have sensors to detect chemicals, vibrations, and touch. The simplest type of molluscan reproductive system relies on external fertilization, but more complex variations occur. Nearly all produce eggs, from which may emerge trochophore larvae, more complex veliger larvae, or miniature adults. The coelomic cavity is reduced. They have an open circulatory system and kidney-like organs for excretion. Good evidence exists for the appearance of gastropods, cephalopods, and bivalves in the Cambrian period, 541–485.4 million years ago. However, the evolutionary history both of molluscs' emergence from the ancestral Lophotrochozoa and of their diversification into the well-known living and fossil forms are still subjects of vigorous debate among scientists. Molluscs have been and still are an important food source for humans. Toxins that can accumulate in certain molluscs under specific conditions create a risk of food poisoning, and many jurisdictions have regulations to reduce this risk. Molluscs have, for centuries, also been the source of important luxury goods, notably pearls, mother of pearl, Tyrian purple dye, and sea silk. 
Their shells have also been used as money in some preindustrial societies. A handful of mollusc species are sometimes considered hazards or pests for human activities. The bite of the blue-ringed octopus is often fatal, and that of Octopus apollyon causes inflammation that can last over a month. Stings from a few species of large tropical cone shells of the family Conidae can also kill, but their sophisticated, though easily produced, venoms have become important tools in neurological research. Schistosomiasis (also known as bilharzia, bilharziosis, or snail fever) is transmitted to humans by water snail hosts, and affects about 200 million people. Snails and slugs can also be serious agricultural pests, and accidental or deliberate introduction of some snail species into new environments has seriously damaged some ecosystems. Etymology The words mollusc and mollusk are both derived from the French , which originated from the post-classical Latin , from mollis, soft, first used by J. Jonston (Historiæ Naturalis, 1650) to describe a group comprising cephalopods. is used in classical Latin as an adjective only with (nut) to describe a particular type of soft nut. The use of mollusca in biological taxonomy by Jonston and later Linnaeus may have been influenced by Aristotle's ta malákia (the soft ones; < malakós "soft"), which he applied inter alia to cuttlefish. The scientific study of molluscs is accordingly called malacology. The name Molluscoida was formerly used to denote a division of the animal kingdom containing the brachiopods, bryozoans, and tunicates, the members of the three groups having been supposed to somewhat resemble the molluscs. As now known, these groups have no relation to molluscs, and very little to one another, so the name Molluscoida has been abandoned. Definition The most universal features of the body structure of molluscs are a mantle with a significant body cavity used for breathing and excretion, and the organization of the nervous system. Many have a calcareous shell. Molluscs have developed such a varied range of body structures that finding synapomorphies (defining characteristics) which apply to all modern groups is difficult. The most general characteristic of molluscs is that they are unsegmented and bilaterally symmetrical. The following are present in all modern molluscs: The dorsal part of the body wall is a mantle (or pallium) which secretes calcareous spicules, plates or shells. It overlaps the body with enough spare room to form a mantle cavity. The anus and genitals open into the mantle cavity. There are paired nerve cords. Other characteristics that commonly appear in textbooks have significant exceptions: Diversity Estimates of accepted described living species of molluscs vary from 50,000 to a maximum of 120,000 species. The total number of described species is difficult to estimate because of unresolved synonymy. In 1969, David Nicol estimated the probable total number of living mollusc species at 107,000, of which about 12,000 were fresh-water gastropods and 35,000 terrestrial. The Bivalvia would comprise about 14% of the total and the other five classes less than 2% of the living molluscs. In 2009, Chapman estimated the number of described living mollusc species at 85,000. Haszprunar in 2001 estimated about 93,000 named species, which include 23% of all named marine organisms. Molluscs are second only to arthropods in numbers of living animal species—far behind the arthropods' 1,113,000 but well ahead of chordates' 52,000.
About 200,000 living species in total are estimated, and 70,000 fossil species, although the total number of mollusc species ever to have existed, whether or not preserved, must be many times greater than the number alive today. Molluscs have more varied forms than any other animal phylum. They include snails, slugs and other gastropods; clams and other bivalves; squids and other cephalopods; and other lesser-known but similarly distinctive subgroups. The majority of species still live in the oceans, from the seashores to the abyssal zone, but some form a significant part of the freshwater fauna and the terrestrial ecosystems. Molluscs are extremely diverse in tropical and temperate regions, but can be found at all latitudes. About 80% of all known mollusc species are gastropods. Cephalopoda such as squid, cuttlefish, and octopuses are among the most neurologically advanced of all invertebrates. The giant squid, which until recently had not been observed alive in its adult form, is one of the largest invertebrates, surpassed in weight but not in length by the colossal squid. Freshwater and terrestrial molluscs appear exceptionally vulnerable to extinction. Estimates of the numbers of non-marine molluscs vary widely, partly because many regions have not been thoroughly surveyed. There is also a shortage of specialists who can identify all the animals in any one area to species. However, in 2004 the IUCN Red List of Threatened Species included nearly 2,000 endangered non-marine molluscs. For comparison, the great majority of mollusc species are marine, but only 41 of these appeared on the 2004 Red List. About 42% of recorded extinctions since the year 1500 are of molluscs, consisting almost entirely of non-marine species. Anatomy Because of the great range of anatomical diversity among molluscs, many textbooks start the subject of molluscan anatomy by describing what is called an archi-mollusc, hypothetical generalized mollusc, or hypothetical ancestral mollusc (HAM) to illustrate the most common features found within the phylum. The depiction is visually rather similar to modern monoplacophorans. The generalized mollusc is an unsegmented, bilaterally symmetrical animal and has a single, "limpet-like" shell on top. The shell is secreted by a mantle covering the upper surface. The underside consists of a single muscular "foot". The visceral mass, or visceropallium, is the soft, nonmuscular metabolic region of the mollusc. It contains the body organs. Mantle and mantle cavity The mantle cavity, a fold in the mantle, encloses a significant amount of space. It is lined with epidermis, and is exposed, according to habitat, to sea, fresh water or air. The cavity was at the rear in the earliest molluscs, but its position now varies from group to group. The anus, a pair of osphradia (chemical sensors) in the incoming "lane", the hindmost pair of gills and the exit openings of the nephridia (kidneys) known as "Organs of bojanus" and gonads (reproductive organs) are in the mantle cavity. The whole soft body of bivalves lies within an enlarged mantle cavity. Shell The mantle edge secretes a shell (secondarily absent in a number of taxonomic groups, such as the nudibranchs) that consists of mainly chitin and conchiolin (a protein hardened with calcium carbonate), except the outermost layer, which in almost all cases is all conchiolin (see periostracum). Molluscs never use phosphate to construct their hard parts, with the questionable exception of Cobcrephora. 
While most mollusc shells are composed mainly of aragonite, those gastropods that lay eggs with a hard shell use calcite (sometimes with traces of aragonite) to construct the eggshells. The shell consists of three layers: the outer layer (the periostracum) made of organic matter, a middle layer made of columnar calcite, and an inner layer consisting of laminated calcite, often nacreous. In some forms the shell contains openings. In abalone there are holes in the shell used for respiration and the release of egg and sperm, in the nautilus a string of tissue called the siphuncle goes through all the chambers, and the eight plates that make up the shell of chitons are penetrated with living tissue with nerves and sensory structures. Foot The body of a mollusc has a ventral muscular foot, which is adapted to different purposes (locomotion, grasping the substratum, burrowing or feeding) in different classes. The foot carries a pair of statocysts, which act as balance sensors. In gastropods, it secretes mucus as a lubricant to aid movement. In forms having only a top shell, such as limpets, the foot acts as a sucker attaching the animal to a hard surface, and the vertical muscles clamp the shell down over it; in other molluscs, the vertical muscles pull the foot and other exposed soft parts into the shell. In bivalves, the foot is adapted for burrowing into the sediment; in cephalopods it is used for jet propulsion, and the tentacles and arms are derived from the foot. Circulatory system Most molluscs' circulatory systems are mainly open, except for cephalopods, whose circulatory systems are closed. Although molluscs are coelomates, their coeloms are reduced to fairly small spaces enclosing the heart and gonads. The main body cavity is a hemocoel through which blood and coelomic fluid circulate and which encloses most of the other internal organs. These hemocoelic spaces act as an efficient hydrostatic skeleton. The blood of these molluscs contains the respiratory pigment hemocyanin as an oxygen-carrier. The heart consists of one or more pairs of atria (auricles), which receive oxygenated blood from the gills and pump it to the ventricle, which pumps it into the aorta (main artery), which is fairly short and opens into the hemocoel. The atria of the heart also function as part of the excretory system by filtering waste products out of the blood and dumping it into the coelom as urine. A pair of metanephridia ("little kidneys") to the rear of and connected to the coelom extracts any re-usable materials from the urine and dumps additional waste products into it, and then ejects it via tubes that discharge into the mantle cavity. Exceptions to the above are the molluscs Planorbidae or ram's horn snails, which are air-breathing snails that use iron-based hemoglobin instead of the copper-based hemocyanin to carry oxygen through their blood. Respiration Most molluscs have only one pair of gills, or even only a singular gill. Generally, the gills are rather like feathers in shape, although some species have gills with filaments on only one side. They divide the mantle cavity so water enters near the bottom and exits near the top. Their filaments have three kinds of cilia, one of which drives the water current through the mantle cavity, while the other two help to keep the gills clean. If the osphradia detect noxious chemicals or possibly sediment entering the mantle cavity, the gills' cilia may stop beating until the unwelcome intrusions have ceased. 
Each gill has an incoming blood vessel connected to the hemocoel and an outgoing one to the heart. Eating, digestion, and excretion Molluscs use intracellular digestion. Most molluscs have muscular mouths with radulae, "tongues", bearing many rows of chitinous teeth, which are replaced from the rear as they wear out. The radula primarily functions to scrape bacteria and algae off rocks, and is associated with the odontophore, a cartilaginous supporting organ. The radula is unique to the molluscs and has no equivalent in any other animal. Molluscs' mouths also contain glands that secrete slimy mucus, to which the food sticks. Beating cilia (tiny "hairs") drive the mucus towards the stomach, so the mucus forms a long string called a "food string". At the tapered rear end of the stomach and projecting slightly into the hindgut is the prostyle, a backward-pointing cone of feces and mucus, which is rotated by further cilia so it acts as a bobbin, winding the mucus string onto itself. Before the mucus string reaches the prostyle, the acidity of the stomach makes the mucus less sticky and frees particles from it. The particles are sorted by yet another group of cilia, which send the smaller particles, mainly minerals, to the prostyle so eventually they are excreted, while the larger ones, mainly food, are sent to the stomach's cecum (a pouch with no other exit) to be digested. The sorting process is by no means perfect. Periodically, circular muscles at the hindgut's entrance pinch off and excrete a piece of the prostyle, preventing the prostyle from growing too large. The anus, which opens into the mantle cavity, is swept by the outgoing "lane" of the current created by the gills. Carnivorous molluscs usually have simpler digestive systems. As the head has largely disappeared in bivalves, the mouth has been equipped with labial palps (two on each side of the mouth) to collect the detritus from its mucus. Nervous system The cephalic molluscs have two pairs of main nerve cords organized around a number of paired ganglia, the visceral cords serving the internal organs and the pedal ones serving the foot. Most pairs of corresponding ganglia on both sides of the body are linked by commissures (relatively large bundles of nerves). The ganglia above the gut are the cerebral, the pleural, and the visceral, which are located above the esophagus (gullet). The pedal ganglia, which control the foot, are below the esophagus and their commissure and connectives to the cerebral and pleural ganglia surround the esophagus in a circumesophageal nerve ring or nerve collar. The acephalic molluscs (i.e., bivalves) also have this ring but it is less obvious and less important. The bivalves have only three pairs of ganglia—cerebral, pedal, and visceral—with the visceral being the largest and most important of the three, functioning as the principal center of "thinking". Some, such as the scallops, have eyes around the edges of their shells which connect to a pair of looped nerves and which provide the ability to distinguish between light and shadow. Reproduction The simplest molluscan reproductive system relies on external fertilization, but more complex variations occur. All produce eggs, from which may emerge trochophore larvae, more complex veliger larvae, or miniature adults. Two gonads sit next to the coelom, a small cavity that surrounds the heart, into which they shed ova or sperm. The nephridia extract the gametes from the coelom and emit them into the mantle cavity.
Molluscs that use such a system remain of one sex all their lives and rely on external fertilization. Some molluscs use internal fertilization and/or are hermaphrodites, functioning as both sexes; both of these methods require more complex reproductive systems. For example, the snail C. obtusus, an endemic species of the Eastern Alps, shows strong evidence of self-fertilization in its easternmost populations. The most basic molluscan larva is a trochophore, which is planktonic and feeds on floating food particles by using the two bands of cilia around its "equator" to sweep food into the mouth, which uses more cilia to drive them into the stomach, which uses further cilia to expel undigested remains through the anus. New tissue grows in the bands of mesoderm in the interior, so the apical tuft and anus are pushed further apart as the animal grows. The trochophore stage is often succeeded by a veliger stage in which the prototroch, the "equatorial" band of cilia nearest the apical tuft, develops into the velum ("veil"), a pair of cilia-bearing lobes with which the larva swims. Eventually, the larva sinks to the seafloor and metamorphoses into the adult form. While metamorphosis is the usual state in molluscs, the cephalopods differ in exhibiting direct development: the hatchling is a 'miniaturized' form of the adult. The development of molluscs is of particular interest in the field of ocean acidification as environmental stress is recognized to affect the settlement, metamorphosis, and survival of larvae. Ecology Feeding Most molluscs are either herbivores that graze on algae or filter feeders. For those grazing, two feeding strategies are predominant. Some feed on microscopic, filamentous algae, often using their radula as a 'rake' to comb up filaments from the sea floor. Others feed on macroscopic 'plants' such as kelp, rasping the plant surface with their radula. To employ this strategy, the plant has to be large enough for the mollusc to 'sit' on, so smaller macroscopic plants are not as often eaten as their larger counterparts. Filter feeders are molluscs that feed by straining suspended matter and food particles from water, typically by passing the water over their gills. Most bivalves are filter feeders, whose feeding activity can be measured through clearance rates. Research has demonstrated that environmental stress can affect the feeding of bivalves by altering the energy budget of organisms. Cephalopods are primarily predatory, and the radula takes a secondary role to the jaws and tentacles in food acquisition. The monoplacophoran Neopilina uses its radula in the usual fashion, but its diet includes protists such as the xenophyophore Stannophyllum. Sacoglossan sea-slugs suck the sap from algae, using their one-row radula to pierce the cell walls, whereas dorid nudibranchs and some Vetigastropoda feed on sponges and others feed on hydroids. (An extensive list of molluscs with unusual feeding habits is available in the appendix of ) Classification Opinions vary about the number of classes of molluscs; for example, the table below shows seven living classes, and two extinct ones. Although they are unlikely to form a clade, some older works combine the Caudofoveata and Solenogasters into one class, the Aplacophora. Two of the commonly recognized "classes" are known only from fossils. Phylogeny The phylogeny (evolutionary "family tree") of molluscs is a controversial subject.
In addition to the debates about whether Kimberella and any of the "halwaxiids" were molluscs or closely related to molluscs, debates arise about the relationships between the classes of living molluscs. In fact, some groups traditionally classified as molluscs may have to be redefined as distinct but related. Molluscs are generally regarded as members of the Lophotrochozoa, a group defined by having trochophore larvae and, in the case of living Lophophorata, a feeding structure called a lophophore. The other members of the Lophotrochozoa are the annelid worms and seven marine phyla. The diagram on the right summarizes a phylogeny presented in 2007 without the annelid worms. Because the relationships between the members of the family tree are uncertain, it is difficult to identify the features inherited from the last common ancestor of all molluscs. For example, it is uncertain whether the ancestral mollusc was metameric (composed of repeating units)—if it was, that would suggest an origin from an annelid-like worm. Scientists disagree about this: Giribet and colleagues concluded, in 2006, that the repetition of gills and of the foot's retractor muscles were later developments, while in 2007, Sigwart concluded the ancestral mollusc was metameric, and it had a foot used for creeping and a "shell" that was mineralized. In one particular branch of the family tree, the shell of conchiferans is thought to have evolved from the spicules (small spines) of aplacophorans; but this is difficult to reconcile with the embryological origins of spicules. The molluscan shell appears to have originated from a mucus coating, which eventually stiffened into a cuticle. This would have been impermeable and thus forced the development of more sophisticated respiratory apparatus in the form of gills. Eventually, the cuticle would have become mineralized, using the same genetic machinery (engrailed) as most other bilaterian skeletons. The first mollusc shell almost certainly was reinforced with the mineral aragonite. Classification into higher taxa for molluscan classes has been and remains problematic. Numerous different clades have been proposed but few have received strong support. Traditionally, Mollusca is split into two subphyla, Conchifera and Aculifera, based on the presence of a shell. The "Testaria" hypothesis is similar, but includes chitons with the rest of the conchiferans. Some studies completely reject the proposal, instead favoring a "Serialia" hypothesis which classifies chitons and monoplacophorans as closely related. Morphological analyses tend to recover a conchiferan clade that receives less support from molecular analyses, although these results also lead to unexpected paraphylies, for instance scattering the bivalves throughout all other mollusc groups. However, an analysis in 2009 using both morphological and molecular phylogenetics comparisons concluded the molluscs are not monophyletic; in particular, Scaphopoda and Bivalvia are both separate, monophyletic lineages unrelated to the remaining molluscan classes; the traditional phylum Mollusca is polyphyletic, and it can only be made monophyletic if scaphopods and bivalves are excluded. A 2010 analysis recovered the traditional conchiferan and aculiferan groups, and showed molluscs were monophyletic, demonstrating that available data for solenogastres was contaminated.
Current molecular data are insufficient to constrain the molluscan phylogeny, and since the methods used to determine the confidence in clades are prone to overestimation, it is risky to place too much emphasis even on the areas on which different studies agree. Rather than eliminating unlikely relationships, the latest studies add new permutations of internal molluscan relationships, even bringing the conchiferan hypothesis into question. Evolutionary history Good evidence exists for the appearance of gastropods (e.g., Aldanella), cephalopods (e.g., Plectronoceras, Nectocaris?) and bivalves (Pojetaia, Fordilla) towards the middle of the Cambrian period, c. , though arguably each of these may belong only to the stem lineage of their respective classes. However, the evolutionary history both of the emergence of molluscs from the ancestral group Lophotrochozoa, and of their diversification into the well-known living and fossil forms, is still vigorously debated. Debate occurs about whether some Ediacaran and Early Cambrian fossils really are molluscs. Kimberella, from about , has been described by some paleontologists as "mollusc-like", but others are unwilling to go further than "probable bilaterian", if that. There is an even sharper debate about whether Wiwaxia, from about , was a mollusc, and much of this centers on whether its feeding apparatus was a type of radula or more similar to that of some polychaete worms. Nicholas Butterfield, who opposes the idea that Wiwaxia was a mollusc, has written that earlier microfossils from are fragments of a genuinely mollusc-like radula. This appears to contradict the concept that the ancestral molluscan radula was mineralized. However, the Helcionellids, which first appear over in Early Cambrian rocks from Siberia and China, are thought to be early molluscs with rather snail-like shells. Shelled molluscs therefore predate the earliest trilobites. Although most helcionellid fossils are only a few millimeters long, specimens a few centimeters long have also been found, most with more limpet-like shapes. The tiny specimens have been suggested to be juveniles and the larger ones adults. Some analyses of helcionellids concluded these were the earliest gastropods. However, other scientists are not convinced these Early Cambrian fossils show clear signs of the torsion that identifies modern gastropods, in which the internal organs are twisted so that the anus lies above the head. Volborthella, some fossils of which predate , was long thought to be a cephalopod, but discoveries of more detailed fossils showed its shell was not secreted, but built from grains of the mineral silicon dioxide (silica), and it was not divided into a series of compartments by septa as those of fossil shelled cephalopods and the living Nautilus are. Volborthella's classification is uncertain. The Middle Cambrian fossil Nectocaris is often interpreted as a cephalopod with two arms and no shell, but the Late Cambrian fossil Plectronoceras is now thought to be the earliest undisputed cephalopod fossil, as its shell had septa and a siphuncle, a strand of tissue that Nautilus uses to remove water from compartments it has vacated as it grows, and which is also visible in fossil ammonite shells. However, Plectronoceras and other early cephalopods crept along the seafloor instead of swimming, as their shells contained a "ballast" of stony deposits on what is thought to be the underside, and had stripes and blotches on what is thought to be the upper surface.
All cephalopods with external shells except the nautiloids became extinct by the end of the Cretaceous period . However, the shell-less Coleoidea (squid, octopus, cuttlefish) are abundant today. The Early Cambrian fossils Fordilla and Pojetaia are regarded as bivalves. "Modern-looking" bivalves appeared in the Ordovician period, . One bivalve group, the rudists, became major reef-builders in the Cretaceous, but became extinct in the Cretaceous–Paleogene extinction event. Even so, bivalves remain abundant and diverse. The Hyolitha are a class of extinct animals with a shell and operculum that may be molluscs. Authors who suggest they deserve their own phylum do not comment on the position of this phylum in the tree of life. Human interaction For millennia, molluscs have been a source of food for humans, as well as important luxury goods, notably pearls, mother of pearl, Tyrian purple dye, sea silk, and chemical compounds. Their shells have also been used as a form of currency in some preindustrial societies. Some species of molluscs can bite or sting humans, and some have become agricultural pests. Uses by humans Molluscs, especially bivalves such as clams and mussels, have been an important food source since at least the advent of anatomically modern humans, and this has often resulted in overfishing. Other commonly eaten molluscs include octopuses and squids, whelks, oysters, and scallops. In 2005, China accounted for 80% of the global mollusc catch, netting almost . Within Europe, France remained the industry leader. Some countries regulate importation and handling of molluscs and other seafood, mainly to minimize the poison risk from toxins that can sometimes accumulate in the animals. Most molluscs with shells can produce pearls, but only the pearls of bivalves and some gastropods, whose shells are lined with nacre, are valuable. The best natural pearls are produced by marine pearl oysters, Pinctada margaritifera and Pinctada mertensi, which live in the tropical and subtropical waters of the Pacific Ocean. Natural pearls form when a small foreign object gets stuck between the mantle and shell. The two methods of culturing pearls insert either "seeds" or beads into oysters. The "seed" method uses grains of ground shell from freshwater mussels, and overharvesting for this purpose has endangered several freshwater mussel species in the southeastern United States. The pearl industry is so important in some areas, significant sums of money are spent on monitoring the health of farmed molluscs. Other luxury and high-status products were made from molluscs. Tyrian purple, made from the ink glands of murex shells, "fetched its weight in silver" in the fourth century BC, according to Theopompus. The discovery of large numbers of Murex shells on Crete suggests the Minoans may have pioneered the extraction of "imperial purple" during the Middle Minoan period in the 20th–18th centuries BC, centuries before the Tyrians. Sea silk is a fine, rare, and valuable fabric produced from the long silky threads (byssus) secreted by several bivalve molluscs, particularly Pinna nobilis, to attach themselves to the sea bed. Procopius, writing on the Persian wars circa 550 CE, "stated that the five hereditary satraps (governors) of Armenia who received their insignia from the Roman Emperor were given chlamys (or cloaks) made from lana pinna. Apparently, only the ruling classes were allowed to wear these chlamys." 
Mollusc shells, including those of cowries, were used as a kind of money (shell money) in several preindustrial societies. However, these "currencies" generally differed in important ways from the standardized government-backed and -controlled money familiar to industrial societies. Some shell "currencies" were not used for commercial transactions, but mainly as social status displays at important occasions, such as weddings. When used for commercial transactions, they functioned as commodity money, as a tradable commodity whose value differed from place to place, often as a result of difficulties in transport, and which was vulnerable to incurable inflation if more efficient transport or "goldrush" behavior appeared. Bioindicators Bivalve molluscs are used as bioindicators to monitor the health of aquatic environments in both freshwater and marine environments. Their population status or structure, physiology, behaviour or the level of contamination with elements or compounds can indicate the contamination status of the ecosystem. They are particularly useful because they are sessile, so they are representative of the environment where they are sampled or placed. Potamopyrgus antipodarum is used by some water treatment plants to test for estrogen-mimicking pollutants from industrial agriculture. Several species of molluscs have been used as bioindicators of environmental stresses that can cause DNA damage. These species include the American oyster Crassostrea virginica, zebra mussels (Dreissena polymorpha) and the blue mussel Mytilus edulis. Harm to humans Stings and bites Some molluscs sting or bite, but deaths from mollusc venoms total less than 10% of those from jellyfish stings. All octopuses are venomous, but only a few species pose a significant threat to humans. Blue-ringed octopuses in the genus Hapalochlaena, which live around Australia and New Guinea, bite humans only if severely provoked, but their venom kills 25% of human victims. Another tropical species, Octopus apollyon, causes severe inflammation that can last for over a month even if treated correctly, and the bite of Octopus rubescens can cause necrosis that lasts longer than one month if untreated, and headaches and weakness persisting for up to a week even if treated. All species of cone snails are venomous and can sting painfully when handled, although many species are too small to pose much of a risk to humans, and only a few fatalities have been reliably reported. Their venom is a complex mixture of toxins, some fast-acting and others slower but deadlier. The effects of individual cone-shell toxins on victims' nervous systems are so precise as to be useful tools for research in neurology, and the small size of their molecules makes it easy to synthesize them. Disease vectors Schistosomiasis (also known as bilharzia, bilharziosis or snail fever), a disease caused by the fluke worm Schistosoma, is "second only to malaria as the most devastating parasitic disease in tropical countries. An estimated 200 million people in 74 countries are infected with the disease—100 million in Africa alone." The parasite has 13 known species, two of which infect humans. The parasite itself is not a mollusc, but all the species have freshwater snails as intermediate hosts. Pests Some species of molluscs, particularly certain snails and slugs, can be serious crop pests, and when introduced into new environments, can unbalance local ecosystems.
One such pest, the giant African snail Achatina fulica, has been introduced to many parts of Asia, as well as to many islands in the Indian Ocean and Pacific Ocean. In the 1990s, this species reached the West Indies. Attempts to control it by introducing the predatory snail Euglandina rosea proved disastrous, as the predator ignored Achatina fulica and went on to extirpate several native snail species instead.
Biology and health sciences
Biology
null
19773811
https://en.wikipedia.org/wiki/Hominidae
Hominidae
The Hominidae (), whose members are known as the great apes or hominids (), are a taxonomic family of primates that includes eight extant species in four genera: Pongo (the Bornean, Sumatran and Tapanuli orangutan); Gorilla (the eastern and western gorilla); Pan (the chimpanzee and the bonobo); and Homo, of which only modern humans (Homo sapiens) remain. Numerous revisions in classifying the great apes have caused the use of the term hominid to change over time. The original meaning of "hominid" referred only to humans (Homo) and their closest extinct relatives. However, by the 1990s humans and other apes were considered to be "hominids". The earlier restrictive meaning has now been largely assumed by the term hominin, which comprises all members of the human clade after the split from the chimpanzees (Pan). The current meaning of "hominid" includes all the great apes including humans. Usage still varies, however, and some scientists and laypersons still use "hominid" in the original restrictive sense; the scholarly literature generally shows the traditional usage until the turn of the 21st century. Within the taxon Hominidae, a number of extant and extinct genera are grouped with the humans, chimpanzees, and gorillas in the subfamily Homininae; others with orangutans in the subfamily Ponginae (see classification graphic below). The most recent common ancestor of all Hominidae lived roughly 14 million years ago, when the ancestors of the orangutans speciated from the ancestral line of the other three genera. Those ancestors of the family Hominidae had already speciated from the family Hylobatidae (the gibbons), perhaps 15 to 20 million years ago. Due to the close genetic relationship between humans and the other great apes, certain animal rights organizations, such as the Great Ape Project, argue that nonhuman great apes are persons and should be given basic human rights. Twenty-nine countries have instituted research bans to protect great apes from any kind of scientific testing. Evolution In the early Miocene, about 22 million years ago, there were many species of tree-adapted primitive catarrhines from East Africa; the variety suggests a long history of prior diversification. Fossils from 20 million years ago include fragments attributed to Victoriapithecus, the earliest Old World monkey. Among the genera thought to be in the ape lineage leading up to 13 million years ago are Proconsul, Rangwapithecus, Dendropithecus, Limnopithecus, Nacholapithecus, Equatorius, Nyanzapithecus, Afropithecus, Heliopithecus, and Kenyapithecus, all from East Africa. At sites far distant from East Africa, the presence of other generalized non-cercopithecids, that is, non-monkey primates, of middle Miocene age—Otavipithecus from cave deposits in Namibia, and Pierolapithecus and Dryopithecus from France, Spain and Austria—is further evidence of a wide diversity of ancestral ape forms across Africa and the Mediterranean basin during the relatively warm and equable climatic regimes of the early and middle Miocene. The most recent of these far-flung Miocene apes (hominoids) is Oreopithecus, from the fossil-rich coal beds in northern Italy and dated to 9 million years ago. Molecular evidence indicates that the lineage of gibbons (family Hylobatidae), the "lesser apes", diverged from that of the great apes some 18–12 million years ago, and that of orangutans (subfamily Ponginae) diverged from the other great apes at about 12 million years. 
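Divergence dates of this kind are typically obtained by combining fossil calibrations with a molecular clock. As a rough illustration (the rate and divergence figures below are assumptions chosen for the example, not values from this article), if two lineages accumulate neutral substitutions independently at a rate r per site per year, a present-day sequence divergence d corresponds to a split time of approximately

\[ t \approx \frac{d}{2r} \]

so, for instance, a neutral divergence of about 2.4% with an assumed rate of 10⁻⁹ substitutions per site per year gives t ≈ 0.024 / (2 × 10⁻⁹) ≈ 12 million years, comparable to the orangutan split quoted above. Published estimates rely on many loci and explicit calibration points, so this is only an order-of-magnitude sketch.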
There are no fossils that clearly document the ancestry of gibbons, which may have originated in a still-unknown South East Asian hominoid population; but fossil proto-orangutans, dated to around 10 million years ago, may be represented by Sivapithecus from India and Griphopithecus from Turkey. Species close to the last common ancestor of gorillas, chimpanzees and humans may be represented by Nakalipithecus fossils found in Kenya and Ouranopithecus fossils found in Greece. Molecular evidence suggests that between 8 and 4 million years ago, first the gorillas (genus Gorilla), and then the chimpanzees (genus Pan) split off from the line leading to humans. Human DNA is approximately 98.4% identical to that of chimpanzees when comparing single nucleotide polymorphisms (see human evolutionary genetics). The fossil record, however, of gorillas and chimpanzees is limited; both poor preservation—rain forest soils tend to be acidic and dissolve bone—and sampling bias probably contribute most to this problem. Other hominins probably adapted to the drier environments outside the African equatorial belt; and there they encountered antelope, hyenas, elephants and other forms becoming adapted to surviving in the East African savannas, particularly the regions of the Sahel and the Serengeti. The wet equatorial belt contracted after about 8 million years ago, and there is very little fossil evidence for the divergence of the hominin lineage from that of gorillas and chimpanzees—which split was thought to have occurred around that time. The earliest fossils argued by some to belong to the human lineage are Sahelanthropus tchadensis (7 Ma) and Orrorin tugenensis (6 Ma), followed by Ardipithecus (5.5–4.4 Ma), with species Ar. kadabba and Ar. ramidus. Taxonomy Terminology The classification of the great apes has been revised several times in the last few decades; these revisions have led to a varied use of the word "hominid" over time. The original meaning of the term referred to only humans and their closest relatives—what is now the modern meaning of the term "hominin". The meaning of the taxon Hominidae changed gradually, leading to a modern usage of "hominid" that includes all the great apes including humans. A number of very similar words apply to related classifications: A hominoid, sometimes called an ape, is a member of the superfamily Hominoidea: extant members are the gibbons (lesser apes, family Hylobatidae) and the hominids. A hominid is a member of the family Hominidae, the great apes: orangutans, gorillas, chimpanzees and humans. A hominine is a member of the subfamily Homininae: gorillas, chimpanzees, and humans (excludes orangutans). A hominin is a member of the tribe Hominini: chimpanzees and humans. A homininan, following a suggestion by Wood and Richmond (2000), would be a member of the subtribe Hominina of the tribe Hominini: that is, modern humans and their closest relatives, including Australopithecina, but excluding chimpanzees. A human is a member of the genus Homo, of which Homo sapiens is the only extant species, and within that Homo sapiens sapiens is the only surviving subspecies. A cladogram indicating common names (cf. more detailed cladogram below): Extant and fossil relatives of humans Hominidae was originally the name given to the family of humans and their (extinct) close relatives, with the other great apes (that is, the orangutans, gorillas and chimpanzees) all being placed in a separate family, the Pongidae. 
However, that definition eventually made Pongidae paraphyletic because at least one great ape species (the chimpanzees) proved to be more closely related to humans than to other great apes. Most taxonomists today encourage monophyletic groups—this would require, in this case, the use of Pongidae to be restricted to just one closely related grouping. Thus, many biologists now assign Pongo (as the subfamily Ponginae) to the family Hominidae. The taxonomy shown here follows the monophyletic groupings according to the modern understanding of human and great ape relationships. Humans and their close relatives, including the tribes Hominini and Gorillini, form the subfamily Homininae (see classification graphic below). (A few researchers go so far as to refer the chimpanzees and the gorillas to the genus Homo along with humans.) But those fossil relatives more closely related to humans than to the chimpanzees represent the especially close members of the human family. Many extinct hominids have been studied to help understand the relationship between modern humans and the other extant hominids. Some of the extinct members of this family include Gigantopithecus, Orrorin, Ardipithecus, Kenyanthropus, and the australopithecines Australopithecus and Paranthropus. The exact criteria for membership in the tribe Hominini under the current understanding of human origins are not clear, but the taxon generally includes those species that share more than 97% of their DNA with the modern human genome, and exhibit a capacity for language or for simple cultures beyond their 'local family' or band. The theory of mind concept—including such faculties as empathy, attribution of mental state, and even empathetic deception—is a controversial criterion; it distinguishes the adult human alone among the hominids. Humans acquire this capacity after about four years of age, whereas it has not been proven (nor has it been disproven) that gorillas or chimpanzees ever develop a theory of mind. This is also the case for some New World monkeys outside the family of great apes, as, for example, the capuchin monkeys. However, even without the ability to test whether early members of the Hominini (such as Homo erectus, Homo neanderthalensis, or even the australopithecines) had a theory of mind, it is difficult to ignore similarities seen in their living cousins. Orangutans have shown the development of culture comparable to that of chimpanzees, and some say the orangutan may also satisfy those criteria for the theory of mind concept. These scientific debates take on political significance for advocates of great ape personhood. Description The great apes are tailless primates, with the smallest living species being the bonobo at in weight, and the largest being the eastern gorillas, with males weighing . In all great apes, the males are, on average, larger and stronger than the females, although the degree of sexual dimorphism varies greatly among species. Hominid teeth are similar to those of the Old World monkeys and gibbons, although they are especially large in gorillas. The dental formula is . Human teeth and jaws are markedly smaller relative to body size compared to those of other apes. This may be an adaptation not only to the extensive use of tools, which has supplanted the role of jaws in hunting and fighting, but also to eating cooked food since the end of the Pleistocene.
Behavior Although most living species are predominantly quadrupedal, they are all able to use their hands for gathering food or nesting materials, and, in some cases, for tool use. They build complex sleeping platforms, also called nests, in trees to sleep in at night, but chimpanzees and gorillas also build terrestrial nests, and gorillas can also sleep on the bare ground. All species are omnivorous, although chimpanzees and orangutans primarily eat fruit. When gorillas run short of fruit at certain times of the year or in certain regions, they resort to eating shoots and leaves, often of bamboo, a type of grass. Gorillas have extreme adaptations for chewing and digesting such low-quality forage, but they still prefer fruit when it is available, often going miles out of their way to find especially preferred fruits. Humans, since the Neolithic Revolution, have consumed mostly cereals and other starchy foods, including increasingly highly processed foods, as well as many other domesticated plants (including fruits) and meat. Both chimpanzees and humans are known to wage wars over territories and resources. Gestation in great apes lasts 8–9 months, and results in the birth of a single offspring, or, rarely, twins. The young are born helpless, and require care for long periods of time. Compared with most other mammals, great apes have a remarkably long adolescence, not being weaned for several years, and not becoming fully mature for eight to thirteen years in most species (longer in orangutans and humans). As a result, females typically give birth only once every few years. There is no distinct breeding season. Gorillas and chimpanzees live in family groups of around five to ten individuals, although much larger groups are sometimes noted. Chimpanzees live in larger groups that break up into smaller groups when fruit becomes less available. When small groups of female chimpanzees go off in separate directions to forage for fruit, the dominant males can no longer control them and the females often mate with other subordinate males. In contrast, groups of gorillas stay together regardless of the availability of fruit. When fruit is hard to find, they resort to eating leaves and shoots. This fact is related to gorillas' greater sexual dimorphism relative to that of chimpanzees; that is, the difference in size between male and female gorillas is much greater than that between male and female chimpanzees. This enables gorilla males to physically dominate female gorillas more easily. In both chimpanzees and gorillas, the groups include at least one dominant male, and young males leave the group at maturity. Legal status Due to the close genetic relationship between humans and the other great apes, certain animal rights organizations, such as the Great Ape Project, argue that nonhuman great apes are persons and, per the Declaration on Great Apes, should be given basic human rights. In 1999, New Zealand was the first country to ban any great ape experimentation, and now 29 countries have currently instituted a research ban to protect great apes from any kind of scientific testing. On 25 June 2008, the Spanish parliament supported a new law that would make "keeping apes for circuses, television commercials or filming" illegal. On 8 September 2010, the European Union banned the testing of great apes. Conservation The following table lists the estimated number of great ape individuals living outside zoos. Phylogeny Below is a cladogram with extinct species. 
It is indicated approximately how many million years ago (Mya) the clades diverged into newer clades. Extant There are eight living species of great ape which are classified in four genera. The following classification is commonly accepted: Family Hominidae: humans and other great apes; extinct genera and species excluded Subfamily Ponginae Tribe Pongini Genus Pongo Bornean orangutan, Pongo pygmaeus Northwest Bornean orangutan, Pongo pygmaeus pygmaeus Northeast Bornean orangutan, Pongo pygmaeus morio Central Bornean orangutan, Pongo pygmaeus wurmbii Sumatran orangutan, Pongo abelii Tapanuli orangutan, Pongo tapanuliensis Subfamily Homininae Tribe Gorillini Genus Gorilla Western gorilla, Gorilla gorilla Western lowland gorilla, Gorilla gorilla gorilla Cross River gorilla, Gorilla gorilla diehli Eastern gorilla, Gorilla beringei Mountain gorilla, Gorilla beringei beringei Eastern lowland gorilla, Gorilla beringei graueri Tribe Hominini Subtribe Panina Genus Pan Chimpanzee, Pan troglodytes Central chimpanzee, Pan troglodytes troglodytes Western chimpanzee, Pan troglodytes verus Nigeria-Cameroon chimpanzee, Pan troglodytes ellioti Eastern chimpanzee, Pan troglodytes schweinfurthii Bonobo, Pan paniscus Subtribe Hominina Genus Homo Human, Homo sapiens Anatomically modern human, Homo sapiens sapiens Fossil In addition to the extant species and subspecies, archaeologists, paleontologists, and anthropologists have discovered and classified numerous extinct great ape species as below, based on the taxonomy shown. Family Hominidae Subfamily Ponginae Tribe Lufengpithecini † Lufengpithecus Lufengpithecus lufengensis Lufengpithecus keiyuanensis Lufengpithecus hudienensis Meganthropus Meganthropus palaeojavanicus Tribe Sivapithecini† Ankarapithecus Ankarapithecus meteai Sivapithecus Sivapithecus brevirostris Sivapithecus punjabicus Sivapithecus parvada Sivapithecus sivalensis Sivapithecus indicus Gigantopithecus Gigantopithecus bilaspurensis Gigantopithecus blacki Gigantopithecus giganteus Tribe Pongini Khoratpithecus† Khoratpithecus ayeyarwadyensis Khoratpithecus piriyai Khoratpithecus chiangmuanensis Pongo (orangutans) Pongo hooijeri† Subfamily Homininae Tribe Dryopithecini † Kenyapithecus (placement disputed) Kenyapithecus wickeri Danuvius Danuvius guggenmosi Pierolapithecus (placement disputed) Pierolapithecus catalaunicus Ouranopithecus Ouranopithecus macedoniensis Otavipithecus Otavipithecus namibiensis Morotopithecus (placement disputed) Morotopithecus bishopi Oreopithecus (placement disputed) Oreopithecus bambolii Nakalipithecus Nakalipithecus nakayamai Anoiapithecus Anoiapithecus brevirostris Hispanopithecus (placement disputed) Hispanopithecus laietanus Hispanopithecus crusafonti Dryopithecus Dryopithecus fontani Rudapithecus (placement disputed) Rudapithecus hungaricus Samburupithecus Samburupithecus kiptalami Graecopithecus † Graecopithecus freybergi Tribe Gorillini Chororapithecus † (placement debated) Chororapithecus abyssinicus Tribe Hominini Subtribe Panina Subtribe Hominina Sahelanthropus† Sahelanthropus tchadensis Orrorin† Orrorin tugenensis Orrorin praegens Ardipithecus† Ardipithecus ramidus Ardipithecus kadabba Kenyanthropus† Kenyanthropus platyops Australopithecus† Australopithecus bahrelghazali Australopithecus anamensis Australopithecus afarensis Australopithecus africanus Australopithecus garhi Australopithecus sediba Australopithecus deyiremeda Paranthropus† Paranthropus aethiopicus Paranthropus robustus Paranthropus boisei Homo – close relatives of modern humans Homo 
gautengensis† (probable H. habilis specimens) Homo rudolfensis† (membership in Homo uncertain) Homo habilis† (membership in Homo uncertain) Homo naledi† (membership in Homo uncertain) Dmanisi Man, Homo georgicus† (probable early subspecies of Homo erectus) Homo ergaster† (African Homo erectus) Homo erectus† Homo erectus bilzingslebenensis † Java Man, Homo erectus erectus † Lantian Man, Homo erectus lantianensis † Nanjing Man, Homo erectus nankinensis † Peking Man, Homo erectus pekinensis † Solo Man, Homo erectus soloensis † (possible separate species) Tautavel Man, Homo erectus tautavelensis † Yuanmou Man, Homo erectus yuanmouensis † Flores Man or Hobbit, Homo floresiensis† (membership in Homo uncertain) Homo luzonensis † (membership in Homo uncertain) Homo antecessor† Homo heidelbergensis† Homo cepranensis† (probable H. heidelbergensis specimens) Homo helmei† (probable early H. sapiens specimens) Homo tsaichangensis† (thought by some to be a subspecies of H. erectus or a Denisovan; unlikely to be separate species) Denisovans (scientific name not yet assigned)† Neanderthal, Homo neanderthalensis† Homo rhodesiensis† (probable late H. heidelbergensis specimens) Modern human, Homo sapiens (sometimes called Homo sapiens sapiens) Homo sapiens idaltu† Archaic Homo sapiens†
Biology and health sciences
Apes
Animals
2630435
https://en.wikipedia.org/wiki/Ox
Ox
An ox (: oxen), also known as a bullock (in British, Australian, and Indian English), is a large bovine, trained and used as a draft animal. Oxen are commonly castrated adult male cattle, because castration inhibits testosterone and aggression, which makes the males docile and safer to work with. Cows (adult females) or bulls (intact males) may also be used in some areas. Oxen are used for plowing, for transport (pulling carts, hauling wagons and even riding), for threshing grain by trampling, and for powering machines that grind grain or supply irrigation among other purposes. Oxen may be also used to skid logs in forests, particularly in low-impact, select-cut logging. Oxen are usually yoked in pairs. Light work such as carting household items on good roads might require just one pair, while for heavier work, further pairs would be added as necessary. A team used for a heavy load over difficult ground might exceed nine or ten pairs. Domestication Oxen are thought to have first been harnessed and put to work around 4000 BC. Training Working oxen are taught to respond to the signals of the teamster, bullocky or ox-driver. The signals are given by oral command and body language, reinforced by a goad, whip or a long pole, which also serves as a measure of length (see rod). In pre-industrial times, teamsters were known for their loud voices and forthright language. Verbal commands for draft animals vary widely throughout the world. In North America, the most common commands are: Back: back up Gee: turn to the right Get up (also giddyup or giddyap, contractions for "get thee up" or "get ye up"): go Haw: turn to the left Whoa: stop In the New England tradition, young castrated cattle selected for draft are known as working steers and are painstakingly trained from a young age. Their teamster makes or buys as many as a dozen yokes of different sizes for each animal as it grows. The steers are normally considered fully trained at the age of four and only then become known as oxen. A tradition in south-eastern England was to use oxen (often Sussex cattle) as dual-purpose animals: for draft and beef. A plowing team of eight oxen normally consisted of four pairs aged a year apart. Each year, a pair of steers of about three years of age would be bought for the team and trained with the older animals. The pair would be kept for about four years, then sold at about seven years old to be fattened for beef – thus covering much of the cost of buying that year's new pair. Use of oxen for plowing survived in some areas of England (such as the South Downs) until the early twentieth century. Pairs of oxen were always hitched the same way round, and they were often given paired names. In southern England it was traditional to call the near-side (left) ox of a pair by a single-syllable name and the off-side (right) one by a longer one (for example: Lark and Linnet, Turk and Tiger). Ox trainers favor larger animals for their ability to do more work. Oxen are therefore usually of larger breeds, and are usually males because they are generally larger. Females can also be trained as oxen, but as well as being smaller are often more valued for producing calves and milk. Bulls (intact males) are also used in many parts of the world, especially Asia and Africa. Shoeing Working oxen usually require shoes, although in England not all working oxen were shod. Since their hooves are cloven, two shoes are required for each hoof, as opposed to a single horseshoe. 
Ox shoes are usually of approximately half-moon or banana shape, either with or without caulkins, and are fitted in symmetrical pairs to the hooves. Unlike horses, oxen are not easily able to balance on three legs while a farrier shoes the fourth. In England, shoeing was accomplished by throwing the ox to the ground and lashing all four feet to a heavy wooden tripod until the shoeing was complete. A similar technique was used in Serbia and, in a simpler form, in India, where it is still practiced. In Italy, where oxen may be very large, shoeing is accomplished using a massive framework of beams in which the animal can be partly or completely lifted from the ground by slings passed under the body; the feet are then lashed to lateral beams or held with a rope while the shoes are fitted. Such devices were made of wood in the past, but may today be of metal. Similar devices are found in France, Austria, Germany, Spain, Canada and the United States, where they may be called ox slings, ox presses or shoeing stalls. The system was sometimes adopted in England also, where the device was called a crush or trevis; one example is recorded in the Vale of Pewsey. The shoeing of an ox partly lifted in a sling is the subject of John Singer Sargent's painting Shoeing the Ox, while A Smith Shoeing an Ox by Karel Dujardin shows an ox being shod standing, tied to a post by the horns and balanced by supporting the raised hoof. Uses and comparison to horses Oxen can pull heavier loads, and pull for a longer period of time than horses, depending on weather conditions. On the other hand, they are also slower than horses, which has both advantages and disadvantages. Their pulling style is steadier, but they cannot cover as much ground in a given period of time. For agricultural purposes, oxen are more suitable for heavy tasks such as breaking sod or plowing in wet, heavy, or clay-filled soil. When hauling freight, oxen can move very heavy loads in a slow and steady fashion. They are at a disadvantage compared to horses when it is necessary to pull a plow or load of freight relatively quickly. For millennia, oxen also could pull heavier loads because of the use of the yoke, which was designed to work best with the neck and shoulder anatomy of cattle. Until the invention of the horse collar, which allowed the horse to engage the pushing power of its hindquarters in moving a load, horses could not pull with their full strength because the yoke was incompatible with their anatomy (yokes press on their chest, inhibiting their breathing).
Technology
Agriculture, labor and economy
null
2631477
https://en.wikipedia.org/wiki/Homologous%20recombination
Homologous recombination
Homologous recombination is a type of genetic recombination in which genetic information is exchanged between two similar or identical molecules of double-stranded or single-stranded nucleic acids (usually DNA, as in cellular organisms, but it may also be RNA in viruses). Homologous recombination is widely used by cells to accurately repair harmful DNA breaks that occur on both strands of DNA, known as double-strand breaks (DSB), in a process called homologous recombinational repair (HRR). Homologous recombination also produces new combinations of DNA sequences during meiosis, the process by which eukaryotes make gamete cells, like sperm and egg cells in animals. These new combinations of DNA represent genetic variation in offspring, which in turn enables populations to adapt during the course of evolution. Homologous recombination is also used in horizontal gene transfer to exchange genetic material between different strains and species of bacteria and viruses. Horizontal gene transfer is the primary mechanism for the spread of antibiotic resistance in bacteria. Although homologous recombination varies widely among different organisms and cell types, for double-stranded DNA (dsDNA) most forms involve the same basic steps. After a double-strand break occurs, sections of DNA around the 5' ends of the break are cut away in a process called resection. In the strand invasion step that follows, an overhanging 3' end of the broken DNA molecule then "invades" a similar or identical DNA molecule that is not broken. After strand invasion, the further sequence of events may follow either of two main pathways discussed below (see Models): the DSBR (double-strand break repair) pathway or the SDSA (synthesis-dependent strand annealing) pathway. Homologous recombination that occurs during DNA repair tends to result in non-crossover products, in effect restoring the damaged DNA molecule as it existed before the double-strand break. Homologous recombination is conserved across all three domains of life as well as DNA and RNA viruses, suggesting that it is a nearly universal biological mechanism. The discovery of genes for homologous recombination in protists—a diverse group of eukaryotic microorganisms—has been interpreted as evidence that homologous recombination emerged early in the evolution of eukaryotes. Since their dysfunction has been strongly associated with increased susceptibility to several types of cancer, the proteins that facilitate homologous recombination are topics of active research. Homologous recombination is also used in gene targeting, a technique for introducing genetic changes into target organisms. For their development of this technique, Mario Capecchi, Martin Evans and Oliver Smithies were awarded the 2007 Nobel Prize for Physiology or Medicine; Capecchi and Smithies independently discovered applications to mouse embryonic stem cells; however, the highly conserved mechanisms underlying the DSB repair model, including uniform homologous integration of transformed DNA (gene therapy), were first shown in plasmid experiments by Orr-Weaver, Szostak and Rothstein. Research on plasmid-induced DSBs, using γ-irradiation in the 1970s-1980s, led to later experiments using endonucleases (e.g. I-SceI) to cut chromosomes for genetic engineering of mammalian cells, where nonhomologous recombination is more frequent than in yeast. 
History and discovery In the early 1900s, William Bateson and Reginald Punnett found an exception to one of the principles of inheritance originally described by Gregor Mendel in the 1860s. In contrast to Mendel's notion that traits are independently assorted when passed from parent to child—for example that a cat's hair color and its tail length are inherited independent of each other—Bateson and Punnett showed that certain genes associated with physical traits can be inherited together, or genetically linked. In 1911, after observing that linked traits could on occasion be inherited separately, Thomas Hunt Morgan suggested that "crossovers" can occur between linked genes, where one of the linked genes physically crosses over to a different chromosome. Two decades later, Barbara McClintock and Harriet Creighton demonstrated that chromosomal crossover occurs during meiosis, the process of cell division by which sperm and egg cells are made. Within the same year as McClintock's discovery, Curt Stern showed that crossing over—later called "recombination"—could also occur in somatic cells like white blood cells and skin cells that divide through mitosis. In 1947, the microbiologist Joshua Lederberg showed that bacteria—which had been assumed to reproduce only asexually through binary fission—are capable of genetic recombination, which is more similar to sexual reproduction. This work established E. coli as a model organism in genetics, and helped Lederberg win the 1958 Nobel Prize in Physiology or Medicine. Building on studies in fungi, in 1964 Robin Holliday proposed a model for recombination in meiosis which introduced key details of how the process can work, including the exchange of material between chromosomes through Holliday junctions. In 1983, Jack Szostak and colleagues presented a model now known as the DSBR pathway, which accounted for observations not explained by the Holliday model. During the next decade, experiments in Drosophila, budding yeast and mammalian cells led to the emergence of other models of homologous recombination, called SDSA pathways, which do not always rely on Holliday junctions. Much of the later work identifying proteins involved in the process and determining their mechanisms has been performed by a number of individuals including James Haber, Patrick Sung, Stephen Kowalczykowski, and others. In eukaryotes Homologous recombination (HR) is essential to cell division in eukaryotes like plants, animals, fungi and protists. Homologous recombination repairs double-strand breaks in DNA caused by ionizing radiation or DNA-damaging chemicals. Left unrepaired, these double-strand breaks can cause large-scale rearrangement of chromosomes in somatic cells, which can in turn lead to cancer. In addition to repairing DNA, homologous recombination also helps produce genetic diversity when cells divide in meiosis to become specialized gamete cells—sperm or egg cells in animals, pollen or ovules in plants, and spores in fungi. It does so by facilitating chromosomal crossover, in which regions of similar but not identical DNA are exchanged between homologous chromosomes. This creates new, possibly beneficial combinations of genes, which can give offspring an evolutionary advantage. Chromosomal crossover often begins when a protein called Spo11 makes a targeted double-strand break in DNA. 
These sites are non-randomly located on the chromosomes, usually in intergenic promoter regions and preferentially in GC-rich domains. These double-strand break sites often occur at recombination hotspots, regions in chromosomes that are about 1,000–2,000 base pairs in length and have high rates of recombination. The absence of a recombination hotspot between two genes on the same chromosome often means that those genes will be inherited by future generations in equal proportion. This represents linkage between the two genes greater than would be expected from genes that independently assort during meiosis. Timing within the mitotic cell cycle Double-strand breaks can be repaired through homologous recombination, polymerase theta-mediated end joining (TMEJ) or through non-homologous end joining (NHEJ). NHEJ is a DNA repair mechanism which, unlike homologous recombination, does not require a long homologous sequence to guide repair. Whether homologous recombination or NHEJ is used to repair double-strand breaks is largely determined by the phase of the cell cycle. Homologous recombination repairs DNA before the cell enters mitosis (M phase). It occurs during and shortly after DNA replication, in the S and G2 phases of the cell cycle, when sister chromatids are more easily available. Compared to homologous chromosomes, which are similar to another chromosome but often have different alleles, sister chromatids are an ideal template for homologous recombination because they are an identical copy of a given chromosome. When no homologous template is available or when the template cannot be accessed due to a defect in homologous recombination, the break is repaired via TMEJ in the S and G2 phases of the cell cycle. In contrast to homologous recombination and TMEJ, NHEJ is predominant in the G1 phase of the cell cycle, when the cell is growing but not yet ready to divide. It occurs less frequently after the G1 phase, but maintains at least some activity throughout the cell cycle. The mechanisms that regulate homologous recombination and NHEJ throughout the cell cycle vary widely between species. Cyclin-dependent kinases (CDKs), which modify the activity of other proteins by adding phosphate groups to (that is, phosphorylating) them, are important regulators of homologous recombination in eukaryotes. When DNA replication begins in budding yeast, the cyclin-dependent kinase Cdc28 begins homologous recombination by phosphorylating the Sae2 protein. After being so activated by the addition of a phosphate, Sae2 causes a clean cut to be made near a double-strand break in DNA. It is unclear if the endonuclease responsible for this cut is Sae2 itself or another protein, Mre11. This allows a protein complex including Mre11, known as the MRX complex, to bind to DNA and begin a series of protein-driven reactions that exchange material between two DNA molecules. The role of chromatin The packaging of eukaryotic DNA into chromatin presents a barrier to all DNA-based processes that require recruitment of enzymes to their sites of action. To allow homologous recombination (HR) DNA repair, the chromatin must be remodeled. In eukaryotes, ATP-dependent chromatin remodeling complexes and histone-modifying enzymes are two predominant factors employed to accomplish this remodeling process. Chromatin relaxation occurs rapidly at the site of DNA damage. 
In one of the earliest steps, the stress-activated protein kinase, c-Jun N-terminal kinase (JNK), phosphorylates SIRT6 on serine 10 in response to double-strand breaks or other DNA damage. This post-translational modification facilitates the mobilization of SIRT6 to DNA damage sites, and is required for efficient recruitment of poly (ADP-ribose) polymerase 1 (PARP1) to DNA break sites and for efficient repair of DSBs. PARP1 protein starts to appear at DNA damage sites in less than a second, with half maximum accumulation within 1.6 seconds after the damage occurs. Next the chromatin remodeler Alc1 quickly attaches to the product of PARP1 action, a poly-ADP ribose chain, and Alc1 completes arrival at the DNA damage within 10 seconds of the occurrence of the damage. About half of the maximum chromatin relaxation, presumably due to action of Alc1, occurs by 10 seconds. This then allows recruitment of the DNA repair enzyme MRE11, to initiate DNA repair, within 13 seconds. γH2AX, the phosphorylated form of H2AX is also involved in the early steps leading to chromatin decondensation after DNA double-strand breaks. The histone variant H2AX constitutes about 10% of the H2A histones in human chromatin. γH2AX (H2AX phosphorylated on serine 139) can be detected as soon as 20 seconds after irradiation of cells (with DNA double-strand break formation), and half maximum accumulation of γH2AX occurs in one minute. The extent of chromatin with phosphorylated γH2AX is about two million base pairs at the site of a DNA double-strand break. γH2AX does not, itself, cause chromatin decondensation, but within 30 seconds of irradiation, RNF8 protein can be detected in association with γH2AX. RNF8 mediates extensive chromatin decondensation, through its subsequent interaction with CHD4, a component of the nucleosome remodeling and deacetylase complex NuRD. After undergoing relaxation subsequent to DNA damage, followed by DNA repair, chromatin recovers to a compaction state close to its pre-damage level after about 20 min. Homologous recombination during meiosis In vertebrates the locations at which recombination occurs are determined by the binding locations of PRDM9, a protein which recognizes a specific sequence motif by its zinc finger array. At these sites, another protein, SPO11 catalyses recombination-initiating double strand breaks (DSBs), a subset of which are repaired by recombination with the homologous chromosome. PRDM9 deposits both H3K4me3 and H3K36me3 histone methylation marks at the sites it binds, and this methyltransferase activity is essential for its role in DSB positioning. Following their formation, DSB sites are processed by resection, resulting in single-stranded DNA (ssDNA) that becomes decorated with DMC1. From mid-zygotene to early pachytene, as part of the recombinational repair process, DMC1 dissociates from the ssDNA and counts decrease until all breaks (except those on the XY chromosomes) are repaired at late pachytene. Several other proteins are involved in this process, including ZCWPW1, the first protein directly positioned by PRDM9's dual histone marks. ZCWPW1 is important for homologous DSB repair, not positioning. Models Two primary models for how homologous recombination repairs double-strand breaks in DNA are the double-strand break repair (DSBR) pathway (sometimes called the double Holliday junction model) and the synthesis-dependent strand annealing (SDSA) pathway. The two pathways are similar in their first several steps. 
After a double-strand break occurs, the MRX complex (MRN complex in humans) binds to DNA on either side of the break. Next a resection takes place, in which DNA around the 5' ends of the break is cut back. This happens in two distinct steps: first the MRX complex recruits the Sae2 protein, and these two proteins trim back the 5' ends on either side of the break to create short 3' overhangs of single-strand DNA; in the second step, 5'→3' resection is continued by the Sgs1 helicase and the Exo1 and Dna2 nucleases. As a helicase, Sgs1 "unzips" the double-strand DNA, while the nuclease activity of Exo1 and Dna2 allows them to cut the single-stranded DNA produced by Sgs1. The RPA protein, which has high affinity for single-stranded DNA, then binds the 3' overhangs. With the help of several other proteins that mediate the process, the Rad51 protein (and Dmc1, in meiosis) then forms a filament of nucleic acid and protein on the single strand of DNA coated with RPA. This nucleoprotein filament then begins searching for DNA sequences similar to that of the 3' overhang. After finding such a sequence, the single-stranded nucleoprotein filament moves into (invades) the similar or identical recipient DNA duplex in a process called strand invasion. In cells that divide through mitosis, the recipient DNA duplex is generally a sister chromatid, which is identical to the damaged DNA molecule and provides a template for repair. In meiosis, however, the recipient DNA tends to be from a similar but not necessarily identical homologous chromosome. A displacement loop (D-loop) is formed during strand invasion between the invading 3' overhang strand and the homologous chromosome. After strand invasion, a DNA polymerase extends the end of the invading 3' strand by synthesizing new DNA. This changes the D-loop to a cross-shaped structure known as a Holliday junction. Following this, more DNA synthesis occurs on the invading strand (i.e., one of the original 3' overhangs), effectively restoring the strand on the homologous chromosome that was displaced during strand invasion. DSBR pathway After the stages of resection, strand invasion and DNA synthesis, the DSBR and SDSA pathways become distinct. The DSBR pathway is unique in that the second 3' overhang (which was not involved in strand invasion) also forms a Holliday junction with the homologous chromosome. The double Holliday junctions are then converted into recombination products by nicking endonucleases, a type of restriction endonuclease which cuts only one DNA strand. The DSBR pathway commonly results in crossover, though it can sometimes result in non-crossover products; the ability of a broken DNA molecule to collect sequences from separated donor loci was shown in mitotic budding yeast using plasmids or endonuclease induction of chromosomal events. Because of this tendency for chromosomal crossover, the DSBR pathway is a likely model of how crossover homologous recombination occurs during meiosis. Whether recombination in the DSBR pathway results in chromosomal crossover is determined by how the double Holliday junction is cut, or "resolved". Chromosomal crossover will occur if one Holliday junction is cut on the crossing strand and the other Holliday junction is cut on the non-crossing strand (in Figure 5, along the horizontal purple arrowheads at one Holliday junction and along the vertical orange arrowheads at the other). 
Alternatively, if the two Holliday junctions are cut on the crossing strands (along the horizontal purple arrowheads at both Holliday junctions in Figure 5), then chromosomes without crossover will be produced. SDSA pathway Homologous recombination via the SDSA pathway occurs in cells that divide through mitosis and meiosis and results in non-crossover products. In this model, the invading 3' strand is extended along the recipient DNA duplex by a DNA polymerase, and is released as the Holliday junction between the donor and recipient DNA molecules slides in a process called branch migration. The newly synthesized 3' end of the invading strand is then able to anneal to the other 3' overhang in the damaged chromosome through complementary base pairing. After the strands anneal, a small flap of DNA can sometimes remain. Any such flaps are removed, and the SDSA pathway finishes with the resealing, also known as ligation, of any remaining single-stranded gaps. During mitosis, the major homologous recombination pathway for repairing DNA double-strand breaks appears to be the SDSA pathway (rather than the DSBR pathway). The SDSA pathway produces non-crossover recombinants (Figure 5). During meiosis non-crossover recombinants also occur frequently and these appear to arise mainly by the SDSA pathway as well. Non-crossover recombination events occurring during meiosis likely reflect instances of repair of DNA double-strand damages or other types of DNA damages. SSA pathway The single-strand annealing (SSA) pathway of homologous recombination repairs double-strand breaks between two repeat sequences. The SSA pathway is unique in that it does not require a separate similar or identical molecule of DNA, like the DSBR or SDSA pathways of homologous recombination. Instead, the SSA pathway only requires a single DNA duplex, and uses the repeat sequences as the identical sequences that homologous recombination needs for repair. The pathway is relatively simple in concept: after two strands of the same DNA duplex are cut back around the site of the double-strand break, the two resulting 3' overhangs then align and anneal to each other, restoring the DNA as a continuous duplex. As DNA around the double-strand break is cut back, the single-stranded 3' overhangs being produced are coated with the RPA protein, which prevents the 3' overhangs from sticking to themselves. A protein called Rad52 then binds each of the repeat sequences on either side of the break, and aligns them to enable the two complementary repeat sequences to anneal. After annealing is complete, leftover non-homologous flaps of the 3' overhangs are cut away by a set of nucleases, known as Rad1/Rad10, which are brought to the flaps by the Saw1 and Slx4 proteins. New DNA synthesis fills in any gaps, and ligation restores the DNA duplex as two continuous strands. The DNA sequence between the repeats is always lost, as is one of the two repeats. The SSA pathway is considered mutagenic since it results in such deletions of genetic material. BIR pathway During DNA replication, double-strand breaks can sometimes be encountered at replication forks as DNA helicase unzips the template strand. These defects are repaired in the break-induced replication (BIR) pathway of homologous recombination. The precise molecular mechanisms of the BIR pathway remain unclear. Three proposed mechanisms have strand invasion as an initial step, but they differ in how they model the migration of the D-loop and later phases of recombination. 
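As a worked illustration of the resolution rule described for the DSBR pathway above, the short Python sketch below simply enumerates the four possible ways of cutting the two Holliday junctions. It is purely illustrative and not drawn from the article's sources: the function name and labels are invented, and the assumption that cutting both junctions in the same orientation yields a non-crossover follows the standard reading of the model.

from itertools import product

# Toy model of double Holliday junction resolution in the DSBR pathway.
# Each junction is cut either on the "crossing" or the "non-crossing" strands;
# per the model, a crossover chromosome results only when the two junctions
# are cut in different orientations.
def resolution_outcome(cut_one, cut_two):
    return "crossover" if cut_one != cut_two else "non-crossover"

for cut_one, cut_two in product(("crossing", "non-crossing"), repeat=2):
    print(f"junction 1 cut on {cut_one} strands, junction 2 cut on {cut_two} strands -> "
          f"{resolution_outcome(cut_one, cut_two)}")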
The BIR pathway can also help to maintain the length of telomeres (regions of DNA at the end of eukaryotic chromosomes) in the absence of (or in cooperation with) telomerase. Without working copies of the enzyme telomerase, telomeres typically shorten with each cycle of mitosis, which eventually blocks cell division and leads to senescence. In budding yeast cells where telomerase has been inactivated through mutations, two types of "survivor" cells have been observed to avoid senescence longer than expected by elongating their telomeres through BIR pathways. Maintaining telomere length is critical for cell immortalization, a key feature of cancer. Most cancers maintain telomeres by upregulating telomerase. However, in several types of human cancer, a BIR-like pathway helps to sustain some tumors by acting as an alternative mechanism of telomere maintenance. This fact has led scientists to investigate whether such recombination-based mechanisms of telomere maintenance could thwart anti-cancer drugs like telomerase inhibitors. In bacteria Homologous recombination is a major DNA repair process in bacteria. It is also important for producing genetic diversity in bacterial populations, although the process differs substantially from meiotic recombination, which repairs DNA damages and brings about diversity in eukaryotic genomes. Homologous recombination has been most studied and is best understood for Escherichia coli. Double-strand DNA breaks in bacteria are repaired by the RecBCD pathway of homologous recombination. Breaks that occur on only one of the two DNA strands, known as single-strand gaps, are thought to be repaired by the RecF pathway. Both the RecBCD and RecF pathways include a series of reactions known as branch migration, in which single DNA strands are exchanged between two intercrossed molecules of duplex DNA, and resolution, in which those two intercrossed molecules of DNA are cut apart and restored to their normal double-stranded state. RecBCD pathway The RecBCD pathway is the main recombination pathway used in many bacteria to repair double-strand breaks in DNA, and the proteins are found in a broad array of bacteria. These double-strand breaks can be caused by UV light and other radiation, as well as chemical mutagens. Double-strand breaks may also arise by DNA replication through a single-strand nick or gap. Such a situation causes what is known as a collapsed replication fork and is fixed by several pathways of homologous recombination including the RecBCD pathway. In this pathway, a three-subunit enzyme complex called RecBCD initiates recombination by binding to a blunt or nearly blunt end of a break in double-strand DNA. After RecBCD binds the DNA end, the RecB and RecD subunits begin unzipping the DNA duplex through helicase activity. The RecB subunit also has a nuclease domain, which cuts the single strand of DNA that emerges from the unzipping process. This unzipping continues until RecBCD encounters a specific nucleotide sequence (5'-GCTGGTGG-3') known as a Chi site. Upon encountering a Chi site, the activity of the RecBCD enzyme changes drastically. DNA unwinding pauses for a few seconds and then resumes at roughly half the initial speed. This is likely because the slower RecB helicase unwinds the DNA after Chi, rather than the faster RecD helicase, which unwinds the DNA before Chi. 
Recognition of the Chi site also changes the RecBCD enzyme so that it cuts the DNA strand with Chi and begins loading multiple RecA proteins onto the single-stranded DNA with the newly generated 3' end. The resulting RecA-coated nucleoprotein filament then searches out similar sequences of DNA on a homologous chromosome. The search process induces stretching of the DNA duplex, which enhances homology recognition (a mechanism termed conformational proofreading). Upon finding such a sequence, the single-stranded nucleoprotein filament moves into the homologous recipient DNA duplex in a process called strand invasion. The invading 3' overhang causes one of the strands of the recipient DNA duplex to be displaced, to form a D-loop. If the D-loop is cut, another swapping of strands forms a cross-shaped structure called a Holliday junction. Resolution of the Holliday junction by some combination of RuvABC or RecG can produce two recombinant DNA molecules with reciprocal genetic types, if the two interacting DNA molecules differ genetically. Alternatively, the invading 3’ end near Chi can prime DNA synthesis and form a replication fork. This type of resolution produces only one type of recombinant (non-reciprocal). RecF pathway Bacteria appear to use the RecF pathway of homologous recombination to repair single-strand gaps in DNA. When the RecBCD pathway is inactivated by mutations and additional mutations inactivate the SbcCD and ExoI nucleases, the RecF pathway can also repair DNA double-strand breaks. In the RecF pathway the RecQ helicase unwinds the DNA and the RecJ nuclease degrades the strand with a 5' end, leaving the strand with the 3' end intact. RecA protein binds to this strand and is either aided by the RecF, RecO, and RecR proteins or stabilized by them. The RecA nucleoprotein filament then searches for a homologous DNA and exchanges places with the identical or nearly identical strand in the homologous DNA. Although the proteins and specific mechanisms involved in their initial phases differ, the two pathways are similar in that they both require single-stranded DNA with a 3' end and the RecA protein for strand invasion. The pathways are also similar in their phases of branch migration, in which the Holliday junction slides in one direction, and resolution, in which the Holliday junctions are cleaved apart by enzymes. The alternative, non-reciprocal type of resolution may also occur by either pathway. Branch migration Immediately after strand invasion, the Holliday junction moves along the linked DNA during the branch migration process. It is in this movement of the Holliday junction that base pairs between the two homologous DNA duplexes are exchanged. To catalyze branch migration, the RuvA protein first recognizes and binds to the Holliday junction and recruits the RuvB protein to form the RuvAB complex. Two sets of the RuvB protein, which each form a ring-shaped ATPase, are loaded onto opposite sides of the Holliday junction, where they act as twin pumps that provide the force for branch migration. Between those two rings of RuvB, two sets of the RuvA protein assemble in the center of the Holliday junction such that the DNA at the junction is sandwiched between each set of RuvA. The strands of both DNA duplexes—the "donor" and the "recipient" duplexes—are unwound on the surface of RuvA as they are guided by the protein from one duplex to the other. 
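Because the Chi site recognized by RecBCD is a fixed octamer (5'-GCTGGTGG-3'), locating candidate Chi sites on one strand of a sequence reduces to simple string matching. The following minimal Python sketch is offered only as an illustration; the example sequence and the function name are invented, and the snippet says nothing about RecBCD's actual enzymology.

CHI_SITE = "GCTGGTGG"  # 5'-GCTGGTGG-3', the octamer recognized by RecBCD

def find_chi_sites(sequence, motif=CHI_SITE):
    """Return 0-based start positions of every (possibly overlapping) match."""
    sequence = sequence.upper()
    positions, start = [], sequence.find(motif)
    while start != -1:
        positions.append(start)
        start = sequence.find(motif, start + 1)
    return positions

example = "ATTCGCTGGTGGAACGTTAGCTGGTGGCTA"  # invented example sequence
print(find_chi_sites(example))  # [4, 19]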
Resolution In the resolution phase of recombination, any Holliday junctions formed by the strand invasion process are cut, thereby restoring two separate DNA molecules. This cleavage is done by RuvAB complex interacting with RuvC, which together form the RuvABC complex. RuvC is an endonuclease that cuts the degenerate sequence 5'-(A/T)TT(G/C)-3'. The sequence is found frequently in DNA, about once every 64 nucleotides. Before cutting, RuvC likely gains access to the Holliday junction by displacing one of the two RuvA tetramers covering the DNA there. Recombination results in either "splice" or "patch" products, depending on how RuvC cleaves the Holliday junction. Splice products are crossover products, in which there is a rearrangement of genetic material around the site of recombination. Patch products, on the other hand, are non-crossover products in which there is no such rearrangement and there is only a "patch" of hybrid DNA in the recombination product. Facilitating genetic transfer Homologous recombination is an important method of integrating donor DNA into a recipient organism's genome in horizontal gene transfer, the process by which an organism incorporates foreign DNA from another organism without being the offspring of that organism. Homologous recombination requires incoming DNA to be highly similar to the recipient genome, and so horizontal gene transfer is usually limited to similar bacteria. Studies in several species of bacteria have established that there is a log-linear decrease in recombination frequency with increasing difference in sequence between host and recipient DNA. In bacterial conjugation, where DNA is transferred between bacteria through direct cell-to-cell contact, homologous recombination helps integrate foreign DNA into the host genome via the RecBCD pathway. The RecBCD enzyme promotes recombination after DNA is converted from single-strand DNA–in which form it originally enters the bacterium–to double-strand DNA during replication. The RecBCD pathway is also essential for the final phase of transduction, a type of horizontal gene transfer in which DNA is transferred from one bacterium to another by a virus. Foreign, bacterial DNA is sometimes misincorporated in the capsid head of bacteriophage virus particles as DNA is packaged into new bacteriophages during viral replication. When these new bacteriophages infect other bacteria, DNA from the previous host bacterium is injected into the new bacterial host as double-strand DNA. The RecBCD enzyme then incorporates this double-strand DNA into the genome of the new bacterial host. Bacterial transformation Natural bacterial transformation involves the transfer of DNA from a donor bacterium to a recipient bacterium, where both donor and recipient are ordinarily of the same species. Transformation, unlike bacterial conjugation and transduction, depends on numerous bacterial gene products that specifically interact to perform this process. Thus transformation is clearly a bacterial adaptation for DNA transfer. In order for a bacterium to bind, take up and integrate donor DNA into its resident chromosome by homologous recombination, it must first enter a special physiological state termed competence. The RecA/Rad51/DMC1 gene family plays a central role in homologous recombination during bacterial transformation as it does during eukaryotic meiosis and mitosis. 
For instance, the RecA protein is essential for transformation in Bacillus subtilis and Streptococcus pneumoniae, and expression of the RecA gene is induced during the development of competence for transformation in these organisms. As part of the transformation process, the RecA protein interacts with entering single-stranded DNA (ssDNA) to form RecA/ssDNA nucleofilaments that scan the resident chromosome for regions of homology and bring the entering ssDNA to the corresponding region, where strand exchange and homologous recombination occur. Thus the process of homologous recombination during bacterial transformation has fundamental similarities to homologous recombination during meiosis. In viruses Homologous recombination occurs in several groups of viruses. In DNA viruses such as herpesvirus, recombination occurs through a break-and-rejoin mechanism like in bacteria and eukaryotes. There is also evidence for recombination in some RNA viruses, specifically positive-sense ssRNA viruses like retroviruses, picornaviruses, and coronaviruses. There is controversy over whether homologous recombination occurs in negative-sense ssRNA viruses like influenza. In RNA viruses, homologous recombination can be either precise or imprecise. In the precise type of RNA-RNA recombination, there is no difference between the two parental RNA sequences and the resulting crossover RNA region. Because of this, it is often difficult to determine the location of crossover events between two recombining RNA sequences. In imprecise RNA homologous recombination, the crossover region has some difference with the parental RNA sequences – caused by either addition, deletion, or other modification of nucleotides. The level of precision in crossover is controlled by the sequence context of the two recombining strands of RNA: sequences rich in adenine and uracil decrease crossover precision. Homologous recombination is important in facilitating viral evolution. For example, if the genomes of two viruses with different disadvantageous mutations undergo recombination, then they may be able to regenerate a fully functional genome. Alternatively, if two similar viruses have infected the same host cell, homologous recombination can allow those two viruses to swap genes and thereby evolve more potent variations of themselves. Homologous recombination is the proposed mechanism whereby the DNA virus human herpesvirus-6 integrates into human telomeres. When two or more viruses, each containing lethal genomic damage, infect the same host cell, the virus genomes can often pair with each other and undergo homologous recombinational repair to produce viable progeny. This process, known as multiplicity reactivation, has been studied in several bacteriophages, including phage T4. Enzymes employed in recombinational repair in phage T4 are functionally homologous to enzymes employed in bacterial and eukaryotic recombinational repair. In particular, with regard to a gene necessary for the strand exchange reaction, a key step in homologous recombinational repair, there is functional homology from viruses to humans (i. e. uvsX in phage T4; recA in E. coli and other bacteria, and rad51 and dmc1 in yeast and other eukaryotes, including humans). Multiplicity reactivation has also been demonstrated in numerous pathogenic viruses. Coronavirus Coronaviruses are capable of genetic recombination when at least two viral genomes are present in the same infected cell. 
RNA recombination appears to be a major driving force in determining (1) genetic variability within a CoV species, (2) the capability of a CoV species to jump from one host to another, and (3) infrequently, the emergence of novel CoVs. The mechanism of recombination in CoVs likely involves template switching during genome replication. Recombination in RNA viruses appears to be an adaptation for coping with genome damage. The pandemic SARS-CoV-2's entire receptor binding motif appears to have been introduced through recombination from coronaviruses of pangolins. Such a recombination event may have been a critical step in the evolution of SARS-CoV-2's capability to infect humans. Recombination events are likely key steps in the evolutionary process that leads to the emergence of new human coronaviruses. During the COVID-19 pandemic in 2020, many genomic sequences of Australian SARS-CoV-2 isolates have deletions or mutations (29742G>A or 29742G>U; "G19A" or "G19U") in the Coronavirus 3′ stem-loop II-like motif (s2m), an RNA motif in the 3' untranslated region of the viral genome, suggesting that RNA recombination events may have occurred in s2m of SARS-CoV-2. Based on computational analysis of 1319 Australian SARS-CoV-2 sequences using the Recco algorithm (https://recco.bioinf.mpi-inf.mpg.de/), 29742G ("G19"), 29744G ("G21"), and 29751G ("G28") were predicted as recombination hotspots. The SARS-CoV-2 outbreak on the Diamond Princess cruise ship most likely originated either from a single person infected with a virus variant identical to the Wuhan WIV04 isolates, or simultaneously with another primary case infected with a virus containing the 11083G > T mutation. Linkage disequilibrium analysis confirmed that RNA recombination with the 11083G > T mutation also contributed to the increase of mutations among the viral progeny. The findings indicate that the 11083G > T mutation of SARS-CoV-2 spread during shipboard quarantine and arose through de novo RNA recombination under positive selection pressure. In addition, in three patients on this cruise, two mutations, 29736G > T and 29751G > T ("G13" and "G28"), were also located in the Coronavirus 3′ stem-loop II-like motif (s2m), as "G28" was predicted as a recombination hotspot in Australian SARS-CoV-2 mutants. Although s2m is considered an RNA motif highly conserved among many coronavirus species, this result also suggests that s2m of SARS-CoV-2 is rather an RNA recombination/mutation hotspot. Effects of dysfunction Without proper homologous recombination, chromosomes often incorrectly align for the first phase of cell division in meiosis. This causes chromosomes to fail to properly segregate in a process called nondisjunction. In turn, nondisjunction can cause sperm and ova to have too few or too many chromosomes. Down's syndrome, which is caused by an extra copy of chromosome 21, is one of many abnormalities that result from such a failure of homologous recombination in meiosis. Deficiencies in homologous recombination have been strongly linked to cancer formation in humans. For example, each of the cancer-related diseases Bloom syndrome, Werner syndrome and Rothmund–Thomson syndrome is caused by malfunctioning copies of RecQ helicase genes involved in the regulation of homologous recombination: BLM, WRN and RECQL4, respectively. In the cells of Bloom's syndrome patients, who lack a working copy of the BLM protein, there is an elevated rate of homologous recombination. 
Experiments in mice deficient in BLM have suggested that the mutation gives rise to cancer through a loss of heterozygosity caused by increased homologous recombination. A loss in heterozygosity refers to the loss of one of two versions—or alleles—of a gene. If one of the lost alleles helps to suppress tumors, like the gene for the retinoblastoma protein for example, then the loss of heterozygosity can lead to cancer. Decreased rates of homologous recombination cause inefficient DNA repair, which can also lead to cancer. This is the case with BRCA1 and BRCA2, two similar tumor suppressor genes whose malfunctioning has been linked with considerably increased risk for breast and ovarian cancer. Cells missing BRCA1 and BRCA2 have a decreased rate of homologous recombination and increased sensitivity to ionizing radiation, suggesting that decreased homologous recombination leads to increased susceptibility to cancer. Because the only known function of BRCA2 is to help initiate homologous recombination, researchers have speculated that more detailed knowledge of BRCA2's role in homologous recombination may be the key to understanding the causes of breast and ovarian cancer. Tumours with a homologous recombination deficiency (including BRCA defects) are described as HRD-positive. Evolutionary conservation While the pathways can mechanistically vary, the ability of organisms to perform homologous recombination is universally conserved across all domains of life. Based on the similarity of their amino acid sequences, homologs of a number of proteins can be found in multiple domains of life indicating that they evolved a long time ago, and have since diverged from common ancestral proteins. RecA recombinase family members are found in almost all organisms with RecA in bacteria, Rad51 and DMC1 in eukaryotes, RadA in archaea, and UvsX in T4 phage. Related single stranded binding proteins that are important for homologous recombination, and many other processes, are also found in all domains of life. Rad54, Mre11, Rad50, and a number of other proteins are also found in both archaea and eukaryotes. The RecA recombinase family The proteins of the RecA recombinase family of proteins are thought to be descended from a common ancestral recombinase. The RecA recombinase family contains RecA protein from bacteria, the Rad51 and Dmc1 proteins from eukaryotes, and RadA from archaea, and the recombinase paralog proteins. Studies modeling the evolutionary relationships between the Rad51, Dmc1 and RadA proteins indicate that they are monophyletic, or that they share a common molecular ancestor. Within this protein family, Rad51 and Dmc1 are grouped together in a separate clade from RadA. One of the reasons for grouping these three proteins together is that they all possess a modified helix-turn-helix motif, which helps the proteins bind to DNA, toward their N-terminal ends. An ancient gene duplication event of a eukaryotic RecA gene and subsequent mutation has been proposed as a likely origin of the modern RAD51 and DMC1 genes. The proteins generally share a long conserved region known as the RecA/Rad51 domain. Within this protein domain are two sequence motifs, Walker A motif and Walker B motif. The Walker A and B motifs allow members of the RecA/Rad51 protein family to engage in ATP binding and ATP hydrolysis. 
Meiosis-specific proteins The discovery of Dmc1 in several species of Giardia, one of the earliest protists to diverge as a eukaryote, suggests that meiotic homologous recombination—and thus meiosis itself—emerged very early in eukaryotic evolution. In addition to research on Dmc1, studies on the Spo11 protein have provided information on the origins of meiotic recombination. Spo11, a type II topoisomerase, can initiate homologous recombination in meiosis by making targeted double-strand breaks in DNA. Phylogenetic trees based on the sequence of genes similar to SPO11 in animals, fungi, plants, protists and archaea have led scientists to believe that the version of Spo11 currently found in eukaryotes emerged in the last common ancestor of eukaryotes and archaea. Technological applications Gene targeting Many methods for introducing DNA sequences into organisms to create recombinant DNA and genetically modified organisms use the process of homologous recombination. Also called gene targeting, the method is especially common in yeast and mouse genetics. The gene targeting method in knockout mice uses mouse embryonic stem cells to deliver artificial genetic material (mostly of therapeutic interest), which represses the target gene of the mouse by the principle of homologous recombination. The mouse thereby acts as a working model to understand the effects of a specific mammalian gene. In recognition of their discovery of how homologous recombination can be used to introduce genetic modifications in mice through embryonic stem cells, Mario Capecchi, Martin Evans and Oliver Smithies were awarded the 2007 Nobel Prize for Physiology or Medicine. Advances in gene targeting technologies that hijack the homologous recombination mechanics of cells are now leading to the development of a new wave of more accurate, isogenic human disease models. These engineered human cell models are thought to more accurately reflect the genetics of human diseases than their mouse model predecessors. This is largely because mutations of interest are introduced into endogenous genes, just as they occur in real patients, and because they are based on human genomes rather than rat genomes. Furthermore, certain technologies enable the knock-in of a particular mutation rather than just the knock-outs associated with older gene targeting technologies. Protein engineering Protein engineering with homologous recombination develops chimeric proteins by swapping fragments between two parental proteins. These techniques exploit the fact that recombination can introduce a high degree of sequence diversity while preserving a protein's ability to fold into its tertiary structure, or three-dimensional shape. This stands in contrast to other protein engineering techniques, like random point mutagenesis, in which the probability of maintaining protein function declines exponentially with increasing amino acid substitutions. The chimeras produced by recombination techniques are able to maintain their ability to fold because their swapped parental fragments are structurally and evolutionarily conserved. These recombinable "building blocks" preserve structurally important interactions like points of physical contact between different amino acids in the protein's structure. Computational methods like SCHEMA and statistical coupling analysis can be used to identify structural subunits suitable for recombination. Techniques that rely on homologous recombination have been used to engineer new proteins. 
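To make the idea of recombining conserved "building blocks" concrete, the Python sketch below enumerates the chimeras obtainable from two parents and a fixed set of crossover points. It is a toy illustration only: the parent sequences and crossover positions are invented, and it is not an implementation of SCHEMA or statistical coupling analysis, which choose crossover points from structural and evolutionary information rather than arbitrarily.

from itertools import product

def make_chimeras(parent_a, parent_b, crossovers):
    """Enumerate all chimeras built by taking each block from parent A or parent B."""
    assert len(parent_a) == len(parent_b), "toy model assumes equal-length parents"
    bounds = [0, *crossovers, len(parent_a)]
    blocks_a = [parent_a[i:j] for i, j in zip(bounds, bounds[1:])]
    blocks_b = [parent_b[i:j] for i, j in zip(bounds, bounds[1:])]
    return ["".join(a if pick == "A" else b
                    for pick, a, b in zip(choice, blocks_a, blocks_b))
            for choice in product("AB", repeat=len(blocks_a))]

parent_a = "MKTLLVAGFE"  # invented ten-residue "parent" sequences
parent_b = "MRSLIVGGYD"
for chimera in make_chimeras(parent_a, parent_b, crossovers=[4, 7]):
    print(chimera)  # 2**3 = 8 chimeras, including both parents themselves

With three blocks the library contains eight sequences; in real libraries the crossover points are placed so that contacting residues stay within one block, consistent with the point above about preserving structurally important interactions.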
In a study published in 2007, researchers were able to create chimeras of two enzymes involved in the biosynthesis of isoprenoids, a diverse class of compounds including hormones, visual pigments and certain pheromones. The chimeric proteins acquired an ability to catalyze an essential reaction in isoprenoid biosynthesis—one of the most diverse pathways of biosynthesis found in nature—that was absent in the parent proteins. Protein engineering through recombination has also produced chimeric enzymes with new functions in members of a group of proteins known as the cytochrome P450 family, which in humans is involved in detoxifying foreign compounds like drugs, food additives and preservatives. Cancer therapy Homologous recombination-proficient (HRP) cancer cells are able to repair the DNA damage caused by chemotherapy agents such as cisplatin. Thus, HRP cancers are difficult to treat. Studies suggest that homologous recombination can be targeted via c-Abl inhibition. Cancer cells with BRCA mutations have deficiencies in homologous recombination, and drugs to exploit those deficiencies have been developed and used successfully in clinical trials. Olaparib, a PARP1 inhibitor, shrank or stopped the growth of tumors from breast, ovarian and prostate cancers caused by mutations in the BRCA1 or BRCA2 genes, which are necessary for HR. When BRCA1 or BRCA2 is absent, other types of DNA repair mechanisms must compensate for the deficiency of HR, such as base-excision repair (BER) for stalled replication forks or non-homologous end joining (NHEJ) for double-strand breaks. By inhibiting BER in an HR-deficient cell, olaparib applies the concept of synthetic lethality to specifically target cancer cells. While PARP1 inhibitors represent a novel approach to cancer therapy, researchers have cautioned that they may prove insufficient for treating late-stage metastatic cancers. Cancer cells can become resistant to a PARP1 inhibitor if they undergo deletions of mutations in BRCA2, undermining the drug's synthetic lethality by restoring cancer cells' ability to repair DNA by HR.
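The synthetic-lethality reasoning behind PARP1 inhibition in BRCA-mutant tumors can be restated as a small decision table. The Python sketch below is a deliberately simplified toy model, invented for illustration and not a description of real repair kinetics: in it, a cell survives only if at least one of the two relevant repair routes remains available.

# Toy decision table: a PARP1 inhibitor blocks base-excision repair (BER),
# while a BRCA1/2 mutation disables homologous recombination (HR). In this
# simplified model a cell dies only when both routes are unavailable, which is
# why PARP inhibition preferentially kills HR-deficient tumor cells.
def cell_fate(brca_functional, parp_inhibited):
    hr_available = brca_functional       # HR needs functional BRCA1/BRCA2
    ber_available = not parp_inhibited   # PARP1 inhibition blocks BER
    return "survives" if (hr_available or ber_available) else "dies"

for brca_functional in (True, False):
    for parp_inhibited in (False, True):
        print(f"BRCA functional={brca_functional}, PARP inhibitor={parp_inhibited}: "
              f"cell {cell_fate(brca_functional, parp_inhibited)}")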
Biology and health sciences
Genetics
Biology
2634789
https://en.wikipedia.org/wiki/Air%20superiority%20fighter
Air superiority fighter
An air superiority fighter (also styled air-superiority fighter) is a fighter aircraft designed to seize control of enemy airspace by establishing tactical dominance (air superiority) over the opposing air force. Air-superiority fighters are primarily tasked to perform aerial combat against agile, lightly armed aircraft (most often enemy fighters) and eliminate any challenge over control of the airspace, although some (e.g. strike fighters) may have a secondary role for air-to-surface attacks. Evolution of the term During World War II and through the Korean War, fighters were classified by their role: heavy fighter, interceptor, escort fighter, night fighter, and so forth. With the development of guided missiles in the 1950s, design diverged between fighters optimized to fight in the beyond visual range (BVR) regime (interceptors), and fighters optimized to fight in the within visual range (WVR) regime (air-superiority fighters). In the United States, the influential proponents of BVR developed fighters with no forward-firing gun, such as the original F-4 Phantom II, as it was thought that they would never need to resort to WVR combat. These aircraft would sacrifice high maneuverability, and instead focus on other performance characteristics, as they presumably would never engage in a dogfight with enemy fighters. The first air-superiority fighters After lessons learned from combat experiences involving modern military air capacity, the U.S. Navy's VFAX/VFX and U.S. Air Force's F-X (Fighter Experimental) programs reassessed their tactical direction, which resulted in the U.S. Navy's F-14 Tomcat and the US Air Force's F-15 Eagle. The two designs were built to achieve air superiority, and significant consideration was given during the development of both aircraft to allow them to excel at the shorter ranges of fighter combat. Both aircraft also serve as interceptors due to their high maximum speed and advanced radars. By contrast, the Soviets (and the succeeding Russian Federation) developed and continue to operate separate types of aircraft: the interceptor MiG-31 and the short-range MiG-29 for air superiority, although the long-range Su-27 can combine the roles of air superiority and interceptor. Evolution of secondary ground-attack capability For the US Navy, the F-14 Tomcat was initially deployed solely as an air-superiority fighter, as well as a fleet defense interceptor and tactical aerial reconnaissance platform. By contrast, the multirole F/A-18 Hornet was designed as a strike fighter while having only enough of an edge to defend itself against enemy fighters if needed. While the F-14 had an undeveloped secondary ground attack capability (with a Stores Management System (SMS) that included air-to-ground options as well as rudimentary software in the AWG-9), the Navy did not want to risk it in the air-to-ground role at the time, due to its lack of proper defensive electronic countermeasures (DECM) and radar homing and warning (RHAW) for overland operations, as well as the fighter's high cost. In the 1990s, the US Navy added LANTIRN pods to its F-14s and deployed them on precision ground-attack missions. The F-15 Eagle was originally envisioned as an air-superiority fighter and interceptor under the mantra "not a pound for air-to-ground". However, the F-15C can carry "dumb" and GPS-guided bombs, capabilities that were first used by the Israeli Air Force. In fact, the basic airframe proved versatile enough to produce a very capable strike fighter, the F-15E Strike Eagle. 
While designed for ground attack, it retains the air-to-air lethality of the original F-15. Similarly, the F-16 Fighting Falcon was also originally designed as a fighter but has since evolved into a successful all-weather multirole aircraft. Since the 1990s, with air-superiority fighters such as the F-14 and F-15 pressed into the strike role and/or having a strike derivative, the lines between air-superiority fighters and multirole fighters have blurred somewhat. The F-22 Raptor, designed primarily as an air superiority fighter, would receive precision strike capabilities through mission system upgrades. Similarly, the MiG-29 and Su-27, despite being originally designed for air superiority, have commonly been outfitted to use a range of air-to-surface armaments, making them multirole fighters; indeed, the Su-34 strike fighter has been derived from the Su-27. The Eurofighter Typhoon is another example of an aircraft designed as an air superiority fighter, but it became a multirole fighter with strike capabilities in later production tranches. With the retirement of the F-14 Tomcat, the US Navy has pressed its F/A-18 Hornet and its upsized derivative, the F/A-18E/F Super Hornet, into the fleet defense fighter role, despite the Hornets being originally designed as multirole strike fighters. Due to the high costs of aircraft development, the next generation of USAF air superiority platforms will be multirole with strike capabilities designed from the outset. List of active air-superiority fighters
Technology
Military aviation
null
2634842
https://en.wikipedia.org/wiki/McDonnell%20Douglas%20DC-9
McDonnell Douglas DC-9
The McDonnell Douglas DC-9 is an American five-abreast, single-aisle aircraft designed by the Douglas Aircraft Company. It was initially produced as the Douglas DC-9 prior to August 1967, when the company merged with McDonnell Aircraft to become McDonnell Douglas. Following the introduction of its first jetliner, the high-capacity DC-8, in 1959, Douglas was interested in producing an aircraft suited to smaller routes. As early as 1958, design studies were conducted; approval for the DC-9, a smaller all-new jetliner, came on April 8, 1963. The DC-9-10 first flew on February 25, 1965, and gained its type certificate on November 23, to enter service with Delta Air Lines on December 8. The DC-9 is powered by two rear-mounted Pratt & Whitney JT8D low-bypass turbofan engines under a T-tail for cleaner wing aerodynamics. It has a two-person flight deck and built-in airstairs to better suit smaller airports. The aircraft was capable of taking off from 5,000 ft runways, connecting small cities and towns in the jet stream of air travel where jet service was previously impossible. The Series 10 aircraft are 104 ft (32 m) long for typically 90 coach seats. The Series 30, stretched by 15 ft (4.5 m) to seat 115 in economy, has a larger wing and more powerful engines for a higher maximum takeoff weight (MTOW); it first flew in August 1966 and entered service in February 1967. The Series 20 has the Series 10 fuselage, more powerful engines, and the Series 30's improved wings; it first flew in September 1968 and entered service in January 1969. The Series 40 was further lengthened by 6 ft (2 m) for 125 passengers, and the final DC-9-50 series first flew in 1974, stretched again by 8 ft (2.5 m) for 135 passengers. When deliveries ended in October 1982, 976 had been built. Smaller variants competed with the BAC One-Eleven, Fokker F28, and Sud Aviation Caravelle, and larger ones with the original Boeing 737. The original DC-9 was followed by the second generation in 1980, the MD-80 series, a lengthened DC-9-50 with a larger wing and a higher MTOW. This was further developed into the third generation, the MD-90, in the early 1990s, as the body was stretched again and fitted with V2500 high-bypass turbofans and an updated flight deck. The shorter and final version, the MD-95, was renamed the Boeing 717 after McDonnell Douglas's merger with Boeing in 1997; it is powered by Rolls-Royce BR715 engines. The DC-9 family was produced between 1965 and 2006 with a total delivery of 2441 units: 976 DC-9s, 1191 MD-80s, 116 MD-90s, and 155 Boeing 717s. As of August 2022, 250 aircraft remain in service: 31 DC-9s (freighter), 116 MD-80s (mainly freighter), and 103 Boeing 717s (passenger), while the MD-90 was retired without freighter conversion. Development Origins During the late 1950s, Douglas Aircraft studied a short- to medium-range airliner to complement their then-sole jetliner, the high-capacity, long-range DC-8 (DC stands for Douglas Commercial). The Model 2067, a four-engined aircraft sized for medium-range routes, was studied in depth, but work on it was abandoned after the proposal did not receive enough interest from airlines. In 1960, Douglas signed a two-year contract with the French aeronautics company Sud Aviation for technical cooperation; under the terms of this contract, Douglas would market and support the Sud Aviation Caravelle and produce a licensed version if sufficient orders were forthcoming from airlines. 
However, none were ever ordered from the company, leading to Douglas returning to its design studies after the co-operation deal expired. In 1962, design studies were underway into what would become the DC-9, known as Model 2086. The first envisioned version seated 63 passengers and had a gross weight of 69,000 lb (31,300 kg). This design was changed into what would be the initial DC-9 variant. During February 1963, detailed design work commenced. On April 8, 1963, Douglas announced that it would proceed with the DC-9. Shortly thereafter, Delta Air Lines placed the initial order for the DC-9, ordering 15 aircraft along with options for another 15. By January 1965, Douglas had garnered orders for 58 DC-9 as well as options for a further 44. Unlike the competing but larger Boeing 727 trijet, which used as many 707 components as possible, the DC-9 was developed as an all-new design. Throughout its development, Douglas had placed considerable emphasis on making the airliner as economic as possible, as well as to facilitate its future growth. The adoption of the Pratt & Whitney JT8D low-bypass turbofan engine, which had already been developed for the Boeing 727, enabled Douglas to benefit from the preexisting investment. Pratt & Whitney had long collaborated with Douglas on various projects, thus their engine was a natural choice for the company. In order to reduce the considerable financial burden of its development, Douglas implemented one of the first shared-risk production arrangements for the DC-9, arranging for de Havilland Canada to produce the wing at its own financial cost in return for promises on prospective future production orders. Entry into service The pace of development on the program was rapid. The first DC-9, a production model, flew on February 25, 1965. The second DC-9 flew a few weeks later, with a test fleet of five aircraft flying by July. Several key refinements to the aircraft were made during flight testing, such as the replacement of the original leading-edge slat design to achieve lower drag. The flight test program proceeded at a rapid pace; the initial Series 10 received airworthiness certification from the Federal Aviation Administration on November 23, 1965, permitting it to enter service with Delta Air Lines on December 8. Through the DC-9, Douglas had beaten rival company Boeing and their 737 to enter the short-haul jet market, a key factor that contributed to the DC-9 becoming the best selling airliner in the world for a time. By May 1976, the company had delivered 726 aircraft of the DC-9 family, which was more than double the number of its nearest competitor. However, following decades of intense competition between the two airliners, the DC-9 would eventually be overtaken as the world's best selling airliner by Boeing's 737. From the onset of its development, the DC-9 had been intended to be available in multiple versions to suit varying customer requirements; the first stretched version, the Series 30, with a longer fuselage and extended wing tips, flew on August 1, 1966, entering service with Eastern Air Lines in 1967. The initial Series 10 was followed by the improved -20, -30, and -40 variants. The final DC-9 series was the -50, which first flew in 1974. Production The DC-9 series, the first generation of the DC-9 family, would become a long term commercial success for the manufacturer. However, early production of the type had come at a higher unit cost than had been anticipated, leading to DC-9s being sold at a loss. 
The unfavorable early economics of the type negatively impacted Douglas, pushing it into fiscal hardship. However, the high customer demand for the DC-9 made the company attractive for either an acquisition or a merger; Douglas would merge with the American aerospace company McDonnell Aircraft to form McDonnell Douglas in 1967. The DC-9 family is one of the longest-lasting aircraft in production and operation. It was produced on the final assembly line in Long Beach, California, beginning in 1965, and later was on a common line with the second generation of the DC-9 family, the MD-80, with which it shares its line number sequence. Following the delivery of 976 DC-9s and 108 MD-80s, McDonnell Douglas stopped series production of the DC-9 in December 1982. The last member of the DC-9 family, the Boeing 717, was produced until 2006. The DC-9 family was produced in total units: 976 DC-9s (first generation), 1191 MD-80s (second generation), 116 MD-90s, and 155 Boeing 717s (third generation). This compared to 2,970 Airbus A320s and 5,270 Boeing 737s delivered as of 2006. Enhancement studies Studies aimed at further improving DC-9 fuel efficiency, by means of retrofitted wingtips of various types, were undertaken by McDonnell Douglas, but these did not demonstrate significant benefits, especially with existing fleets shrinking. The wing design makes retrofitting difficult. Between 1973 and 1975, McDonnell Douglas studied the possibility of replacing engines on the DC-9 with the JT8D-109 turbofan, a quieter and more efficient variant of the JT8D. This progressed to the flight-test stage, and tests achieved noise reduction between 8 and 9 decibels depending on the phase of flight. No further aircraft were modified, and the test aircraft was re-equipped with standard JT8D-9s prior to delivery to its airline customer. Further developments (DC-9 family) Two further developments of the original or first generation DC-9 series used the new designation with McDonnell Douglas initials (MD- prefix) followed by the year of development. The first derivative or second generation was the MD-80 series and the second derivative or third generation was the MD-90 series. Together, they formed the DC-9 family of 12 aircraft members (variants), and if the DC-9- designation were retained, the family members would be: First generation (Series 10, Series 20, Series 30, Series 40, and Series 50), second generation (Series 81, Series 82, Series 83, Series 87, and Series 88), and third generation (Series 90 and Series 95). The Series 10 (DC-9-10) was the smallest family member and the Series 90 (MD-90) was the largest. Second generation (MD-80 series) The original DC-9 series was followed in 1980 by the introduction of the second generation of the DC-9 family, the MD-80 series. This was originally called the DC-9-80 (short Series 80 and later stylized Super 80). It was a lengthened DC-9-50 with a higher maximum takeoff weight (MTOW), a larger wing, new main landing gear, and higher fuel capacity. The MD-80 series features a number of variants of the JT8D turbofan engine that had higher thrust ratings than those available on the original DC-9 series. The MD-80 series includes the MD-81, MD-82, MD-83, MD-88, and shortest variant, the MD-87. Third generation (MD-90 series) MD-90 The MD-80 series was further developed into the third generation, the MD-90 series, in the early 1990s. 
It has yet another fuselage stretch, an electronic flight instrument system (first introduced on the MD-88), and completely new International Aero V2500 high-bypass turbofan engines. In comparison to the very successful MD-80, relatively few MD-90s were built. Boeing 717 (MD-95) The shorter and final variant, the MD-95, was renamed the Boeing 717 after McDonnell Douglas's merger with Boeing in 1997 and before aircraft deliveries began. The fuselage length and wing are very similar to those of the DC-9-30, but much use was made of lighter, modern materials. Power is supplied by two BMW/Rolls-Royce BR715 high-bypass turbofan engines. Comac ARJ21 China's Comac ARJ21 is derived from the DC-9 family. The ARJ21 is built with manufacturing tooling from the MD-90 Trunkliner program. As a consequence, it has the same fuselage cross-section, nose profile, and tail. Design The DC-9 was designed for short to medium-haul routes, often to smaller airports with shorter runways and less ground infrastructure than the major airports being served by larger airliners like the Boeing 707 and Douglas DC-8, where accessibility and short-field characteristics were needed. The DC-9's takeoff weight was limited to 80,000 lb (36,300 kg) for a two-person flight crew by the then-Federal Aviation Agency regulations at the time. The commercial passenger aircraft have five abreast layout for economy seating that can accommodate 80 to 135 passengers, depending on version and seating arrangement. Turnarounds were simplified by built-in airstairs, including one in the tail, which shortened boarding and deplaning times. The DC-9 was originally designed to perform a maximum of 40,000 landings. The DC-9 has two rear-mounted JT8D turbofan engines, relatively small, efficient wings, and a T-tail. The tail-mounted engine design facilitated a clean wing without engine pods, which had numerous advantages. First, the flaps could be longer, unimpeded by pods on the leading edge and engine-blast concerns on the trailing edge. This simplified design improved airflow at low speeds and enabled lower takeoff and approach speeds, thus lowering field length requirements and keeping wing structure light. The second advantage of the tail-mounted engines was the reduction in foreign object damage from ingested debris from runways and aprons, but with this position, the engines could ingest ice streaming off the wing roots. The third was the absence of engines in underslung pods, which permitted a reduction in fuselage ground clearance, making the airliner more accessible to baggage handlers and passengers. The cockpit of the DC-9 was largely analogue, with flight controls mainly consisting of various levers, wheels, and knobs. The problem of deep stalling, revealed by the loss of the BAC One-Eleven prototype in 1963, was overcome through various changes, including the introduction of vortilons, small surfaces beneath the wings' leading edges used to control airflow and increase low-speed lift. The need for such features is a result of the rear-mounted engines. Variants The DC-9 series, the first generation of the DC-9 family, includes five members or variants and 10 subvariants, which are the production versions (types). Their designations use the Series (DC-9-) prefix followed by a two-digit numbering with the same first digit and the second digit being a zero for variant names and a nonzero for version/type designations. 
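As a brief illustration of this naming rule, the following Python sketch (a hypothetical helper, not part of any official scheme) classifies a well-formed first-generation designation by its two trailing digits:

# Minimal sketch of the naming rule described above: in a first-generation
# designation "DC-9-XY", X identifies the variant (Series X0) and Y is zero
# for the variant name itself or nonzero for a production version of it.
# The function name and behaviour are illustrative assumptions only.
def classify_dc9(designation):
    digits = designation.rsplit("-", 1)[1]       # e.g. "32" from "DC-9-32"
    series = f"Series {digits[0]}0"
    kind = "variant name" if digits[1] == "0" else "production version"
    return series, kind

print(classify_dc9("DC-9-10"))   # ('Series 10', 'variant name')
print(classify_dc9("DC-9-32"))   # ('Series 30', 'production version')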
The first variant, Series 10 (DC-9-10), has four versions (Series 11, Series 12, Series 14 and Series 15); the second variant, Series 20, has one version (Series 21); the third variant, Series 30, has four versions (Series 31, Series 32, Series 33 and Series 34); the fourth variant, Series 40, has one version (Series 41); and the fifth or final variant, Series 50, has one version (Series 51). Series 10 Subvariant Series 11, Series 12, Series 14, Series 15 The original DC-9 (later designated the Series 10) was the smallest DC-9 variant. The -10 was long and had a maximum weight of . The Series 10 was similar in size and configuration to the BAC One-Eleven and featured a T-tail and rear-mounted engines. Power was provided by a pair of JT8D-5 or JT8D-7 engines. A total of 137 were built. Delta Air Lines was the initial operator. The Series 10 was produced in two main subvariants, the Series 14 and 15, although, of the first four aircraft, three were built as Series 11s and one as Series 12. These were later converted to Series 14 standard. No Series 13 was produced. A passenger/cargo version of the aircraft, with a side cargo door forward of the wing and a reinforced cabin floor, was certificated on March 1, 1967. Cargo versions included the Series 15MC (minimum change) with folding seats that can be carried in the rear of the aircraft, and the Series 15RC (rapid change) with seats removable on pallets. These differences disappeared over the years as new interiors were installed. The Series 10 was unique in the DC-9 family in not having leading-edge slats. The Series 10 was designed to have short takeoff and landing distances without the use of leading-edge high-lift devices. Therefore, the wing design of the Series 10 featured airfoils with extremely high maximum-lift capability to obtain the low stalling speeds necessary for short-field performance. Series 10 features The Series 10 has an overall length of , a fuselage length of , a passenger-cabin length of , and a wingspan of . The Series 10 was offered with the -thrust JT8D-1 and JT8D-7. All versions of the DC-9 are equipped with an AlliedSignal (Garrett) GTCP85 APU, located in the aft fuselage. The Series 10, as with all later versions of the DC-9, is equipped with a two-crew analog flightdeck. The Series 14 was originally certificated with an MTOW of , but subsequent options offered increases to 86,300 and . The aircraft's MLW in all cases is . The Series 14 has a fuel capacity of 3,693 US gallons (with the 907 US gal centre section fuel). The Series 15, certificated on January 21, 1966, is physically identical to the Series 14 but has an increased MTOW of . Typical range with 50 passengers and baggage is , increasing to at long-range cruise. Range with maximum payload is , increasing to with full fuel. The aircraft is fitted with a passenger door in the port forward fuselage, and a service door/emergency exit is installed opposite. An airstair installed below the front passenger door was available as an option as was an airstair in the tailcone. This also doubled as an emergency exit. Available with either two or four overwing exits, the DC-9-10 can seat up to a maximum certified exit limit of 109 passengers. Typical all-economy layout is 90 passengers, and 72 passengers in a more typical mixed-class layout with 12 first and 60 economy-class passengers. All versions of the DC-9 are equipped with a tricycle undercarriage, featuring a twin nose unit and twin main units. 
Series 20 Subvariant Series 21 The Series 20 was designed to satisfy a Scandinavian Airlines request for improved short-field performance by using the more-powerful engines and improved wings of the -30 combined with the shorter fuselage used in the -10. Ten Series 20 aircraft were produced, all as the Model -21. The -21 had slats and stairs at the rear of plane. In 1969, a DC-9 Series 20 at Long Beach was fitted with an Elliott Flight Automation Head-up display by McDonnell Douglas and used for successful three-month-long trials with pilots from various airlines, the Federal Aviation Administration, and the US Air Force. Series 20 features The Series 20 has an overall length of , a fuselage length of , a passenger-cabin length of , and a wingspan of . The DC-9 Series 20 is powered by the thrust JT8D-11 engine. The Series 20 was originally certificated at an MTOW of but this was increased to , eight percent more than on the higher weight Series 14s and 15s. The aircraft's MLW is and MZFW is . Typical range with maximum payload is , increasing to with maximum fuel. The Series 20, using the same wing as the Series 30, 40 and 50, has a slightly lower basic fuel capacity than the Series 10 (3,679 US gallons). Series 20 milestones First flight: September 18, 1968. FAA certification: November 25, 1968. First delivery: December 11, 1968, to Scandinavian Airlines System (SAS) Entry into service: January 27, 1969, with SAS. Last delivery: May 1, 1969, to SAS. Series 30 Subvariant Series 31, Series 32, Series 33, Series 34 The Series 30 was produced to counter Boeing's 737 twinjet; 662 were built, about 60% of the total. The -30 entered service with Eastern Airlines in February 1967 with a fuselage stretch, wingspan increased by just over and full-span leading edge slats, improving takeoff and landing performance. Maximum takeoff weight was typically . Engines for Models -31, -32, -33, and -34 included the P&W JT8D-7 and JT8D-9 rated at of thrust, or JT8D-11 with . Unlike the Series 10, the Series 30 had leading-edge devices to reduce the landing speeds at higher landing weights; full-span slats reduced approach speeds by six knots despite 5,000 lb greater weight. The slats were lighter than slotted Krueger flaps, since the structure associated with the slat is a more efficient torque box than the structure associated with the slotted Krueger. The wing had a six-percent increase in chord, all ahead of the front spar, allowing the 15 percent chord slat to be incorporated. Series 30 versions The Series 30 was built in four main sub-variants. DC-9-31: Produced in passenger version only. The first DC-9 Series 30 flew on August 1, 1966, and the first delivery was to Eastern Airlines on February 27, 1967, after certification on December 19, 1966. Basic MTOW of and subsequently certificated at weights up to . DC-9-32: Introduced in the first year (1967). Certificated March 1, 1967. Basic MTOW of later increased to . A number of cargo versions of the Series 32 were also produced: 32LWF (Light Weight Freight) with modified cabin but no cargo door or reinforced floor, intended for package freighter use. 32CF (Convertible Freighter), with a reinforced floor but retaining passenger facilities 32AF (All Freight), a windowless all-cargo aircraft. DC-9-33: Following the Series 31 and 32 came the Series 33 for passenger/cargo or all-cargo use. Certificated on April 15, 1968, the aircraft's MTOW was , MLW to and MZFW to . JT8D-9 or -11 ( thrust) engines were used. 
Wing incidence was increased 1.25 degrees to reduce cruise drag. Only 22 were built, as All Freight (AF), Convertible Freight (CF) and Rapid Change (RC) aircraft. DC-9-34: The last variant was the Series 34, intended for longer range with an MTOW of , an MLW of and an MZFW of . The DC-9-34CF (Convertible Freighter) was certificated April 20, 1976, while the passenger followed on November 3, 1976. The aircraft has the more powerful JT8D-9s with the -15 and -17 engines as an option. It had the wing incidence change introduced on the DC-9-33. Twelve were built, five as convertible freighters. Series 30 features The DC-9-30 was offered with a selection of variants of JT8D including the -1, -7, -9, -11, -15. and -17. The most common on the Series 31 is the JT8D-7 ( thrust), although it was also available with the -9 and -17 engines. On the Series 32 the JT8D-9 ( thrust) was standard, with the -11 also offered. The Series 33 was offered with the JT8D-9 or -11 ( thrust) engines and the heavyweight -34 with the JT8D-9, -15 ( thrust) or -17 ( thrust) engines. Series 40 Subvariant Series 41 The DC-9-40 is a further lengthened version. With a longer fuselage, accommodation was up to 125 passengers. The Series 40 was fitted with Pratt & Whitney engines with thrust of . A total of 71 were produced. The variant first entered service with Scandinavian Airlines System (SAS) in March 1968. Its unit cost was . Series 50 Subvariant Series 51 The Series 50 was the largest version of the DC-9 to enter airline service. It features an fuselage stretch and seats up to 139 passengers. It entered revenue service in August 1975 with Eastern Airlines and included several detail improvements, a new cabin interior, and more powerful JT8D-15 or 17 engines in the class. McDonnell Douglas delivered 96, all as the Model -51. Some visual cues to distinguish this version from other DC-9 variants include side strakes or fins below the side cockpit windows, spray deflectors on the nose gear, and thrust reversers angled inward 17 degrees compared to the original configuration. The thrust reverser modification was developed by Air Canada for its earlier aircraft, and adopted by McDonnell Douglas as a standard feature on the series 50. It was also applied to many earlier DC-9s during regular maintenance. Military and government Operators As of May 2024, a total of 30 DC-9 series aircraft remain in service, of which 20 are operated by Aeronaves TSM and two passenger aircraft in service with African Express Airways, and the rest in cargo service. With the existing DC-9 fleet shrinking, modifications do not appear to be likely to occur, especially since the wing design makes retrofitting difficult. DC-9s are therefore likely to be further replaced in service by newer airliners such as Boeing 737, Airbus A320, Embraer E-Jets, and the Airbus A220. However one former Scandinavian Airlines DC-9-21 is operated as a skydiving jump platform at Perris Valley Airport in Perris, California. With the steps on the ventral stairs removed, it is the only airline transport class jet certified to date by the FAA for skydiving operations as of 2006. This is the last and only -21 series still airworthy, and after being out of service for over a decade, it returned to the sky on May 7th, 2024 During the mid 1990s, Northwest Airlines was the largest operator of the type in the world, flying 180 DC-9s. After its acquisition of Northwest Airlines, Delta Air Lines operated a sizable fleet of DC-9s, most of which were over 30 years old at the time. 
With severe increases in fuel prices in the summer of 2008, Northwest Airlines began retiring its DC-9s, switching to Airbus A319s that are 27% more fuel efficient. As the Northwest/Delta merger progressed, Delta returned several stored DC-9s to service. Delta Air Lines made its last DC-9 commercial flight from Minneapolis/St. Paul to Atlanta on January 6, 2014, with the flight number DL2014. Deliveries Accidents and incidents The DC-9 family has been involved in 276 major aviation accidents and incidents, including 156 hull losses, with 3,697 fatalities across all generations of the family: the first-generation DC-9 series accounts for 107 hull losses and 2,250 fatalities, the second-generation MD-80 series for 46 hull losses and 1,446 fatalities, and the third generation (the MD-90 series, including the Boeing 717) for 3 hull losses and 1 fatality. Accidents with fatalities On October 1, 1966, West Coast Airlines Flight 956 crashed with eighteen fatalities and no survivors. This accident marked the first loss of a DC-9. On March 9, 1967, TWA Flight 553 crashed in a field in Concord Township, near Urbana, Ohio, following a mid-air collision with a Beechcraft Baron, an accident that triggered substantial changes in air traffic control procedures. All 25 people on board the DC-9 and the sole occupant of the Beechcraft were killed. On March 27, 1968, Ozark Air Lines Flight 965, a DC-9-15, collided with a Cessna 150F while both aircraft were on approach to the same runway at Lambert Field in St. Louis, Missouri. The Cessna crashed, killing the two pilots aboard, while the DC-9 landed safely with no injuries to the 49 passengers and crew. On March 16, 1969, Viasa Flight 742, a DC-9-32, crashed into the La Trinidad neighborhood of Maracaibo, Venezuela, during a failed take-off. All 84 people on board the aircraft, as well as 71 people on the ground, were killed. With 155 dead in all, this was the deadliest crash involving a member of the original DC-9 family, as well as the worst crash in civil aviation history at the time it took place. On September 9, 1969, Allegheny Airlines Flight 853, a DC-9-30, collided in mid-air with a Piper PA-28 Cherokee near Fairland, Indiana. The DC-9 carried 78 passengers and four crew members; the Piper, one pilot. Both aircraft were destroyed, and all occupants were killed. On February 15, 1970, a Dominicana de Aviación DC-9-32 crashed after taking off from Santo Domingo. The crash, possibly caused by contaminated fuel, killed all 102 passengers and crew, including champion boxer Teo Cruz. On May 2, 1970, an Overseas National Airways DC-9, wet-leased to ALM Dutch Antilles Airlines and operating as ALM Flight 980, ditched in the Caribbean Sea on a flight from New York's John F. Kennedy International Airport to Princess Juliana International Airport on Saint Maarten. After three landing attempts in poor weather at Saint Maarten, the pilots began to divert to their alternate of Saint Croix, U.S. Virgin Islands, but ran out of fuel 30 mi (48 km) short of the island. After about ten minutes, the aircraft sank in 5,000 ft (1,524 m) of water and was never recovered. 40 people survived the ditching; 23 perished. On November 14, 1970, Southern Airways Flight 932, a DC-9, crashed into a hill near Tri-State Airport in Huntington, West Virginia. All 75 on board were killed (including 37 members of the Marshall University Thundering Herd football team, eight members of the coaching staff, 25 boosters, and others). On June 6, 1971, Hughes Airwest Flight 706 was involved in a midair collision with a U.S.
Marine Corps F-4 Phantom fighter. All 49 people on board the DC-9 died; one of two aboard the USMC aircraft ejected and survived. On January 21, 1972, a Turkish Airlines DC-9-32 TC-JAC diverted to Adana, Turkey after pressurization problems. The aircraft hit the ground during downwind on the 2nd approach and caught fire. There was one fatality. On January 26, 1972, JAT Flight 367 from Stockholm to Belgrade, DC-9-32 registration YU-AHT, was destroyed in flight by a bomb placed on board. The sole survivor was a flight attendant, Vesna Vulović, who holds the record for the world's longest fall without a parachute when she fell some inside a section of the airplane and survived. On December 20, 1972, North Central Airlines Flight 575, DC-9-31 registration N954N, collided during its takeoff roll with Delta Air Lines Flight 954, a Convair CV-880 N8807E that was taxiing across the same runway at O'Hare International Airport in Chicago, Illinois. The DC-9 was destroyed, killing 10 and injuring 15 of the 45 people on board; two people among the 93 aboard the Convair 880 suffered minor injuries. Both aircraft were written off. On 5 March 1973, Iberia Flight 504, a DC-9-32, flying from Palma de Mallorca to London collided in mid-air with Spantax Flight 400, a Convair 990 flying from Madrid to London. All 68 people on board the DC-9 were killed. The CV-990 made a successful emergency landing at Cognac – Châteaubernard Air Base. On July 31, 1973, Delta Air Lines Flight 723, DC-9-31 registration N975NE, crashed into a seawall at Logan International Airport in Boston, Massachusetts, killing all 83 passengers and 6 crew members on board. One of the passengers initially survived the accident but later died in a hospital. On September 11, 1974, Eastern Air Lines Flight 212, a DC-9-30 crashed just short of the runway at Charlotte, North Carolina, killing 71 out of the 82 occupants. On October 30, 1975, Inex-Adria Aviopromet Flight 450, a DC-9-32, hit high ground during an approach in fog near Prague-Suchdol, Czechoslovakia. 75 of the 120 people were killed. On September 10, 1976, Inex-Adria Aviopromet Flight 550, a DC-9-31 collided with British Airways Flight 476, a Hawker Siddeley Trident 3B, over the Croatian town of Vrbovec, killing all 176 people aboard both aircraft. On April 4, 1977, Southern Airways Flight 242, a DC-9-31, lost engine power while flying through a severe thunderstorm before crash landing onto a highway in New Hope, Georgia, striking roadside buildings. The crash and fire resulted in the death of both flight crew and 61 passengers. Nine people on the ground also died. Both flight attendants and 20 passengers survived. On June 26, 1978, Air Canada Flight 189, a DC-9 overran the runway in Toronto after a blown tire aborted the takeoff. Two of the 107 passengers and crew were killed. On September 14, 1979, Aero Trasporti Italiani Flight 12, a DC-9-32 crashed in the mountains near Cagliari, Italy while approaching Cagliari-Elmas Airport. All 27 passengers and 4 crew members died in the crash and ensuing fire. On June 27, 1980, Itavia Flight 870, DC-9-15 I-TIGI, broke up mid-air after an explosion and crashed into the sea near the Italian island of Ustica, killing all 81 people on board. 
The event spawned numerous conspiracy theories, inconclusive investigations into an alleged cover-up by the Italian military, and one of the longest court inquiries in Italian history, which resulted in a 2013 ruling that the DC-9 was shot down by an air-to-air missile launched by a warplane, but without identifying who fired the missile or why. A popular theory endorsed by Giuliano Amato and Francesco Cossiga, both former Prime Ministers of Italy, says that the French Air Force shot down the DC-9 while trying to down a different aircraft carrying Libyan leader Muammar Gaddafi, but no conclusive evidence of this has been presented. On July 27, 1981, Aeroméxico Flight 230, a DC-9 ran off the runway in Chihuahua. Thirty passengers and two crew of the 66 on board were killed. Bad weather and pilot error were the causes of the accident. On June 2, 1983, Air Canada Flight 797, a DC-9 experienced an electrical fire in the aft lavatory during flight, resulting in an emergency landing at Cincinnati/Northern Kentucky International Airport. During evacuation, the sudden influx of oxygen caused a flash fire throughout the cabin, resulting in the deaths of 23 of the 41 passengers, including Canadian folk singer Stan Rogers. All five crew members survived. On December 7, 1983, the Madrid runway disaster took place where a departing Iberia Boeing 727 struck an Aviaco Douglas DC-9 causing the death of 93 passengers and crew. All 42 passengers and crew on board the DC-9 were killed. On December 20, 1983, Ozark Air Lines Flight 650, DC-9-31 N994Z, struck a snowplow on landing at Sioux Falls Regional Airport in low visibility. The right wing was torn from the jet; the snowplow driver was killed and two flight attendants were injured. The accident was attributed to inadequate air traffic control (ATC) supervision of snow-clearing operations. On September 6, 1985, Midwest Express Airlines Flight 105, operated with a DC-9-14, crashed just after takeoff from General Mitchell International Airport in Milwaukee, Wisconsin. The crash was caused by improper control inputs by the flight crew after the number 2 engine failed, and all 31 aboard were killed. On August 31, 1986, Aeroméxico Flight 498 collided in mid-air with a Piper Cherokee over the city of Cerritos, California, then crashed into the city, killing all 64 aboard the aircraft, 15 people on the ground, and all three in the small plane. On April 4, 1987, Garuda Indonesia Flight 035, a DC-9-32, hit a pylon and crashed on approach to Polonia International Airport in bad weather with 24 fatalities. On November 15, 1987, Continental Airlines Flight 1713, a DC-9-14, crashed on takeoff from Stapleton International Airport in bad weather with 28 fatalities. This accident was attributed to a combination of confusion at the ATC, exceeding allowed time-limit for takeoff after de-icing the wings, and inexperienced crew. On November 14, 1990, Alitalia Flight 404, a DC-9-32, crashed into a hillside on approach to Zurich Airport, killing all 46 persons on board. The crash was caused by a short circuit, which led to a failure of the aircraft's NAV receiver and GPWS system. On December 3, 1990, Northwest Airlines Flight 1482, a DC-9-14, taxied onto the wrong taxiway in dense fog at Detroit-Metropolitan Wayne County Airport, Michigan. It entered the active runway instead of the taxiway instructed by air traffic controllers. It was then struck by a departing Boeing 727. Nine people were killed. 
On March 5, 1991, Aeropostal Alas de Venezuela Flight 108, a DC-9-32, crashed into a mountainside in Trujillo State, Venezuela, killing all 40 passengers and five crew aboard. On July 2, 1994, USAir Flight 1016, DC-9-31 N954VJ crashed in Charlotte, North Carolina, while performing a go-around because of heavy storms and wind shear at the approach of runway 18R. There were 37 fatalities and 15 injured among the passengers and crew. Although the airplane came to rest in a residential area with the tail section striking a house, there were no fatalities or injuries on the ground. On May 11, 1996, ValuJet Flight 592, DC-9-32 N904VJ crashed in the Florida Everglades due to a fire caused by the activation of chemical oxygen generators illegally stored in the hold. The fire damaged the plane's electrical system and eventually overcame the crew, resulting in the deaths of all 110 people on board. On October 10, 1997, Austral Flight 2553, a DC-9-32 registration LV-WEG, en route from Posadas to Buenos Aires, crashed near Fray Bentos, Uruguay, killing all 69 passengers and five crew on board. On February 2, 1998, Cebu Pacific Flight 387, a DC-9-32 RP-C1507 crashed on the slopes of Mount Sumagaya in Misamis Oriental, Philippines, killing all 104 passengers and crew on board. Aviation investigators deemed the incident to be caused by pilot error when the plane made a non-regular stopover to Tacloban. On November 9, 1999, TAESA Flight 725 crashed a few minutes after leaving Uruapan International Airport en route to Mexico City. 18 people were killed in the accident. On October 6, 2000, Aeroméxico Flight 250, DC-9-31 N936ML, overran the runway at General Lucio Blanco International Airport in Reynosa, crashed into houses and fell into a small canal. Four people on the ground were killed but all 83 passengers and 5 crew survived. The accident was attributed to a late and excessively fast touchdown on a runway that was waterlogged due to heavy rainfall from Hurricane Keith. On 10 December 2005, Sosoliso Airlines Flight 1145 from Abuja crash-landed at Port Harcourt International Airport, Nigeria. There were 108 fatalities and two survivors. On April 15, 2008, Hewa Bora Airways Flight 122 crashed into a residential neighborhood, in the Goma, Democratic Republic of the Congo, resulting in the deaths of at least 44 people. On July 6, 2008, USA Jet Airlines Flight 199, a DC-9-15F, crashed on approach to Saltillo, Mexico, after a flight from Shreveport, Louisiana. The captain died and the first officer was seriously injured. Hull losses On November 27, 1973, Eastern Airlines Flight 300, a DC-9-31, landed long at Akron-Canton Airport in light rain and fog, overran the runway, and went over an embankment. All 21 passengers and 5 crew survived with various injuries. On February 21, 1986, USAir Flight 499, DC-9-31 N961VJ, landed long and overran runway 24 at Erie International Airport, coming to rest on a road. One passenger suffered minor injuries; the other 17 passengers and 5 crew were uninjured. The crash was attributed to the pilots' decisions to continue an excessively fast approach, and to land downwind in snow, which was prohibited on runway 24. On April 18, 1993, Japan Air System Flight 451, DC-9-41 JA8448, skidded off the runway at Hanamaki Airport after the inexperienced pilot mishandled a go-around attempt due to windshear and landed hard. There were 19 injuries in the crash and ensuing fire, but all 77 aboard survived. 
Aircraft on display Canada CF-TLL (cn 47021) – DC-9-32 on static display at the Canada Aviation and Space Museum in Ottawa, Ontario, Canada. It was previously operated by Air Canada. Czechia N1332U (cn 47404) – DC-9-31 nose section preserved at industrial area in Liberec, Czechia and rebuilt into flight simulator. The DC-9 was previously operated by Northwest. Indonesia PK-GNC (cn 47481) – DC-9-32 painted in Garuda Indonesia's 1960s livery and put on display inside GMF hangar in Soekarno-Hatta Airport. PK-GNT (cn 47790) – DC-9-32 on static display at the Transportation Museum in Taman Mini Indonesia Indah in Jakarta, Indonesia. It was relegated to display status after suffering heavy damage in a landing accident in 1993. It was previously operated by Garuda Indonesia. Italy MM62012 (cn 47595) – DC-9-32 on static display at Volandia adjacent to Milan Malpensa Airport. This aircraft was operated by the Italian Air Force as a VIP transport, carrying the president of Italy among other duties. Netherlands N929L (cn 47174) – DC-9-32 nose section displayed inside Schiphol International Airport. Painted in KLM livery although the plane never served with the airline. It was previously used by TWA and Delta Airlines. Mexico XA-JEB – Ex Aeromexico DC-9-32 on display at a park in Cadereyta de Montes, Querétaro, Mexico. Formerly Hugh Hefner's private jet, the 'Big Bunny', XA-JEB was sold in 1975 to Venezuela Airlines, who later sold it to Aeromexico, where it was operated until 2004. It was sold and placed on display in 2008 for use as an educational tool. N942ML – with painted registration "XA-SFE" is found on the second floor of the Luxury shopping mall "Centro Comercial Santa Fe" in the business district of Mexico City. It is on display with an Interjet livery for the Kidzania brand. N606NW – with painted registration "XA-MEX" can be found in Cuicuilo Plaza at the south of the city. Similar to "XA-SFE", it wears an Interjet Livery for the Kidzania brand. Spain EC-BQZ (cn 47456) – DC-9-32 on static display at Adolfo Suárez Madrid–Barajas Airport in Madrid. EC-DGB – DC-9-34 front section only preserved at Elder Museum of Science and Technology, Gran Canaria. United States N675MC (cn 47651) – DC-9-51 on static display at the Delta Flight Museum at Hartsfield–Jackson Atlanta International Airport in Atlanta, Georgia. It arrived at the museum on 27 April 2014. It was previously operated by Delta Air Lines. N779NC (cn 48101) – DC-9-51 was on static display at the Carolinas Aviation Museum at Charlotte Douglas International Airport in Charlotte, North Carolina, until it was scrapped in January 2017. Its ferry flight to Charlotte was the last scheduled passenger DC-9 flight in the United States. It was previously operated by Delta Air Lines. Specifications
Technology
Specific aircraft_2
null
8309686
https://en.wikipedia.org/wiki/Coordination%20number
Coordination number
In chemistry, crystallography, and materials science, the coordination number, also called ligancy, of a central atom in a molecule or crystal is the number of atoms, molecules or ions bonded to it. The ion/molecule/atom surrounding the central ion/molecule/atom is called a ligand. This number is determined somewhat differently for molecules than for crystals. For molecules and polyatomic ions the coordination number of an atom is determined by simply counting the other atoms to which it is bonded (by either single or multiple bonds). For example, [Cr(NH3)2Cl2Br2]− has Cr3+ as its central cation, which has a coordination number of 6 and is described as hexacoordinate. The common coordination numbers are 4, 6 and 8. Molecules, polyatomic ions and coordination complexes In chemistry, coordination number, defined originally in 1893 by Alfred Werner, is the total number of neighbors of a central atom in a molecule or ion. The concept is most commonly applied to coordination complexes. Simple and commonplace cases The most common coordination number for d-block transition metal complexes is 6. The coordination number does not distinguish the geometry of such complexes, i.e. octahedral vs trigonal prismatic. For transition metal complexes, coordination numbers range from 2 (e.g., AuI in Ph3PAuCl) to 9 (e.g., ReVII in [ReH9]2−). Metals in the f-block (the lanthanoids and actinoids) can accommodate higher coordination number due to their greater ionic radii and availability of more orbitals for bonding. Coordination numbers of 8 to 12 are commonly observed for f-block elements. For example, with bidentate nitrate ions as ligands, CeIV and ThIV form the 12-coordinate ions [Ce(NO3)6]2− (ceric ammonium nitrate) and [Th(NO3)6]2−. When the surrounding ligands are much smaller than the central atom, even higher coordination numbers may be possible. One computational chemistry study predicted a particularly stable ion composed of a central lead ion coordinated with no fewer than 15 helium atoms. Among the Frank–Kasper phases, the packing of metallic atoms can give coordination numbers of up to 16. At the opposite extreme, steric shielding can give rise to unusually low coordination numbers. An extremely rare instance of a metal adopting a coordination number of 1 occurs in the terphenyl-based arylthallium(I) complex 2,6-Tipp2C6H3Tl, where Tipp is the 2,4,6-triisopropylphenyl group. Polyhapto ligands Coordination numbers become ambiguous when dealing with polyhapto ligands. For π-electron ligands such as the cyclopentadienide ion [C5H5]−, alkenes and the cyclooctatetraenide ion [C8H8]2−, the number of adjacent atoms in the π-electron system that bind to the central atom is termed the hapticity. In ferrocene the hapticity, η, of each cyclopentadienide anion is five, Fe(η5-C5H5)2. Various ways exist for assigning the contribution made to the coordination number of the central iron atom by each cyclopentadienide ligand. The contribution could be assigned as one since there is one ligand, or as five since there are five neighbouring atoms, or as three since there are three electron pairs involved. Normally the count of electron pairs is taken. Surfaces and reconstruction The coordination numbers are well defined for atoms in the interior of a crystal lattice: one counts the nearest neighbors in all directions. The number of neighbors of an interior atom is termed the bulk coordination number. 
For surfaces, the number of neighbors is more limited, so the surface coordination number is smaller than the bulk coordination number. Often the surface coordination number is unknown or variable. The surface coordination number is also dependent on the Miller indices of the surface. In a body-centered cubic (BCC) crystal, the bulk coordination number is 8, whereas, for the (100) surface, the surface coordination number is 4. Experimental determination A common way to determine the coordination number of an atom is by X-ray crystallography. Related techniques include neutron or electron diffraction. The coordination number of an atom can be determined straightforwardly by counting nearest neighbors. α-Aluminium has a regular cubic close packed structure, fcc, where each aluminium atom has 12 nearest neighbors, 6 in the same plane and 3 in each of the planes above and below, and the coordination polyhedron is a cuboctahedron. α-Iron has a body centered cubic structure where each iron atom has 8 nearest neighbors situated at the corners of a cube. The two most common allotropes of carbon have different coordination numbers. In diamond, each carbon atom is at the centre of a regular tetrahedron formed by four other carbon atoms; the coordination number is four, as in methane. Graphite is made of two-dimensional layers in which each carbon is covalently bonded to three other carbons; atoms in other layers are further away and are not nearest neighbours, giving a coordination number of 3. For chemical compounds with regular lattices such as sodium chloride and caesium chloride, a count of the nearest neighbors gives a good picture of the environment of the ions. In sodium chloride each sodium ion has 6 chloride ions as nearest neighbours (at 276 pm) at the corners of an octahedron, and each chloride ion has 6 sodium ions (also at 276 pm) at the corners of an octahedron. In caesium chloride each caesium has 8 chloride ions (at 356 pm) situated at the corners of a cube, and each chloride has eight caesium ions (also at 356 pm) at the corners of a cube. Complications In some compounds the metal–ligand bonds may not all be at the same distance. For example, in PbCl2, the coordination number of Pb2+ could be said to be seven or nine, depending on which chlorides are assigned as ligands. Seven chloride ligands have Pb–Cl distances of 280–309 pm; two chloride ligands are more distant, with Pb–Cl distances of 370 pm. In some cases a different definition of coordination number is used that includes atoms at a greater distance than the nearest neighbours. The very broad definition adopted by the International Union of Crystallography, IUCr, states that the coordination number of an atom in a crystalline solid depends on the chemical bonding model and the way in which the coordination number is calculated. Some metals have irregular structures. For example, zinc has a distorted hexagonal close packed structure. Regular hexagonal close packing of spheres would predict that each atom has 12 nearest neighbours and a triangular orthobicupola (also called an anticuboctahedron or twinned cuboctahedron) coordination polyhedron. In zinc there are only 6 nearest neighbours at 266 pm in the same close-packed plane, with six other, equidistant next-nearest neighbours, three in each of the close-packed planes above and below, at 291 pm. It is considered to be reasonable to describe the coordination number as 12 rather than 6.
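The nearest-neighbour counts quoted above for fcc and bcc metals can be reproduced by a brute-force count. The following Python sketch is illustrative only (the function name, supercell size, and tolerance are arbitrary choices): it builds lattice points around an atom at the origin and counts how many lie at the minimum non-zero distance.

# Count bulk coordination numbers for fcc and bcc lattices (lattice parameter a = 1).
import itertools
import math

def neighbor_count(basis, a=1.0, reach=2, tol=1e-6):
    pts = []
    for i, j, k in itertools.product(range(-reach, reach + 1), repeat=3):
        for b in basis:
            pts.append(((i + b[0]) * a, (j + b[1]) * a, (k + b[2]) * a))
    dists = sorted(math.dist(p, (0.0, 0.0, 0.0)) for p in pts)
    d_nn = next(d for d in dists if d > tol)          # nearest-neighbour distance
    return sum(1 for d in dists if abs(d - d_nn) < tol)

fcc = [(0, 0, 0), (0.5, 0.5, 0), (0.5, 0, 0.5), (0, 0.5, 0.5)]
bcc = [(0, 0, 0), (0.5, 0.5, 0.5)]
print(neighbor_count(fcc))   # 12, cuboctahedral coordination (e.g. alpha-aluminium)
print(neighbor_count(bcc))   # 8, cubic coordination (e.g. alpha-iron)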
Similar considerations can be applied to the regular body-centred cubic structure, where in addition to the 8 nearest neighbors there are 6 more, approximately 15% more distant, and in this case the coordination number is often considered to be 14. Many chemical compounds have distorted structures. Nickel arsenide, NiAs, has a structure where nickel and arsenic atoms are 6-coordinate. Unlike sodium chloride, where the chloride ions are cubic close packed, the arsenic anions are hexagonal close packed. The nickel ions are 6-coordinate with a distorted octahedral coordination polyhedron where columns of octahedra share opposite faces. The arsenic ions are not octahedrally coordinated but have a trigonal prismatic coordination polyhedron. A consequence of this arrangement is that the nickel atoms are rather close to each other. Other compounds that share this structure, or a closely related one, are some transition metal sulfides such as FeS and CoS, as well as some intermetallics. In cobalt(II) telluride, CoTe, the six tellurium and two cobalt atoms are all equidistant from the central Co atom. Two other examples of commonly encountered chemicals are Fe2O3 and TiO2. Fe2O3 has a crystal structure that can be described as having a near close packed array of oxygen atoms with iron atoms filling two thirds of the octahedral holes. However, each iron atom has 3 nearest neighbors and 3 others a little further away. The structure is quite complex: the oxygen atoms are coordinated to four iron atoms, and the iron atoms in turn share vertices, edges and faces of the distorted octahedra. TiO2 has the rutile structure. The titanium atoms are 6-coordinate, with 2 atoms at 198.3 pm and 4 at 194.6 pm, in a slightly distorted octahedron. The octahedra around the titanium atoms share edges and vertices to form a 3-D network. The oxide ions are 3-coordinate in a trigonal planar configuration. Usage in quasicrystal, liquid and other disordered systems The coordination number of systems with disorder cannot be precisely defined. The first coordination number can be defined using the radial distribution function g(r) as $n_1 = 4\pi\rho \int_{r_0}^{r_1} r^2 g(r)\, dr$, where $\rho$ is the number density, r0 is the rightmost position starting from r = 0 whereon g(r) is approximately zero, and r1 is the first minimum of g(r). Therefore, it is the area under the first peak of g(r). The second coordination number is defined similarly as $n_2 = 4\pi\rho \int_{r_1}^{r_2} r^2 g(r)\, dr$, where r2 is the second minimum of g(r). Alternative definitions for the coordination number can be found in the literature, but in essence the main idea is the same. One such definition is as follows: denoting the position of the first peak as rp, $n_1' = 8\pi\rho \int_{r_0}^{r_p} r^2 g(r)\, dr$, i.e. twice the area under g(r) up to the peak position. The first coordination shell is the spherical shell with radius between r0 and r1 around the central particle under investigation.
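As a sketch of how the first coordination number is evaluated in practice from a tabulated g(r), the following Python fragment applies the integral above with the trapezoidal rule; the sample g(r), number density, and integration limits are invented purely for illustration.

# First coordination number n1 = 4*pi*rho * integral_{r0}^{r1} r^2 g(r) dr,
# evaluated numerically from tabulated data (toy data, arbitrary units).
import numpy as np

def first_coordination_number(r, g, rho, r0, r1):
    mask = (r >= r0) & (r <= r1)              # integrate over the first peak only
    integrand = 4.0 * np.pi * rho * r[mask] ** 2 * g[mask]
    return np.trapz(integrand, r[mask])

r = np.linspace(0.0, 10.0, 1000)              # radial grid
g = np.exp(-((r - 3.0) ** 2) / 0.1)           # toy g(r) with a single peak at r = 3
rho = 0.02                                    # toy number density
print(first_coordination_number(r, g, rho, r0=2.0, r1=4.0))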
Physical sciences
Crystallography
Physics
129090
https://en.wikipedia.org/wiki/Scoliosis
Scoliosis
Scoliosis (plural: scolioses) is a condition in which a person's spine has an irregular curve in the coronal plane. The curve is usually S- or C-shaped over three dimensions. In some, the degree of curve is stable, while in others, it increases over time. Mild scoliosis does not typically cause problems, but more severe cases can affect breathing and movement. Pain is usually present in adults, and can worsen with age. As the condition progresses, it may alter a person's life, and hence can also be considered a disability. It can be compared to kyphosis and lordosis, other abnormal curvatures of the spine which are in the sagittal plane (front-back) rather than the coronal (left-right). The cause of most cases is unknown, but it is believed to involve a combination of genetic and environmental factors. Scoliosis most often occurs during growth spurts right before puberty. Risk factors include other affected family members. It can also occur due to another condition such as muscle spasms, cerebral palsy, Marfan syndrome, and tumors such as neurofibromatosis. Diagnosis is confirmed with X-rays. Scoliosis is typically classified as either structural, in which the curve is fixed, or functional, in which the underlying spine is normal. Left-right asymmetries of the vertebrae and their musculature, especially in the thoracic region, may cause mechanical instability of the spinal column. Treatment depends on the degree of curve, location, and cause. The age of the patient is also important, since some treatments are ineffective in adults, who are no longer growing. Minor curves may simply be watched periodically. Treatments may include bracing, specific exercises, posture checking, and surgery. The brace must be fitted to the person and used daily until growing stops. Specific exercises, such as exercises that focus on the core, may be used to try to decrease the risk of worsening. They may be done alone or along with other treatments such as bracing. Evidence that chiropractic manipulation, dietary supplements, or exercises can prevent the condition from worsening is weak. However, exercise is still recommended due to its other health benefits. Scoliosis occurs in about 3% of people. It most commonly develops between the ages of ten and twenty. Females are typically more severely affected than males, with a ratio of 4:1. The term derives from Ancient Greek skolíōsis, meaning "a bending". Signs and symptoms Symptoms associated with scoliosis can include: pain in the back at the site of the curve, which may radiate to the legs; respiratory or cardiac problems in severe cases; and constipation due to curvature causing "tightening" of the stomach, intestines, etc. The signs of scoliosis can include: uneven musculature on one side of the spine; rib prominence or a prominent shoulder blade, caused by rotation of the rib cage in thoracic scoliosis; uneven posture; heart and lung problems in severe cases; and calcium deposits in the cartilage endplate and sometimes in the disc itself. Course People who have reached skeletal maturity are less likely to have a worsening case. Some severe cases of scoliosis can lead to diminishing lung capacity, pressure exerted on the heart, and restricted physical activities. Longitudinal studies have revealed that the most common form of the condition, late-onset idiopathic scoliosis, causes little physical impairment other than back pain and cosmetic concerns, even when untreated, with mortality rates similar to the general population.
Older beliefs that untreated idiopathic scoliosis necessarily progressed into severe (cardiopulmonary) disability by old age have been refuted. Causes An estimated 65% of scoliosis cases are idiopathic (cause unknown), about 15% are congenital, and about 10% are secondary to a neuromuscular disease. About 38% of variance in scoliosis risk is due to genetic factors, and 62% is due to the environment. The genetics are likely complex, however, given the inconsistent inheritance and discordance among monozygotic twins. The specific genes that contribute to development of scoliosis have not been conclusively identified. Several candidate gene studies have found associations between idiopathic scoliosis and genes mediating bone formation, bone metabolism, and connective tissue structure. Several genome-wide studies have identified a number of loci as significantly linked to idiopathic scoliosis. In 2006, idiopathic scoliosis was linked with three microsatellite polymorphisms in the MATN1 gene (encoding for matrilin 1, cartilage matrix protein). Fifty-three single nucleotide polymorphism markers in the DNA that are significantly associated with adolescent idiopathic scoliosis were identified through a genome-wide association study. Adolescent idiopathic scoliosis has no clear causal agent, and is generally believed to be multifactorial; leading to "progressive functional limitations" for individuals. Research suggests that Posterior Spinal Fusion (PSF) can be used to correct the more severe deformities caused by adolescent idiopathic scoliosis. Such procedures can result in a return to physical activity in about 6 months, which is very promising, although minimal back pain is still to be expected in the most severe cases. The prevalence of scoliosis is 1–2% among adolescents, but the likelihood of progression among adolescents with a Cobb angle less than 20° is about 10–20%. Congenital scoliosis can be attributed to a malformation of the spine during weeks three to six in utero due to a failure of formation, a failure of segmentation, or a combination of stimuli. Incomplete and abnormal segmentation results in an abnormally shaped vertebra, at times fused to a normal vertebra or unilaterally fused vertebrae, leading to the abnormal lateral curvature of the spine. Vertebrae of the spine, especially in the thoracic region, are, on average, asymmetric. The mid-axis of these vertebral bodies tends to point systematically to the right of the median body plane. A strong asymmetry of the vertebrae and their musculature, may lead to mechanical instability of the column, especially during phases of rapid growth. The asymmetry is thought to be caused by an embryological twist of the body. Resulting from other conditions Secondary scoliosis due to neuropathic and myopathic conditions can lead to a loss of muscular support for the spinal column so that the spinal column is pulled in abnormal directions. Some conditions which may cause secondary scoliosis include muscular dystrophy, spinal muscular atrophy, poliomyelitis, cerebral palsy, spinal cord trauma, and myotonia. Scoliosis often presents itself, or worsens, during an adolescent's growth spurt and is more often diagnosed in females than males. Scoliosis associated with known syndromes is often subclassified as "syndromic scoliosis". 
Scoliosis can be associated with amniotic band syndrome, Arnold–Chiari malformation, Charcot–Marie–Tooth disease, cerebral palsy, congenital diaphragmatic hernia, connective tissue disorders, muscular dystrophy, familial dysautonomia, CHARGE syndrome, Ehlers–Danlos syndrome (hyperflexibility, "floppy baby" syndrome, and other variants of the condition), fragile X syndrome, Friedreich's ataxia, hemihypertrophy, Loeys–Dietz syndrome, Marfan syndrome, nail–patella syndrome, neurofibromatosis, osteogenesis imperfecta, Prader–Willi syndrome, proteus syndrome, spina bifida, spinal muscular atrophy, syringomyelia, and pectus carinatum. Another form of secondary scoliosis is degenerative scoliosis, also known as de novo scoliosis, which develops later in life secondary to degenerative changes (which may or may not be associated with aging). This is a type of deformity that starts and progresses because of the collapse of the vertebral column in an asymmetrical manner. As bones start to become weaker and the ligaments and discs located in the spine become worn as a result of age-related changes, the spine begins to curve. Diagnosis People who initially present with scoliosis undergo a physical examination to determine whether the deformity has an underlying cause and to exclude the possibility of an underlying condition more serious than simple scoliosis. The person's gait is assessed, with an exam for signs of other abnormalities (e.g., spina bifida as evidenced by a dimple, hairy patch, lipoma, or hemangioma). A thorough neurological examination is also performed: the skin is checked for café au lait spots (indicative of neurofibromatosis), the feet for cavovarus deformity, and the abdominal reflexes and muscle tone for spasticity. When a person can cooperate, he or she is asked to bend forward as far as possible. This is known as the Adams forward bend test and is often performed on school students. If a prominence is noted, then scoliosis is a possibility and an X-ray may be done to confirm the diagnosis. As an alternative, a scoliometer may be used to diagnose the condition. When scoliosis is suspected, weight-bearing, full-spine AP/coronal (front-back view) and lateral/sagittal (side view) X-rays are usually taken to assess the scoliosis curves and the kyphosis and lordosis, as these can also be affected in individuals with scoliosis. Full-length standing spine X-rays are the standard method for evaluating the severity and progression of scoliosis, and whether it is congenital or idiopathic in nature. In growing individuals, serial radiographs are obtained at 3- to 12-month intervals to follow curve progression, and, in some instances, MRI investigation is warranted to look at the spinal cord. An average scoliosis patient is exposed to around 50–300 mGy of radiation from these radiographs over this period. The standard method for assessing the curvature quantitatively is measuring the Cobb angle, which is the angle between two lines, drawn perpendicular to the upper endplate of the uppermost vertebra involved and the lower endplate of the lowest vertebra involved. For people with two curves, Cobb angles are followed for both curves. In some people, lateral-bending X-rays are obtained to assess the flexibility of the curves or the primary and compensatory curves. Congenital and idiopathic scoliosis that develops before the age of 10 is referred to as early-onset scoliosis. Progressive idiopathic early-onset scoliosis can be a life-threatening condition with negative effects on pulmonary function.
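To make the geometric definition concrete, the following Python sketch computes a Cobb angle from the orientations of the two endplate lines marked on a coronal radiograph. It is an illustrative calculation only, not clinical software, and the landmark coordinates are invented.

# Cobb angle as the angle between the two endplate lines, each given by two
# (x, y) landmark points in image coordinates (made-up example values).
import math

def line_angle_deg(p, q):
    """Orientation of the line through p and q, in degrees, folded into (-90, 90]."""
    ang = math.degrees(math.atan2(q[1] - p[1], q[0] - p[0]))
    return (ang + 90.0) % 180.0 - 90.0   # lines have no direction, only orientation

def cobb_angle(upper_endplate, lower_endplate):
    a = line_angle_deg(*upper_endplate)
    b = line_angle_deg(*lower_endplate)
    diff = abs(a - b)
    return min(diff, 180.0 - diff)       # angle between the two lines, 0..90 degrees

upper = ((0.0, 0.0), (10.0, 2.5))        # upper endplate of uppermost involved vertebra
lower = ((0.0, 0.0), (10.0, -2.5))       # lower endplate of lowest involved vertebra
print(round(cobb_angle(upper, lower), 1))   # ~28.1 degrees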
Scoliosis that develops after the age of 10 is referred to as adolescent idiopathic scoliosis. Screening adolescents without symptoms for scoliosis is of unclear benefit. Definition Scoliosis is defined as a three-dimensional deviation in the axis of a person's spine. Most definitions, including that of the Scoliosis Research Society, define scoliosis as a Cobb angle of more than 10° to the right or left as the examiner faces the person, i.e. in the coronal plane. Scoliosis has been described as a biomechanical deformity, the progression of which depends on asymmetric forces, a relationship known as the Hueter–Volkmann law. Management Scoliosis curves do not straighten out on their own. Many children have slight curves that do not need treatment. In these cases, the children grow up to have normal posture, even though their small curves never go away. If the patient is still growing and has a larger curve, it is important to monitor the curve for change by periodic examination and standing X-rays as needed. Progression of a spinal abnormality requires examination by a neurosurgeon to determine whether active treatment is needed. The traditional medical management of scoliosis is complex and is determined by the severity of the curvature and skeletal maturity, which together help predict the likelihood of progression. The conventional options for children and adolescents are: Observation Bracing Surgery Physical therapy. Evidence suggests that scoliosis-specific exercises might prevent progression of the curve and possibly allow bracing and surgery to be avoided. For adults, treatment usually focuses on relieving any pain: Pain medication Posture checking Bracing Surgery Treatment for idiopathic scoliosis also depends upon the severity of the curvature, the spine's potential for further growth, and the risk that the curvature will progress. Mild scoliosis (less than 30° deviation) and moderate scoliosis (30–45°) can typically be treated conservatively with bracing in conjunction with scoliosis-specific exercises. Severe curvatures that rapidly progress may require surgery with spinal rod placement and spinal fusion. In all cases, early intervention offers the best results. A specific type of physical therapy may be useful. Evidence to support its use, however, is weak. Low-quality evidence suggests scoliosis-specific exercises (SSE) may be more effective than electrostimulation. Evidence for the Schroth method is insufficient to support its use, although significant improvements in function, vertebral angles and trunk asymmetry have been recorded following its implementation in the conservative management of scoliosis. Other forms of exercise intervention, such as global postural reeducation and the Klapp method, have lately been used in clinical practice for the therapeutic management of scoliosis. Bracing Bracing is normally done when the person has bone growth remaining and is, in general, implemented to hold the curve and prevent it from progressing to the point where surgery is recommended. In some juvenile cases, bracing has reduced curves significantly, for example from 40° out of the brace to 18°. Braces are sometimes prescribed for adults to relieve pain related to scoliosis. Bracing involves fitting the person with a device that covers the torso; in some cases, it extends to the neck (an example being the Milwaukee brace). 
The most commonly used brace is a TLSO, such as a Boston brace, a corset-like appliance that fits from armpits to hips and is custom-made from fiberglass or plastic. It is typically recommended to be worn 22–23 hours a day, and applies pressure on the curves in the spine. The effectiveness of the brace depends not only on brace design and orthotist skill, but also on the person's compliance and amount of wear per day. An alternative form of brace is a nighttime-only brace, which is worn only at night while the child sleeps and which overcorrects the deformity. While nighttime braces are more convenient for children and families, it is unknown whether they are as effective as conventional braces. The UK government has funded a large clinical trial (called the BASIS study) to resolve this uncertainty. The BASIS study is ongoing throughout the UK in all of the leading UK children's hospitals that treat scoliosis, with families encouraged to take part. Indications for bracing: people who are still growing who present with Cobb angles less than 20° should be closely monitored. People who are still growing who present with Cobb angles of 20 to 29° should be braced according to the risk of progression by considering age, Cobb angle increase over a six-month period, Risser sign, and clinical presentation. People who are still growing who present with Cobb angles greater than 30° should be braced. However, these are guidelines, and not every person will fit these criteria. For example, a person who is still growing with a 17° Cobb angle and significant thoracic rotation or flatback could be considered for nighttime bracing. At the opposite end of the growth spectrum, a person with a 29° Cobb angle and a Risser sign of three or four might not need to be braced because the potential for progression is reduced. The Scoliosis Research Society's recommendations for bracing include curves progressing to larger than 25°, curves presenting between 30 and 45°, Risser sign 0, 1, or 2 (an X-ray measurement of a pelvic growth area), and less than six months from the onset of menses in girls. Evidence supports that bracing prevents worsening of disease, but whether it changes quality of life, appearance, or back pain is unclear. 
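The angle-based indications above amount to a simple decision rule. The sketch below encodes only those thresholds for illustration; it deliberately ignores the Risser sign, the six-month progression, and the clinical presentation that the guidelines also weigh, and it is not a clinical tool.

```python
def bracing_indication(cobb_deg, still_growing, high_progression_risk=False):
    """Rough restatement of the bracing thresholds described above for a
    person who is still growing; real decisions also consider age, Risser
    sign, curve change over six months, and clinical presentation."""
    if not still_growing:
        return "bracing is generally not indicated once growth is complete"
    if cobb_deg < 20:
        return "close monitoring"
    if cobb_deg < 30:
        return "brace" if high_progression_risk else "monitor; brace if progression risk is judged high"
    return "brace"

print(bracing_indication(24, still_growing=True, high_progression_risk=True))  # brace
```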
A new tethering procedure (anterior vertebral body tethering) may be appropriate for some patients. Spine surgery can be painful and may also be associated with post-surgical pain. Different approaches for pain management are used in surgery, including epidural administration and systemic analgesia (also known as general analgesia). Epidural analgesia is often used during surgery, with combinations of local anesthetics and pain medications administered via an epidural injection. Evidence comparing the different analgesic approaches, their side effects and benefits, and which approach results in greater and longer-lasting pain relief after this type of surgery is of low to moderate quality. Prognosis A 50-year follow-up study published in the Journal of the American Medical Association (2003) asserted that the lifelong physical health, including cardiopulmonary and neurological functions, and mental health of people with idiopathic scoliosis are comparable to those of the general population. Scoliosis that interferes with normal systemic functions is "exceptional" and "rare", and "untreated [scoliosis] people had similar death rates and were just as functional and likely to lead productive lives 50 years after diagnosis as people with normal spines." In an earlier University of Iowa follow-up study, 91% of people with idiopathic scoliosis displayed normal pulmonary function, and their life expectancy was found to be 2% more than that of the general population. Later (2006–) studies corroborate these findings, adding that they are "reassuring for the adult patient who has adolescent onset idiopathic scoliosis in approximately the 50–70° range." These modern landmark studies supersede earlier studies (e.g. Mankin-Graham-Schauk 1964) that did implicate moderate idiopathic scoliosis in impaired pulmonary function. Generally, the prognosis of scoliosis depends on the likelihood of progression. The general rules of progression are that larger curves carry a higher risk of progression than smaller curves, and thoracic and double primary curves carry a higher risk of progression than single lumbar or thoracolumbar curves. In addition, people not having yet reached skeletal maturity have a higher likelihood of progression (i.e., if the person has not yet completed the adolescent growth spurt). Epidemiology Scoliosis affects 2–3% of the United States population, or about five to nine million cases. A scoliosis (spinal column curve) of 10° or less affects 1.5–3% of individuals. The age of onset is usually between 10 and 15 years (but can be younger), with children and adolescents making up to 85% of those diagnosed. This is due to rapid growth spurts during puberty, when spinal development is most susceptible to genetic and environmental influences. Because female adolescents undergo growth spurts before postural musculoskeletal maturity, scoliosis is more prevalent among females. Although fewer cases have been identified since Cobb angle analysis became the standard for diagnosis, scoliosis remains a significant condition, appearing in otherwise healthy children. As a deformity of the spine, scoliosis has been shown to influence pulmonary function, standing balance and gait in children. The impact of carrying backpacks on these three functions has been broadly researched. The incidence of idiopathic scoliosis (IS) stops after puberty, when skeletal maturity is attained; however, further curvature may occur during late adulthood due to vertebral osteoporosis and weakened musculature. 
History Ever since the condition was first described by the Greek physician Hippocrates, a cure has been sought. Treatments such as bracing and the insertion of rods into the spine were employed during the 1900s. In the mid-20th century, new treatments and improved screening methods were developed to reduce the progression of scoliosis in patients and alleviate their associated pain. During this period, schoolchildren were believed to develop poor posture as a result of working at their desks, and many were diagnosed with scoliosis. It was also considered to be caused by tuberculosis or poliomyelitis, diseases that were successfully managed using vaccines and antibiotics. The American orthopaedic surgeon Alfred Shands Jr. discovered that two percent of patients had non-disease-related scoliosis, later termed idiopathic scoliosis, or the "cancer of orthopaedic surgery". These patients were treated with questionable remedies. A theory at the time, now discredited, was that the condition needed to be detected early to halt its progression, and so some schools made screening for scoliosis mandatory. Measurements of shoulder height, leg length and spinal curvature were made, and the ability to bend forwards, along with body posture, was tested, but students were sometimes misdiagnosed because of their poor posture. An early treatment was the Milwaukee brace, a rigid contraption of metal rods attached to a plastic or leather girdle, designed to straighten the spine. Because of the constant pressure applied to the spine, the brace was uncomfortable. It caused jaw and muscle pain and skin irritation, as well as low self-esteem. Surgery In 1962, the American orthopaedic surgeon Paul Harrington introduced a metal spinal system of instrumentation that assisted with straightening the spine, as well as holding it rigid while fusion took place. The now obsolete Harrington rod operated on a ratchet system, attached by hooks to the spine at the top and bottom of the curvature, that when cranked would distract, or straighten, the curve. The Harrington rod obviated the need for prolonged casting, allowing patients greater mobility in the postoperative period and significantly reducing the quality-of-life burden of fusion surgery. The Harrington rod was the precursor to most modern spinal instrumentation systems. A major shortcoming was that it failed to produce a posture wherein the skull would be in proper alignment with the pelvis, and it did not address rotational deformity. As the person aged, there would be increased wear and tear, early-onset arthritis, disc degeneration, muscular stiffness, and acute pain. "Flatback" became the medical name for a related complication, especially for those who had lumbar scoliosis. In the 1960s, the gold standard treatment for idiopathic scoliosis was a posterior approach using a single Harrington rod. Post-operative recovery involved bed rest, casts, and braces. Poor results became apparent over time. In the 1970s, an improved technique was developed using two rods and wires attached at each level of the spine. This segmented instrumentation system allowed patients to become mobile soon after surgery. In the 1980s, Cotrel–Dubousset instrumentation improved fixation and addressed sagittal imbalance and rotational defects unresolved by the Harrington rod system. This technique used multiple hooks with rods to give stronger fixation in three dimensions, usually eliminating the need for postoperative bracing. 
Evolution There are links between human spinal morphology, bipedality, and scoliosis which suggest an evolutionary basis for the condition. Scoliosis has not been found in chimpanzees or gorillas. Thus, it has been hypothesized that scoliosis may actually be related to humans' morphological differences from these apes. Other apes have a shorter and less mobile lower spine than humans. Some of the lumbar vertebrae in Pan are "captured", meaning that they are held fast between the ilium bones of the pelvis. Compared to humans, Old World monkeys have far larger erector spinae muscles, which are the muscles that hold the spine steady. These factors make the lumbar spine of most primates less flexible and far less likely to deviate than that of humans. While this may explicitly relate only to lumbar scolioses, small imbalances in the lumbar spine could precipitate thoracic problems as well. Scoliosis may be a byproduct of strong selection for bipedalism. For a bipedal stance, a highly mobile, elongated lower spine is very beneficial. For instance, the human spine takes on an S-shaped curve with lumbar lordosis, which allows for better balance and support of an upright trunk. Selection for bipedality was likely strong enough to outweigh the costs of maintaining such a disorder. Bipedality is hypothesized to have emerged for a variety of different reasons, many of which would have conferred fitness advantages. It may increase viewing distance, which can be beneficial in hunting and foraging as well as protection from predators or other humans; it makes long-distance travel more efficient for foraging or hunting; and it facilitates terrestrial feeding from grasses, trees, and bushes. Given the many benefits of bipedality, which depends on a particularly formed spine, it is likely that selection for bipedalism played a large role in the development of the spine as we see it today, in spite of the potential for "scoliotic deviations". According to the fossil record, scoliosis may have been more prevalent among earlier hominids such as Australopithecus and Homo erectus, when bipedality was first emerging. Their fossils indicate that there may have been selection over time for a slight reduction in lumbar length to what we see today, favoring a spine that could efficiently support bipedality with a lower risk of scoliosis. Society and culture The cost of scoliosis involves both monetary loss and lifestyle limitations that increase with severity. Respiratory deficiencies may arise from thoracic deformities and cause abnormal breathing. This directly affects capacity for exercise and work, decreasing the overall quality of life. In the United States, the average hospital cost for cases involving surgical procedures was $30,000 to $60,000 per person in 2010. As of 2006, the cost of bracing was up to $5,000 during rapid growth periods, when braces must be consistently replaced across multiple follow-ups. The month of June is recognized as Scoliosis Awareness Month to highlight and spread awareness of scoliosis, emphasizing its wide impact and the need for early detection. Research Genetic testing for adolescent idiopathic scoliosis, which became available in 2009 and is still under investigation, attempts to gauge the likelihood of curve progression.
Biology and health sciences
Types
Health
129139
https://en.wikipedia.org/wiki/Periodontal%20disease
Periodontal disease
Periodontal disease, also known as gum disease, is a set of inflammatory conditions affecting the tissues surrounding the teeth. In its early stage, called gingivitis, the gums become swollen and red and may bleed. It is considered the main cause of tooth loss for adults worldwide. In its more serious form, called periodontitis, the gums can pull away from the tooth, bone can be lost, and the teeth may loosen or fall out. Halitosis (bad breath) may also occur. Periodontal disease typically arises from the development of plaque biofilm, which harbors harmful bacteria such as Porphyromonas gingivalis and Treponema denticola. These bacteria infect the gum tissue surrounding the teeth, leading to inflammation and, if left untreated, progressive damage to the teeth and gum tissue. Recent meta-analyses have shown that the composition of the oral microbiota and its response to periodontal disease differ between men and women. These differences are particularly notable in the advanced stages of periodontitis, suggesting that sex-specific factors may influence susceptibility and progression. Factors that increase the risk of disease include smoking, diabetes, HIV/AIDS, family history, high levels of homocysteine in the blood and certain medications. Diagnosis involves inspecting the gum tissue around the teeth both visually and with a probe, along with X-rays looking for bone loss around the teeth. Treatment involves good oral hygiene and regular professional teeth cleaning. Recommended oral hygiene includes daily brushing and flossing. In certain cases antibiotics or dental surgery may be recommended. Clinical investigations demonstrate that quitting smoking and making dietary changes enhance periodontal health. Globally, 538 million people were estimated to be affected in 2015, and the disease is thought to affect 10–15% of the population generally. In the United States, nearly half of those over the age of 30 are affected to some degree, and about 70% of those over 65 have the condition. Males are affected more often than females. Signs and symptoms In the early stages, periodontitis has very few symptoms, and in many individuals the disease has progressed significantly before they seek treatment. Symptoms may include: Redness or bleeding of gums while brushing teeth, using dental floss or biting into hard food (e.g., apples) (though this may also occur in gingivitis, where there is no attachment loss) Gum swelling that recurs Spitting out blood after brushing teeth Halitosis, or bad breath, and a persistent metallic taste in the mouth Gingival recession, resulting in apparent lengthening of teeth (this may also be caused by heavy-handed brushing or with a stiff toothbrush) Deep pockets between the teeth and the gums (pockets are sites where the attachment has been gradually destroyed by collagen-destroying enzymes, known as collagenases) Loose teeth, in the later stages (though this may occur for other reasons, as well) Gingival inflammation and bone destruction are largely painless. Hence, people may wrongly assume painless bleeding after teeth cleaning is insignificant, although this may be a symptom of progressing periodontitis in that person. Associated conditions Periodontitis has been linked to increased inflammation in the body, such as indicated by raised levels of C-reactive protein and interleukin-6. It is associated with an increased risk of stroke, myocardial infarction, atherosclerosis and hypertension. 
It is also linked in those over 60 years of age to impairments in delayed memory and calculation abilities. Individuals with impaired fasting glucose and diabetes mellitus have higher degrees of periodontal inflammation and often have difficulties with balancing their blood glucose level, owing to the constant systemic inflammatory state caused by the periodontal inflammation. Although no causal association has been proven, there is an association between chronic periodontitis and erectile dysfunction, inflammatory bowel disease, and heart disease. Diabetes and periodontal disease A positive correlation between raised levels of glucose within the blood and the onset or progression of periodontal disease has been shown in the current literature. Data has also shown that there is a significant increase in the incidence or progression of periodontitis in patients with uncontrolled diabetes compared to those who do not have diabetes or have well-controlled diabetes. In uncontrolled diabetes, the formation of reactive oxygen species can damage cells such as those in the connective tissue of the periodontal ligament, resulting in cell necrosis or apoptosis. Furthermore, individuals with uncontrolled diabetes mellitus who have frequent exposure to periodontal pathogens have a greater immune response to these bacteria. This can subsequently cause and/or accelerate periodontal tissue destruction, leading to periodontal disease. Oral cancer and periodontal disease Current literature suggests a link between periodontal disease and oral cancer. Studies have confirmed increased levels of systemic inflammation markers, such as C-reactive protein and interleukin-6, in patients with advanced periodontal disease. The link between systemic inflammation and oral cancer has also been well established. Both periodontal disease and cancer risk are associated with genetic susceptibility, and it is possible that there is a positive association through a shared genetic susceptibility in the two diseases. Due to the low incidence rate of oral cancer, high-quality studies proving the association between the two have not been possible; however, future larger studies may aid in the identification of individuals at higher risk. Systemic implications Periodontal disease (PD) can be described as an inflammatory condition affecting the supporting structures of the teeth. Studies have shown that PD is associated with higher levels of systemic inflammatory markers such as interleukin-6 (IL-6), C-reactive protein (CRP) and tumor necrosis factor (TNF). By comparison, elevated levels of these inflammatory markers are also associated with cardiovascular disease and cerebrovascular events such as ischemic strokes. The presence of a wide spectrum of inflammatory oral diseases can increase the risk of a stroke episode in both the acute and chronic phases. The inflammatory markers CRP and IL-6 are known risk factors for stroke. Both are also biomarkers of PD and are found at increased levels after daily activities, such as mastication or toothbrushing, are performed. Bacteria from the periodontal pockets enter the bloodstream during these activities, and the current literature suggests that this may be a possible trigger for aggravation of the stroke process. Other mechanisms have been suggested: PD is a known chronic infection, and it can aid in the promotion of atherosclerosis by the deposition of cholesterol, cholesterol esters and calcium within the subendothelial layer of vessel walls. 
Atherosclerotic plaque that is unstable may rupture and release debris and thrombi that can travel to different parts of the circulatory system, causing embolization and therefore an ischemic stroke. PD has accordingly been suggested as an independent risk factor for stroke. A variety of cardiovascular diseases can also be associated with periodontal disease. In patients, higher levels of inflammatory markers such as TNF, IL-1, IL-6 and IL-8 can lead to progression of atherosclerosis and to the development and perpetuation of atrial fibrillation, as they are associated with platelet and coagulation cascade activation, leading to thrombosis and thrombotic complications. Experimental animal studies have shown a link between periodontal disease, oxidative stress and cardiac stress. Oxidative stress favours the development and progression of heart failure as it causes cellular dysfunction, oxidation of proteins and lipids, and damage to the deoxyribonucleic acid (DNA), stimulating fibroblast proliferation and metalloproteinase activation and favouring cardiac remodelling. During the COVID-19 pandemic, periodontitis was significantly associated with a higher risk of complications from COVID-19, including ICU admission, need for assisted ventilation and death, as well as increased blood levels of markers such as D-dimer, white blood cell count and CRP, which are linked with worse disease outcomes. Clinical significance Inadequate nutrition and periodontal disease Periodontal disease is multifactorial, and nutrition can significantly affect its prognosis. Studies have shown that a healthy and well-balanced diet is crucial to maintaining periodontal health. Nutritional deficiencies can lead to oral manifestations such as those seen in scurvy and rickets. Different vitamins play different roles in periodontal health: Vitamin C: Deficiencies may lead to gingival inflammation and bleeding, subsequently advancing periodontal disease Vitamin D: Deficiencies may lead to delayed post-surgical healing Vitamin E: Deficiencies may lead to impaired gingival wound healing Vitamin K: Deficiencies may lead to gingival bleeding Nutritional supplements of vitamins have also been shown to positively affect healing after periodontal surgery, and many of these vitamins can be found in a variety of foods eaten within a regular healthy diet. Therefore, vitamin intakes (particularly vitamin C) and dietary supplements not only play a role in improving periodontal health, but also influence the rate of bone formation and periodontal regeneration. However, studies supporting the correlation between nutrition and periodontal health are limited, and more long-term research is required to confirm this. Causes Periodontitis is an inflammation of the periodontium, i.e., the tissues that support the teeth. The periodontium consists of four tissues: gingiva, or gum tissue, cementum, or outer layer of the roots of teeth, alveolar bone, or the bony sockets into which the teeth are anchored, and periodontal ligaments (PDLs), which are the connective tissue fibers that run between the cementum and the alveolar bone. The primary cause of gingivitis is poor or ineffective oral hygiene, which leads to the accumulation of a mycotic and bacterial matrix at the gum line, called dental plaque. Other contributors are poor nutrition and underlying medical issues such as diabetes. Diabetics must be meticulous with their home care to control periodontal disease. 
New finger-prick tests have been approved by the Food and Drug Administration in the US, and are being used in dental offices to identify and screen people for possible contributory causes of gum disease, such as diabetes. In some people, gingivitis progresses to periodontitis: with the destruction of the gingival fibers, the gum tissues separate from the tooth, forming a deepened sulcus called a periodontal pocket. Subgingival microorganisms (those that exist under the gum line) colonize the periodontal pockets and cause further inflammation in the gum tissues and progressive bone loss. Examples of secondary causes are those things that, by definition, cause microbial plaque accumulation, such as restoration overhangs and root proximity. Smoking is another factor that increases the occurrence of periodontitis, directly or indirectly, and may interfere with or adversely affect its treatment. It is arguably the most important environmental risk factor for periodontitis. Research has shown that smokers have more bone loss, attachment loss and tooth loss compared to non-smokers. This is likely due to several effects of smoking on the immune response, including decreased wound healing, suppression of antibody production, and the reduction of phagocytosis by neutrophils. Ehlers–Danlos syndrome and Papillon–Lefèvre syndrome (also known as palmoplantar keratoderma) are also risk factors for periodontitis. If left undisturbed, microbial plaque calcifies to form calculus, which is commonly called tartar. Calculus above and below the gum line must be removed completely by the dental hygienist or dentist to treat gingivitis and periodontitis. Although the primary cause of both gingivitis and periodontitis is the microbial plaque that adheres to the tooth surfaces, there are many other modifying factors. A very strong risk factor is one's genetic susceptibility. Several conditions and diseases, including Down syndrome, diabetes, and other diseases that affect one's resistance to infection, also increase susceptibility to periodontitis. Periodontitis may be associated with higher stress. Periodontitis occurs more often in people from the lower end of the socioeconomic scale than in people from the upper end. Genetics appear to play a role in determining the risk for periodontitis. It is believed genetics could explain why some people with good plaque control have advanced periodontitis, whilst some others with poor oral hygiene are free from the disease. Genetic factors which could modify the risk of a person developing periodontitis include: Defects of phagocytosis: a person may have hypo-responsive phagocytes. Hyper-production of interleukins, prostaglandins and cytokines, resulting in an exaggerated immune response. Interleukin 1 (IL-1) gene polymorphism: people with this polymorphism produce more IL-1, and subsequently are more at risk of developing chronic periodontitis. Diabetes appears to exacerbate the onset, progression, and severity of periodontitis. Although the majority of research has focused on type 2 diabetes, type 1 diabetes appears to have an identical effect on the risk for periodontitis. The extent of the increased risk of periodontitis is dependent on the level of glycaemic control. Therefore, in well-managed diabetes there seems to be only a small effect of diabetes on the risk for periodontitis. However, the risk increases exponentially as glycaemic control worsens. Overall, the increased risk of periodontitis in diabetics is estimated to be between two and three times higher. 
So far, the mechanisms underlying the link are not fully understood, but it is known to involve aspects of inflammation, immune functioning, neutrophil activity, and cytokine biology. Hormonal fluctuations can also play a significant role in the development and progression of gingivitis and periodontitis. Changes in hormone levels, particularly during puberty, menstruation, pregnancy, and menopause, can lead to increased sensitivity and inflammatory responses in the gums. For example, elevated oestrogen and progesterone during pregnancy can heighten the inflammatory response to dental plaque, making pregnant individuals more susceptible to gingival disease. Mechanism As dental plaque or biofilm accumulates on the teeth near and below the gums, there is some dysbiosis of the normal oral microbiome. As of 2017 it was not certain what species were most responsible for causing harm, but gram-negative anaerobic bacteria, spirochetes, and viruses have been suggested; in individual people it is sometimes clear that one or more species is driving the disease. Research in 2004 implicated the gram-negative anaerobic species Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, Bacteroides forsythus and Eikenella corrodens. Plaque may be soft and uncalcified, hard and calcified, or both; for plaques that are on teeth the calcium comes from saliva; for plaques below the gumline, it comes from blood via oozing of inflamed gums. The damage to teeth and gums comes from the immune system as it attempts to destroy the microbes that are disrupting the normal symbiosis between the oral tissues and the oral microbe community. As in other tissues, Langerhans cells in the epithelium take up antigens from the microbes, and present them to the immune system, leading to movement of white blood cells into the affected tissues. This process in turn activates osteoclasts which begin to destroy bone, and it activates matrix metalloproteinases that destroy ligaments. So, in summary, it is bacteria which initiates the disease, but key destructive events are brought about by the exaggerated response from the host's immune system. Classification There were several attempts to introduce an agreed-upon classification system for periodontal diseases: in 1989, 1993, 1999, and 2017. 1999 classification The 1999 classification system for periodontal diseases and conditions listed seven major categories of periodontal diseases, of which categories 2–6 are termed destructive periodontal disease, because the damage is essentially irreversible. The seven categories are as follows: Gingivitis Chronic periodontitis Aggressive periodontitis Periodontitis as a manifestation of systemic disease Necrotizing ulcerative gingivitis/periodontitis Abscesses of the periodontium Combined periodontic-endodontic lesions Moreover, terminology expressing both the extent and severity of periodontal diseases is appended to the terms above to denote the specific diagnosis of a particular person or group of people. Severity The "severity" of disease refers to the amount of periodontal ligament fibers that have been lost, termed "clinical attachment loss". According to the 1999 classification, the severity of chronic periodontitis is graded as follows: Slight: 1–2 mm of attachment loss Moderate: 3–4 mm of attachment loss Severe: ≥ 5 mm of attachment loss Extent The "extent" of disease refers to the proportion of the dentition affected by the disease in terms of percentage of sites. 
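A minimal sketch of the 1999 severity grading from clinical attachment loss (CAL) is shown below. The millimetre cut-offs are the commonly cited values assumed above, and the function is illustrative rather than a diagnostic tool.

```python
def severity_1999(attachment_loss_mm):
    """Severity of chronic periodontitis under the 1999 classification,
    from clinical attachment loss (CAL) in millimetres, using the cut-offs
    given above (slight 1-2 mm, moderate 3-4 mm, severe >= 5 mm)."""
    if attachment_loss_mm >= 5:
        return "severe"
    if attachment_loss_mm >= 3:
        return "moderate"
    if attachment_loss_mm >= 1:
        return "slight"
    return "no significant attachment loss"

for cal in (1, 3, 6):
    print(f"{cal} mm CAL -> {severity_1999(cal)}")
```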
Sites are defined as the positions at which probing measurements are taken around each tooth and, generally, six probing sites around each tooth are recorded, as follows: Mesiobuccal Mid-buccal Distobuccal Mesiolingual Mid-lingual Distolingual If up to 30% of sites in the mouth are affected, the manifestation is classified as "localized"; for more than 30%, the term "generalized" is used. 2017 classification The 2017 classification of periodontal diseases is as follows: Periodontal health, gingival disease and conditions Periodontal health and gingival health Clinical gingival health on an intact periodontium Clinical gingival health on a reduced periodontium Stable periodontitis patient Non-periodontitis patient Gingivitis — Dental biofilm induced Associated with the dental biofilm alone Mediated by systemic and local risk factors Drug induced gingival enlargement. Gingival diseases — Non dental biofilm induced Genetic/developmental disorders Specific infections Inflammatory and immune conditions Reactive processes Neoplasms Endocrine, nutritional and metabolic Traumatic lesions Gingival pigmentation. Periodontitis Necrotizing periodontal diseases Necrotizing gingivitis Necrotizing periodontitis Necrotizing stomatitis Periodontitis as a manifestation of systemic disease Periodontitis Other conditions affecting the periodontium (Periodontal Manifestations of Systemic Diseases and Developmental and Acquired Conditions) Systemic diseases or conditions affecting the periodontal supporting tissues Other periodontal conditions Periodontal abscesses Endodontic–periodontal lesions Mucogingival deformities and conditions Gingival phenotype Gingival/soft tissue recession Lack of gingiva Decreased vestibular depth Aberrant frenum/muscle position Gingival excess Abnormal color Condition of the exposed root surface Traumatic occlusal forces Primary occlusal trauma Secondary occlusal trauma Tooth and prosthesis related factors Localized tooth-related factors Localized dental prostheses-related factors Peri-implant diseases and conditions Peri-implant health Peri-implant mucositis Peri-implantitis Peri-implant soft and hard tissue deficiencies Staging The goal of staging periodontitis is to classify the severity of damage and assess the specific factors that may affect management. According to the 2017 classification, periodontitis is divided into four stages, after considering factors such as: Amount and percentage of bone loss radiographically Clinical attachment loss, probing depth Presence of furcation Vertical bony defects History of tooth loss related to periodontitis Tooth hypermobility due to secondary occlusal trauma Grading According to the 2017 classification, the grading system for periodontitis consists of three grades: Grade A: Slow progression of disease; no evidence of bone loss over the last five years Grade B: Moderate progression; < 2 mm of bone loss over the last five years Grade C: Rapid progression or high risk of future progression; ≥ 2 mm of bone loss over five years Risk factors affecting which grade a person is classified into include: Smoking Diabetes Periodontal probing Dentists and dental hygienists measure periodontal disease using a device called a periodontal probe. This thin "measuring stick" is gently placed into the space between the gums and the teeth, and slipped below the gumline. If the probe can slip more than 3 mm below the gumline, the person is said to have a gingival pocket if no migration of the epithelial attachment has occurred, or a periodontal pocket if apical migration has occurred. 
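The extent threshold and the grading criteria above translate directly into simple rules. The sketch below encodes only those two pieces for illustration; real grading also weighs risk factors such as smoking and diabetes, which are ignored here.

```python
def extent_of_disease(affected_sites, total_sites):
    """'Localized' if up to 30% of probing sites are affected, otherwise
    'generalized', per the threshold given above."""
    return "localized" if affected_sites / total_sites <= 0.30 else "generalized"

def grade_2017(bone_loss_mm_over_5_years):
    """Grade from radiographic bone loss over the last five years
    (A: none, B: less than 2 mm, C: 2 mm or more), ignoring the risk
    factors that can shift a patient's grade."""
    if bone_loss_mm_over_5_years <= 0:
        return "Grade A"
    return "Grade B" if bone_loss_mm_over_5_years < 2 else "Grade C"

# Hypothetical patient: 40 of 168 affected sites (28 teeth x 6 sites each)
# and 2.5 mm of bone loss over five years -> localized disease, Grade C.
print(extent_of_disease(40, 168), grade_2017(2.5))
```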
This is somewhat of a misnomer, as any depth is, in essence, a pocket, which in turn is defined by its depth, i.e., a 2-mm pocket or a 6-mm pocket. However, pockets are generally accepted as self-cleansable (at home, by the person, with a toothbrush) if they are 3 mm or less in depth. This is important because if a pocket is deeper than 3 mm around the tooth, at-home care will not be sufficient to cleanse the pocket, and professional care should be sought. When pocket depths increase further, the hand instruments and ultrasonic scalers used by dental professionals may not reach deeply enough into the pocket to clean out the microbial plaque that causes gingival inflammation. In such a situation, the bone or the gums around that tooth should be surgically altered, or the area will remain inflamed, which will likely result in more bone loss around that tooth. An additional way to stop the inflammation would be for the person to receive subgingival antibiotics (such as minocycline) or undergo some form of gingival surgery to access the depths of the pockets and perhaps even change the pocket depths so they become 3 mm or less in depth and can once again be properly cleaned by the person at home with his or her toothbrush. Prevention Daily oral hygiene measures to prevent periodontal disease include: Brushing properly on a regular basis (at least twice daily), with the person attempting to direct the toothbrush bristles underneath the gumline, helps disrupt the bacterial-mycotic growth and formation of subgingival plaque. Flossing daily and using interdental brushes (if the space between teeth is large enough), as well as cleaning behind the last tooth, the third molar, in each quadrant. Using an antiseptic mouthwash: Chlorhexidine gluconate-based mouthwash in combination with careful oral hygiene may cure gingivitis, although it cannot reverse any attachment loss due to periodontitis. Regular dental check-ups and professional teeth cleaning as required: Dental check-ups serve to monitor the person's oral hygiene methods and levels of attachment around teeth, identify any early signs of periodontitis, and monitor response to treatment. Typically, dental hygienists (or dentists) use special instruments to clean (debride) teeth below the gumline and disrupt any plaque growing below the gumline. This is a standard treatment to prevent any further progress of established periodontitis. Studies show that after such a professional cleaning (periodontal debridement), microbial plaque tends to grow back to precleaning levels after about three to four months. Nonetheless, the continued stabilization of a person's periodontal state depends largely, if not primarily, on the person's oral hygiene at home, as well as on the go. Without daily oral hygiene, periodontal disease will not be overcome, especially if the person has a history of extensive periodontal disease. Management The cornerstone of successful periodontal treatment starts with establishing excellent oral hygiene. This includes twice-daily brushing with daily flossing. Also, the use of an interdental brush is helpful if space between the teeth allows. For smaller spaces, products such as narrow picks with soft rubber bristles provide excellent manual cleaning. Persons with dexterity problems, such as with arthritis, may find oral hygiene to be difficult and may require more frequent professional care and/or the use of a powered toothbrush. 
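As a rough summary of the pocket-depth logic above, the sketch below maps a probing depth to the level of care implied. The cut-off separating "needs professional cleaning" from "instruments may not reach" is an assumed illustrative value, since the exact figure is not given in the text, and none of this is clinical advice.

```python
def pocket_care_level(pocket_depth_mm, deep_pocket_cutoff_mm=6):
    """Care level suggested by the pocket depths discussed above.
    Up to 3 mm is generally self-cleansable at home; deeper pockets need
    professional cleaning below the gumline; beyond `deep_pocket_cutoff_mm`
    (an assumed value, not from the text) instruments may not reach, and
    surgery or subgingival antibiotics may be considered."""
    if pocket_depth_mm <= 3:
        return "home care (toothbrush) is usually sufficient"
    if pocket_depth_mm <= deep_pocket_cutoff_mm:
        return "professional cleaning below the gumline is needed"
    return "surgical access or subgingival antibiotics may be considered"

for depth in (2, 5, 8):
    print(f"{depth} mm pocket: {pocket_care_level(depth)}")
```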
Persons with periodontitis must realize it is a chronic inflammatory disease and a lifelong regimen of excellent hygiene and professional maintenance care with a dentist/hygienist or periodontist is required to maintain affected teeth. Initial therapy Removal of microbial plaque and calculus is necessary to establish periodontal health. The first step in the treatment of periodontitis involves nonsurgical cleaning below the gum line with a procedure called "root surface instrumentation" (RSI), which causes a mechanical disturbance to the bacterial biofilm below the gumline. This procedure involves the use of specialized curettes to mechanically remove plaque and calculus from below the gumline, and may require multiple visits and local anesthesia to adequately complete. In addition to initial RSI, it may also be necessary to adjust the occlusion (bite) to prevent excessive force on teeth that have reduced bone support. Also, it may be necessary to complete any other dental needs, such as replacement of rough, plaque-retentive restorations, closure of open contacts between teeth, and any other requirements diagnosed at the initial evaluation. It is important to note that RSI is different from scaling and root planing: RSI removes only the calculus, while scaling and root planing removes the calculus as well as underlying softened dentine, which leaves behind a smooth and glassy surface that is not a requisite for periodontal healing. Therefore, RSI is now advocated over root planing. Reevaluation Nonsurgical scaling and root planing are usually successful if the periodontal pockets are relatively shallow. The dentist or hygienist must perform a re-evaluation four to six weeks after the initial scaling and root planing, to determine if the person's oral hygiene has improved and inflammation has regressed. Probing should be avoided then, and an analysis by gingival index should determine the presence or absence of inflammation. The monthly reevaluation of periodontal therapy should involve periodontal charting as a better indication of the success of treatment, and to see if other courses of treatment can be identified. Pocket depths that remain deep after initial therapy, with bleeding upon probing, indicate continued active disease and will very likely lead to further bone loss over time. This is especially true in molar tooth sites where furcations (areas between the roots) have been exposed. Surgery If nonsurgical therapy is found to have been unsuccessful in managing signs of disease activity, periodontal surgery may be needed to stop progressive bone loss and regenerate lost bone where possible. Many surgical approaches are used in the treatment of advanced periodontitis, including open flap debridement and osseous surgery, as well as guided tissue regeneration and bone grafting. The goal of periodontal surgery is access for definitive calculus removal and surgical management of bony irregularities which have resulted from the disease process, to reduce pockets as much as possible. Long-term studies have shown that, in moderate to advanced periodontitis, surgically treated cases often have less further breakdown over time and, when coupled with a regular post-treatment maintenance regimen, are successful in nearly halting tooth loss in about 85% of diagnosed people. Local drug delivery Local drug delivery in periodontology has gained acceptance and popularity compared to systemic drugs due to a decreased risk of development of resistant flora and other side effects. 
A meta-analysis of local tetracycline found improvement. Local application of statins may be useful. Systemic drug delivery Systemic drug delivery in conjunction with non-surgical therapy may be used as a means to reduce the percentage of the bacterial plaque load in the mouth. Many different antibiotics, and also combinations of them, have been tested; however, there is as yet only very low-certainty evidence of any significant difference in the short and long term compared to non-surgical therapy alone. It may be beneficial to limit the use of systemic drugs, since bacteria can develop antimicrobial resistance and some specific antibiotics might induce temporary mild adverse effects, such as nausea, diarrhoea and gastrointestinal disturbances. Adjunctive systemic antimicrobial treatment There is currently only low-quality evidence on whether adjunctive systemic antimicrobials are beneficial for the non-surgical treatment of periodontitis. It is not known whether some antibiotics are better than others when used alongside scaling and root planing. Maintenance Once successful periodontal treatment has been completed, with or without surgery, an ongoing regimen of "periodontal maintenance" is required. This involves regular checkups and detailed cleanings every three months to prevent repopulation of periodontitis-causing microorganisms, and to closely monitor affected teeth so early treatment can be rendered if the disease recurs. Usually, periodontal disease exists due to poor plaque control resulting from inappropriate brushing. Therefore, if the brushing techniques are not modified, a periodontal recurrence is probable. Other Most alternative "at-home" gum disease treatments involve injecting antimicrobial solutions, such as hydrogen peroxide, into periodontal pockets via slender applicators or oral irrigators. This process disrupts anaerobic micro-organism colonies and is effective at reducing infections and inflammation when used daily. A number of other products, functionally equivalent to hydrogen peroxide, are commercially available, but at substantially higher cost. However, such treatments do not address calculus formations, and so are short-lived, as anaerobic microbial colonies quickly regenerate in and around calculus. Doxycycline may be given alongside the primary therapy of scaling (see § initial therapy). Doxycycline has been shown to improve indicators of disease progression (namely probing depth and attachment level). Its mechanism of action involves inhibition of matrix metalloproteinases (such as collagenase), which degrade the teeth's supporting tissues (periodontium) under inflammatory conditions. To avoid killing beneficial oral microbes, only small doses of doxycycline (20 mg) are used. Phage therapy may be a new therapeutic alternative. Prognosis If people have 7-mm or deeper pockets around their teeth, as measured by a periodontal probe, then they are likely at risk of eventual tooth loss over the years. If this periodontal condition is not identified and people remain unaware of the progressive nature of the disease, then years later, they may be surprised that some teeth will gradually become loose and may need to be extracted, sometimes due to a severe infection or even pain. According to the Sri Lankan tea laborer study, in the absence of any oral hygiene activity, approximately 10% will experience severe periodontal disease with rapid loss of attachment (>2 mm/year). About 80% will experience moderate loss (1–2 mm/year) and the remaining 10% will not experience any loss. 
Epidemiology Periodontitis is very common, and is widely regarded as the second most common dental disease worldwide, after dental decay; in the United States it has a prevalence of 30–50% of the population, but only about 10% have severe forms. Chronic periodontitis affects about 750 million people, or about 10.8% of the world population, as of 2010. Like other conditions intimately related to access to hygiene and basic medical monitoring and care, periodontitis tends to be more common in economically disadvantaged populations or regions. Its occurrence decreases with a higher standard of living. In Israeli populations, individuals of Yemenite, North-African, South Asian, or Mediterranean origin have a higher prevalence of periodontal disease than individuals of European descent. Periodontitis is frequently reported to be socially patterned, i.e. people from the lower end of the socioeconomic scale are affected more often than people from the upper end of the socioeconomic scale. History An ancient hominid from 3 million years ago had gum disease. Records from China and the Middle East, along with archaeological studies, show that mankind has had periodontal disease for at least many thousands of years. In Europe and the Middle East, archaeological research looking at ancient plaque DNA shows that in the ancient hunter-gatherer lifestyle there was less gum disease, but that it became more common when more cereals were eaten. Ötzi the Iceman was shown to have had severe gum disease. Furthermore, research has shown that in the Roman era in the UK, there was less periodontal disease than in modern times. The researchers suggest that smoking may be a key factor in this difference. Society and culture Etymology The word "periodontitis" comes from the Greek peri, "around", odous (genitive odontos), "tooth", and the suffix -itis, which in medical terminology denotes "inflammation". The word pyorrhea (alternative spelling: pyorrhoea) comes from the Greek pyorrhoia, "discharge of matter", itself from pyon, "discharge from a sore", rhoē, "flow", and the suffix -ia. In English this term can describe, as in Greek, any discharge of pus; i.e. it is not restricted to these diseases of the teeth. Economics It is estimated that lost productivity due to severe periodontitis costs the global economy about US$54 billion each year. Other animals Periodontal disease is the most common disease found in dogs and affects more than 80% of dogs aged three years or older. Its prevalence in dogs increases with age, but decreases with increasing body weight; i.e., toy and miniature breeds are more severely affected. Recent research undertaken at the Waltham Centre for Pet Nutrition has established that the bacteria associated with gum disease in dogs are not the same as in humans. Systemic disease may develop because the gums are very vascular (have a good blood supply). The blood stream carries these anaerobic micro-organisms, and they are filtered out by the kidneys and liver, where they may colonize and create microabscesses. The microorganisms traveling through the blood may also attach to the heart valves, causing vegetative infective endocarditis (infected heart valves). Additional diseases that may result from periodontitis include chronic bronchitis and pulmonary fibrosis.
Biology and health sciences
Specific diseases
Health
129618
https://en.wikipedia.org/wiki/Cyanobacteria
Cyanobacteria
Cyanobacteria, also called Cyanobacteriota or Cyanophyta, are a phylum of autotrophic gram-negative bacteria that can obtain biological energy via oxygenic photosynthesis. The name "cyanobacteria" refers to their bluish green (cyan) color, which forms the basis of cyanobacteria's informal common name, blue-green algae, although as prokaryotes they are not scientifically classified as algae. Cyanobacteria are probably the most numerous taxon to have ever existed on Earth and the first organisms known to have produced oxygen, having appeared in the middle Archean eon and apparently originated in a freshwater or terrestrial environment. Their photopigments can absorb the red- and blue-spectrum frequencies of sunlight (thus reflecting a greenish color) to split water molecules into hydrogen ions and oxygen. The hydrogen ions are used to react with carbon dioxide to produce complex organic compounds such as carbohydrates (a process known as carbon fixation), and the oxygen is released as a byproduct. By continuously producing and releasing oxygen over billions of years, cyanobacteria are thought to have converted the early Earth's anoxic, weakly reducing prebiotic atmosphere into an oxidizing one with free gaseous oxygen (which previously would have been immediately removed by various surface reductants), resulting in the Great Oxidation Event and the "rusting of the Earth" during the early Proterozoic, dramatically changing the composition of life forms on Earth. The subsequent adaptation of early single-celled organisms to survive in oxygenated environments likely led to endosymbiosis between anaerobes and aerobes, and hence the evolution of eukaryotes during the Paleoproterozoic. Cyanobacteria use photosynthetic pigments such as various forms of chlorophyll, carotenoids, and phycobilins to convert the photonic energy in sunlight to chemical energy. Unlike heterotrophic prokaryotes, cyanobacteria have internal membranes. These are flattened sacs called thylakoids, where photosynthesis is performed. Photoautotrophic eukaryotes such as red algae, green algae and plants perform photosynthesis in chlorophyllic organelles that are thought to have their ancestry in cyanobacteria, acquired long ago via endosymbiosis. These endosymbiont cyanobacteria in eukaryotes then evolved and differentiated into specialized organelles such as chloroplasts, chromoplasts, etioplasts, and leucoplasts, collectively known as plastids. Sericytochromatia, the proposed name of the paraphyletic and most basal group, is the ancestor of both the non-photosynthetic group Melainabacteria and the photosynthetic cyanobacteria, also called Oxyphotobacteria. The cyanobacteria Synechocystis and Cyanothece are important model organisms with potential applications in biotechnology for bioethanol production, food colorings, as a source of human and animal food, dietary supplements and raw materials. Cyanobacteria produce a range of toxins known as cyanotoxins that can cause harmful health effects in humans and animals. Overview Cyanobacteria are a very large and diverse phylum of photosynthetic prokaryotes. They are defined by their unique combination of pigments and their ability to perform oxygenic photosynthesis. They often live in colonial aggregates that can take on a multitude of forms. 
Of particular interest are the filamentous species, which often dominate the upper layers of microbial mats found in extreme environments such as hot springs, hypersaline water, deserts and the polar regions, but are also widely distributed in more mundane environments. They are evolutionarily optimized for environmental conditions of low oxygen. Some species are nitrogen-fixing and live in a wide variety of moist soils and water, either freely or in a symbiotic relationship with plants or lichen-forming fungi (as in the lichen genus Peltigera). Cyanobacteria are globally widespread photosynthetic prokaryotes and are major contributors to global biogeochemical cycles. They are the only oxygenic photosynthetic prokaryotes, and prosper in diverse and extreme habitats. They are among the oldest organisms on Earth, with fossil records dating back at least 2.1 billion years. Since then, cyanobacteria have been essential players in the Earth's ecosystems. Planktonic cyanobacteria are a fundamental component of marine food webs and are major contributors to global carbon and nitrogen fluxes. Some cyanobacteria form harmful algal blooms causing the disruption of aquatic ecosystem services and intoxication of wildlife and humans by the production of powerful toxins (cyanotoxins) such as microcystins, saxitoxin, and cylindrospermopsin. Nowadays, cyanobacterial blooms pose a serious threat to aquatic environments and public health, and are increasing in frequency and magnitude globally. Cyanobacteria are ubiquitous in marine environments and play important roles as primary producers. They are part of the marine phytoplankton, which currently contributes almost half of the Earth's total primary production. About 25% of the global marine primary production is contributed by cyanobacteria. Within the cyanobacteria, only a few lineages colonized the open ocean: Crocosphaera and relatives, cyanobacterium UCYN-A, Trichodesmium, as well as Prochlorococcus and Synechococcus. Of these lineages, nitrogen-fixing cyanobacteria are particularly important because they exert a control on primary productivity and the export of organic carbon to the deep ocean, by converting nitrogen gas into ammonium, which is later used to make amino acids and proteins. Marine picocyanobacteria (Prochlorococcus and Synechococcus) numerically dominate most phytoplankton assemblages in modern oceans, contributing importantly to primary productivity. While some planktonic cyanobacteria are unicellular and free-living cells (e.g., Crocosphaera, Prochlorococcus, Synechococcus), others have established symbiotic relationships with haptophyte algae, such as coccolithophores. Amongst the filamentous forms, Trichodesmium are free-living and form aggregates. However, filamentous heterocyst-forming cyanobacteria (e.g., Richelia, Calothrix) are found in association with diatoms such as Hemiaulus, Rhizosolenia and Chaetoceros. Marine cyanobacteria include the smallest known photosynthetic organisms. The smallest of all, Prochlorococcus, is just 0.5 to 0.8 micrometres across. In terms of numbers of individuals, Prochlorococcus is possibly the most plentiful genus on Earth: a single millilitre of surface seawater can contain 100,000 cells of this genus or more. Worldwide there are estimated to be several octillion (10²⁷, a billion billion billion) individuals. Prochlorococcus is ubiquitous between latitudes 40°N and 40°S, and dominates in the oligotrophic (nutrient-poor) regions of the oceans. 
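The octillion-scale estimate is consistent with the quoted cell density, as a rough order-of-magnitude check shows. The figures for ocean surface area and sunlit-layer depth in the sketch below are assumed round numbers, not values from the text.

```python
# Order-of-magnitude check of the "several octillion" Prochlorococcus estimate.
# Assumed (not from the text): ~3.6e14 m^2 of ocean surface and a ~100 m
# sunlit layer; the ~1e5 cells/mL density is the figure quoted above.
surface_area_m2 = 3.6e14
sunlit_depth_m = 100.0
cells_per_ml = 1e5

volume_ml = surface_area_m2 * sunlit_depth_m * 1e6  # 1 m^3 = 1e6 mL
total_cells = volume_ml * cells_per_ml
print(f"{total_cells:.1e} cells")  # ~3.6e+27, i.e. a few octillion
```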
The bacterium accounts for about 20% of the oxygen in the Earth's atmosphere. Morphology Cyanobacteria are variable in morphology, ranging from unicellular and filamentous to colonial forms. Filamentous forms exhibit functional cell differentiation such as heterocysts (for nitrogen fixation), akinetes (resting stage cells), and hormogonia (reproductive, motile filaments). These, together with the intercellular connections they possess, are considered the first signs of multicellularity. Many cyanobacteria form motile filaments of cells, called hormogonia, that travel away from the main biomass to bud and form new colonies elsewhere. The cells in a hormogonium are often thinner than in the vegetative state, and the cells on either end of the motile chain may be tapered. To break away from the parent colony, a hormogonium often must tear apart a weaker cell in a filament, called a necridium. Some filamentous species can differentiate into several different cell types: vegetative cells – the normal, photosynthetic cells that are formed under favorable growing conditions; akinetes – climate-resistant spores that may form when environmental conditions become harsh; and thick-walled heterocysts – which contain the enzyme nitrogenase, vital for nitrogen fixation in an anaerobic environment due to its sensitivity to oxygen. Each individual cell (each single cyanobacterium) typically has a thick, gelatinous cell wall. They lack flagella, but hormogonia of some species can move about by gliding along surfaces. Many of the multicellular filamentous forms of Oscillatoria are capable of a waving motion; the filament oscillates back and forth. In water columns, some cyanobacteria float by forming gas vesicles, as in archaea. These vesicles are not organelles as such. They are not bounded by lipid membranes, but by a protein sheath. Nitrogen fixation Some cyanobacteria can fix atmospheric nitrogen in anaerobic conditions by means of specialized cells called heterocysts. Heterocysts may also form under the appropriate environmental conditions (anoxic) when fixed nitrogen is scarce. Heterocyst-forming species are specialized for nitrogen fixation and are able to fix nitrogen gas into ammonia (NH3), nitrites (NO2−) or nitrates (NO3−), which can be absorbed by plants and converted to protein and nucleic acids (atmospheric nitrogen is not bioavailable to plants, except for those having endosymbiotic nitrogen-fixing bacteria, especially the family Fabaceae, among others). Free-living cyanobacteria are present in the water of rice paddies, and cyanobacteria can be found growing as epiphytes on the surfaces of the green alga, Chara, where they may fix nitrogen. Cyanobacteria such as Anabaena (a symbiont of the aquatic fern Azolla) can provide rice plantations with biofertilizer. Photosynthesis Carbon fixation Cyanobacteria use the energy of sunlight to drive photosynthesis, a process where the energy of light is used to synthesize organic compounds from carbon dioxide. Because they are aquatic organisms, they typically employ several strategies which are collectively known as a "CO2 concentrating mechanism" to aid in the acquisition of inorganic carbon (CO2 or bicarbonate). Among the more specific strategies is the widespread prevalence of the bacterial microcompartments known as carboxysomes, which co-operate with active transporters of CO2 and bicarbonate, in order to accumulate bicarbonate into the cytoplasm of the cell. 
Carboxysomes are icosahedral structures composed of hexameric shell proteins that assemble into cage-like structures that can be several hundreds of nanometres in diameter. It is believed that these structures tether the CO2-fixing enzyme, RuBisCO, to the interior of the shell, as well as the enzyme carbonic anhydrase, using metabolic channeling to enhance the local CO2 concentrations and thus increase the efficiency of the RuBisCO enzyme. Electron transport In contrast to purple bacteria and other bacteria performing anoxygenic photosynthesis, thylakoid membranes of cyanobacteria are not continuous with the plasma membrane but are separate compartments. The photosynthetic machinery is embedded in the thylakoid membranes, with phycobilisomes acting as light-harvesting antennae attached to the membrane, giving the green pigmentation observed (with wavelengths from 450 nm to 660 nm) in most cyanobacteria. While most of the high-energy electrons derived from water are used by the cyanobacterial cells for their own needs, a fraction of these electrons may be donated to the external environment via electrogenic activity. Respiration Respiration in cyanobacteria can occur in the thylakoid membrane alongside photosynthesis, with their photosynthetic electron transport sharing the same compartment as the components of respiratory electron transport. While the goal of photosynthesis is to store energy by building carbohydrates from CO2, respiration is the reverse of this, with carbohydrates turned back into CO2 accompanying energy release. Cyanobacteria appear to separate these two processes with their plasma membrane containing only components of the respiratory chain, while the thylakoid membrane hosts an interlinked respiratory and photosynthetic electron transport chain. Cyanobacteria use electrons from succinate dehydrogenase rather than from NADPH for respiration. Cyanobacteria only respire during the night (or in the dark) because the facilities used for electron transport are used in reverse for photosynthesis while in the light. Electron transport chain Many cyanobacteria are able to reduce nitrogen and carbon dioxide under aerobic conditions, a fact that may be responsible for their evolutionary and ecological success. The water-oxidizing photosynthesis is accomplished by coupling the activity of photosystem (PS) II and I (Z-scheme). In contrast to green sulfur bacteria which only use one photosystem, the use of water as an electron donor is energetically demanding, requiring two photosystems. Attached to the thylakoid membrane, phycobilisomes act as light-harvesting antennae for the photosystems. The phycobilisome components (phycobiliproteins) are responsible for the blue-green pigmentation of most cyanobacteria. The variations on this theme are due mainly to carotenoids and phycoerythrins that give the cells their red-brownish coloration. In some cyanobacteria, the color of light influences the composition of the phycobilisomes. In green light, the cells accumulate more phycoerythrin, which absorbs green light, whereas in red light they produce more phycocyanin which absorbs red. Thus, these bacteria can change from brick-red to bright blue-green depending on whether they are exposed to green light or to red light. This process of "complementary chromatic adaptation" is a way for the cells to maximize the use of available light for photosynthesis. A few genera lack phycobilisomes and have chlorophyll b instead (Prochloron, Prochlorococcus, Prochlorothrix). 
These were originally grouped together as the prochlorophytes or chloroxybacteria, but appear to have developed in several different lines of cyanobacteria. For this reason, they are now considered as part of the cyanobacterial group. Metabolism In general, photosynthesis in cyanobacteria uses water as an electron donor and produces oxygen as a byproduct, though some may also use hydrogen sulfide, a process which occurs among other photosynthetic bacteria such as the purple sulfur bacteria. Carbon dioxide is reduced to form carbohydrates via the Calvin cycle. The large amounts of oxygen in the atmosphere are considered to have been first created by the activities of ancient cyanobacteria. They are often found as symbionts with a number of other groups of organisms such as fungi (lichens), corals, pteridophytes (Azolla), angiosperms (Gunnera), etc. The carbon metabolism of cyanobacteria includes the incomplete Krebs cycle, the pentose phosphate pathway, and glycolysis. There are some groups capable of heterotrophic growth, while others are parasitic, causing diseases in invertebrates or algae (e.g., the black band disease). Ecology Cyanobacteria can be found in almost every terrestrial and aquatic habitat – oceans, fresh water, damp soil, temporarily moistened rocks in deserts, bare rock and soil, and even Antarctic rocks. They can occur as planktonic cells or form phototrophic biofilms. They are found inside stones and shells (in endolithic ecosystems). A few are endosymbionts in lichens, plants, various protists, or sponges and provide energy for the host. Some live in the fur of sloths, providing a form of camouflage. Aquatic cyanobacteria are known for their extensive and highly visible blooms that can form in both freshwater and marine environments. The blooms can have the appearance of blue-green paint or scum. These blooms can be toxic, and frequently lead to the closure of recreational waters when spotted. Marine bacteriophages are significant parasites of unicellular marine cyanobacteria. Cyanobacterial growth is favoured in ponds and lakes where waters are calm and have little turbulent mixing. Their lifecycles are disrupted when the water naturally or artificially mixes from churning currents caused by the flowing water of streams or the churning water of fountains. For this reason, blooms of cyanobacteria seldom occur in rivers unless the water is flowing slowly. Growth is also favoured at higher temperatures which enable Microcystis species to outcompete diatoms and green algae, and potentially allow development of toxins. Based on environmental trends, models and observations suggest cyanobacteria will likely increase their dominance in aquatic environments. This can lead to serious consequences, particularly the contamination of sources of drinking water. Researchers, including Linda Lawton at Robert Gordon University, have developed techniques to study these blooms. Cyanobacteria can interfere with water treatment in various ways, primarily by plugging filters (often large beds of sand and similar media) and by producing cyanotoxins, which have the potential to cause serious illness if consumed. Consequences may also lie within fisheries and waste management practices. Anthropogenic eutrophication, rising temperatures, vertical stratification and increased atmospheric carbon dioxide are contributors to the increasing dominance of cyanobacteria in aquatic ecosystems. Cyanobacteria have been found to play an important role in terrestrial habitats and organism communities. 
It has been widely reported that cyanobacteria soil crusts help to stabilize soil to prevent erosion and retain water. An example of a cyanobacterial species that does so is Microcoleus vaginatus. M. vaginatus stabilizes soil using a polysaccharide sheath that binds to sand particles and absorbs water. M. vaginatus also makes a significant contribution to the cohesion of biological soil crust. Some of these organisms contribute significantly to global ecology and the oxygen cycle. The tiny marine cyanobacterium Prochlorococcus was discovered in 1986 and accounts for more than half of the photosynthesis of the open ocean. Circadian rhythms were once thought to only exist in eukaryotic cells but many cyanobacteria display a bacterial circadian rhythm. "Cyanobacteria are arguably the most successful group of microorganisms on earth. They are the most genetically diverse; they occupy a broad range of habitats across all latitudes, widespread in freshwater, marine, and terrestrial ecosystems, and they are found in the most extreme niches such as hot springs, salt works, and hypersaline bays. Photoautotrophic, oxygen-producing cyanobacteria created the conditions in the planet's early atmosphere that directed the evolution of aerobic metabolism and eukaryotic photosynthesis. Cyanobacteria fulfill vital ecological functions in the world's oceans, being important contributors to global carbon and nitrogen budgets." – Stewart and Falconer Cyanobionts Some cyanobacteria, the so-called cyanobionts (cyanobacterial symbionts), have a symbiotic relationship with other organisms, both unicellular and multicellular. As illustrated on the right, there are many examples of cyanobacteria interacting symbiotically with land plants. Cyanobacteria can enter the plant through the stomata and colonize the intercellular space, forming loops and intracellular coils. Anabaena spp. colonize the roots of wheat and cotton plants. Calothrix sp. has also been found on the root system of wheat. Monocots, such as wheat and rice, have been colonised by Nostoc spp., In 1991, Ganther and others isolated diverse heterocystous nitrogen-fixing cyanobacteria, including Nostoc, Anabaena and Cylindrospermum, from plant root and soil. Assessment of wheat seedling roots revealed two types of association patterns: loose colonization of root hair by Anabaena and tight colonization of the root surface within a restricted zone by Nostoc. The relationships between cyanobionts (cyanobacterial symbionts) and protistan hosts are particularly noteworthy, as some nitrogen-fixing cyanobacteria (diazotrophs) play an important role in primary production, especially in nitrogen-limited oligotrophic oceans. Cyanobacteria, mostly pico-sized Synechococcus and Prochlorococcus, are ubiquitously distributed and are the most abundant photosynthetic organisms on Earth, accounting for a quarter of all carbon fixed in marine ecosystems. In contrast to free-living marine cyanobacteria, some cyanobionts are known to be responsible for nitrogen fixation rather than carbon fixation in the host. However, the physiological functions of most cyanobionts remain unknown. Cyanobionts have been found in numerous protist groups, including dinoflagellates, tintinnids, radiolarians, amoebae, diatoms, and haptophytes. Among these cyanobionts, little is known regarding the nature (e.g., genetic diversity, host or cyanobiont specificity, and cyanobiont seasonality) of the symbiosis involved, particularly in relation to dinoflagellate host. 
Collective behaviour Some cyanobacteria – even single-celled ones – show striking collective behaviours and form colonies (or blooms) that can float on water and have important ecological roles. For instance, billions of years ago, communities of marine Paleoproterozoic cyanobacteria could have helped create the biosphere as we know it by burying carbon compounds and allowing the initial build-up of oxygen in the atmosphere. On the other hand, toxic cyanobacterial blooms are an increasing issue for society, as their toxins can be harmful to animals. Extreme blooms can also deplete water of oxygen and reduce the penetration of sunlight and visibility, thereby compromising the feeding and mating behaviour of light-reliant species. As shown in the diagram on the right, bacteria can stay in suspension as individual cells, adhere collectively to surfaces to form biofilms, passively sediment, or flocculate to form suspended aggregates. Cyanobacteria are able to produce sulphated polysaccharides (yellow haze surrounding clumps of cells) that enable them to form floating aggregates. In 2021, Maeda et al. discovered that oxygen produced by cyanobacteria becomes trapped in the network of polysaccharides and cells, enabling the microorganisms to form buoyant blooms. It is thought that specific protein fibres known as pili (represented as lines radiating from the cells) may act as an additional way to link cells to each other or onto surfaces. Some cyanobacteria also use sophisticated intracellular gas vesicles as floatation aids. The diagram on the left above shows a proposed model of microbial distribution, spatial organization, carbon and O2 cycling in clumps and adjacent areas. (a) Clumps contain denser cyanobacterial filaments and heterotrophic microbes. The initial differences in density depend on cyanobacterial motility and can be established over short timescales. Darker blue color outside of the clump indicates higher oxygen concentrations in areas adjacent to clumps. Oxic media increase the reversal frequencies of any filaments that begin to leave the clumps, thereby reducing the net migration away from the clump. This enables the persistence of the initial clumps over short timescales; (b) Spatial coupling between photosynthesis and respiration in clumps. Oxygen produced by cyanobacteria diffuses into the overlying medium or is used for aerobic respiration. Dissolved inorganic carbon (DIC) diffuses into the clump from the overlying medium and is also produced within the clump by respiration. In oxic solutions, high O2 concentrations reduce the efficiency of CO2 fixation and result in the excretion of glycolate. Under these conditions, clumping can be beneficial to cyanobacteria if it stimulates the retention of carbon and the assimilation of inorganic carbon by cyanobacteria within clumps. This effect appears to promote the accumulation of particulate organic carbon (cells, sheaths and heterotrophic organisms) in clumps. It has been unclear why and how cyanobacteria form communities. Aggregation must divert resources away from the core business of making more cyanobacteria, as it generally involves the production of copious quantities of extracellular material. In addition, cells in the centre of dense aggregates can also suffer from both shading and shortage of nutrients. So, what advantage does this communal life bring for cyanobacteria? New insights into how cyanobacteria form blooms have come from a 2021 study on the cyanobacterium Synechocystis. 
These use a set of genes that regulate the production and export of sulphated polysaccharides, chains of sugar molecules modified with sulphate groups that can often be found in marine algae and animal tissue. Many bacteria generate extracellular polysaccharides, but sulphated ones have only been seen in cyanobacteria. In Synechocystis, these sulphated polysaccharides help the cyanobacterium form buoyant aggregates by trapping oxygen bubbles in the slimy web of cells and polysaccharides. Previous studies on Synechocystis have shown that type IV pili, which decorate the surface of cyanobacteria, also play a role in forming blooms. These retractable and adhesive protein fibres are important for motility, adhesion to substrates and DNA uptake. The formation of blooms may require both type IV pili and Synechan – for example, the pili may help to export the polysaccharide outside the cell. Indeed, the activity of these protein fibres may be connected to the production of extracellular polysaccharides in filamentous cyanobacteria. A more obvious answer would be that pili help to build the aggregates by binding the cells with each other or with the extracellular polysaccharide. As with other kinds of bacteria, certain components of the pili may allow cyanobacteria from the same species to recognise each other and make initial contacts, which are then stabilised by building a mass of extracellular polysaccharide. The bubble flotation mechanism identified by Maeda et al. joins a range of known strategies that enable cyanobacteria to control their buoyancy, such as using gas vesicles or accumulating carbohydrate ballasts. Type IV pili on their own could also control the position of marine cyanobacteria in the water column by regulating viscous drag. Extracellular polysaccharide appears to be a multipurpose asset for cyanobacteria, from floatation device to food storage, defence mechanism and mobility aid. Cellular death One of the most critical processes determining cyanobacterial eco-physiology is cellular death. Evidence supports the existence of controlled cellular demise in cyanobacteria, and various forms of cell death have been described as a response to biotic and abiotic stresses. However, cell death research in cyanobacteria is a relatively young field and understanding of the underlying mechanisms and molecular machinery underpinning this fundamental process remains largely elusive. Nevertheless, reports on cell death of marine and freshwater cyanobacteria indicate this process has major implications for the ecology of microbial communities. Different forms of cell demise have been observed in cyanobacteria under several stressful conditions, and cell death has been suggested to play a key role in developmental processes, such as akinete and heterocyst differentiation, as well as a strategy for population survival. Cyanophages Cyanophages are viruses that infect cyanobacteria. Cyanophages can be found in both freshwater and marine environments. Marine and freshwater cyanophages have icosahedral heads, which contain double-stranded DNA, attached to a tail by connector proteins. The size of the head and tail varies among species of cyanophages. Cyanophages, like other bacteriophages, rely on Brownian motion to collide with bacteria, and then use receptor binding proteins to recognize cell surface proteins, which leads to adherence. Viruses with contractile tails then rely on receptors found on their tails to recognize highly conserved proteins on the surface of the host cell. 
Cyanophages infect a wide range of cyanobacteria and are key regulators of the cyanobacterial populations in aquatic environments, and may aid in the prevention of cyanobacterial blooms in freshwater and marine ecosystems. These blooms can pose a danger to humans and other animals, particularly in eutrophic freshwater lakes. Infection by these viruses is highly prevalent in cells belonging to Synechococcus spp. in marine environments, where up to 5% of cells belonging to marine cyanobacterial cells have been reported to contain mature phage particles. The first cyanophage, LPP-1, was discovered in 1963. Cyanophages are classified within the bacteriophage families Myoviridae (e.g. AS-1, N-1), Podoviridae (e.g. LPP-1) and Siphoviridae (e.g. S-1). Movement It has long been known that filamentous cyanobacteria perform surface motions, and that these movements result from type IV pili. Additionally, Synechococcus, a marine cyanobacteria, is known to swim at a speed of 25 μm/s by a mechanism different to that of bacterial flagella. Formation of waves on the cyanobacteria surface is thought to push surrounding water backwards. Cells are known to be motile by a gliding method and a novel uncharacterized, non-phototactic swimming method that does not involve flagellar motion. Many species of cyanobacteria are capable of gliding. Gliding is a form of cell movement that differs from crawling or swimming in that it does not rely on any obvious external organ or change in cell shape and it occurs only in the presence of a substrate. Gliding in filamentous cyanobacteria appears to be powered by a "slime jet" mechanism, in which the cells extrude a gel that expands quickly as it hydrates providing a propulsion force, although some unicellular cyanobacteria use type IV pili for gliding. Cyanobacteria have strict light requirements. Too little light can result in insufficient energy production, and in some species may cause the cells to resort to heterotrophic respiration. Too much light can inhibit the cells, decrease photosynthesis efficiency and cause damage by bleaching. UV radiation is especially deadly for cyanobacteria, with normal solar levels being significantly detrimental for these microorganisms in some cases. Filamentous cyanobacteria that live in microbial mats often migrate vertically and horizontally within the mat in order to find an optimal niche that balances their light requirements for photosynthesis against their sensitivity to photodamage. For example, the filamentous cyanobacteria Oscillatoria sp. and Spirulina subsalsa found in the hypersaline benthic mats of Guerrero Negro, Mexico migrate downwards into the lower layers during the day in order to escape the intense sunlight and then rise to the surface at dusk. In contrast, the population of Microcoleus chthonoplastes found in hypersaline mats in Camargue, France migrate to the upper layer of the mat during the day and are spread homogeneously through the mat at night. An in vitro experiment using Phormidium uncinatum also demonstrated this species' tendency to migrate in order to avoid damaging radiation. These migrations are usually the result of some sort of photomovement, although other forms of taxis can also play a role. Photomovement – the modulation of cell movement as a function of the incident light – is employed by the cyanobacteria as a means to find optimal light conditions in their environment. There are three types of photomovement: photokinesis, phototaxis and photophobic responses. 
Photokinetic microorganisms modulate their gliding speed according to the incident light intensity. For example, the speed with which Phormidium autumnale glides increases linearly with the incident light intensity. Phototactic microorganisms move according to the direction of the light within the environment, such that positively phototactic species will tend to move roughly parallel to the light and towards the light source. Species such as Phormidium uncinatum cannot steer directly towards the light, but rely on random collisions to orient themselves in the right direction, after which they tend to move more towards the light source. Others, such as Anabaena variabilis, can steer by bending the trichome. Finally, photophobic microorganisms respond to spatial and temporal light gradients. A step-up photophobic reaction occurs when an organism enters a brighter area field from a darker one and then reverses direction, thus avoiding the bright light. The opposite reaction, called a step-down reaction, occurs when an organism enters a dark area from a bright area and then reverses direction, thus remaining in the light. Evolution Earth history Stromatolites are layered biochemical accretionary structures formed in shallow water by the trapping, binding, and cementation of sedimentary grains by biofilms (microbial mats) of microorganisms, especially cyanobacteria. During the Precambrian, stromatolite communities of microorganisms grew in most marine and non-marine environments in the photic zone. After the Cambrian explosion of marine animals, grazing on the stromatolite mats by herbivores greatly reduced the occurrence of the stromatolites in marine environments. Since then, they are found mostly in hypersaline conditions where grazing invertebrates cannot live (e.g. Shark Bay, Western Australia). Stromatolites provide ancient records of life on Earth by fossil remains which date from 3.5 Ga ago. The oldest undisputed evidence of cyanobacteria is dated to be 2.1 Ga ago, but there is some evidence for them as far back as 2.7 Ga ago. Cyanobacteria might have also emerged 3.5 Ga ago. Oxygen concentrations in the atmosphere remained around or below 0.001% of today's level until 2.4 Ga ago (the Great Oxygenation Event). The rise in oxygen may have caused a fall in the concentration of atmospheric methane, and triggered the Huronian glaciation from around 2.4 to 2.1 Ga ago. In this way, cyanobacteria may have killed off most of the other bacteria of the time. Oncolites are sedimentary structures composed of oncoids, which are layered structures formed by cyanobacterial growth. Oncolites are similar to stromatolites, but instead of forming columns, they form approximately spherical structures that were not attached to the underlying substrate as they formed. The oncoids often form around a central nucleus, such as a shell fragment, and a calcium carbonate structure is deposited by encrusting microbes. Oncolites are indicators of warm waters in the photic zone, but are also known in contemporary freshwater environments. These structures rarely exceed 10 cm in diameter. One former classification scheme of cyanobacterial fossils divided them into the porostromata and the spongiostromata. These are now recognized as form taxa and considered taxonomically obsolete; however, some authors have advocated for the terms remaining informally to describe form and structure of bacterial fossils. 
Origin of photosynthesis Oxygenic photosynthesis only evolved once (in prokaryotic cyanobacteria), and all photosynthetic eukaryotes (including all plants and algae) have acquired this ability from endosymbiosis with cyanobacteria or their endosymbiont hosts. In other words, all the oxygen that makes the atmosphere breathable for aerobic organisms originally comes from cyanobacteria or their plastid descendants. Cyanobacteria remained the principal primary producers throughout the latter half of the Archean eon and most of the Proterozoic eon, in part because the redox structure of the oceans favored photoautotrophs capable of nitrogen fixation. However, their population is argued to have varied considerably across this eon. Archaeplastids such as green and red algae eventually surpassed cyanobacteria as major primary producers on continental shelves near the end of the Neoproterozoic, but only with the Mesozoic (251–65 Ma) radiations of secondary photoautotrophs such as dinoflagellates, coccolithophorids and diatoms did primary production in marine shelf waters take modern form. Cyanobacteria remain critical to marine ecosystems as primary producers in oceanic gyres, as agents of biological nitrogen fixation, and, in modified form, as the plastids of marine algae. Origin of chloroplasts Primary chloroplasts are cell organelles found in some eukaryotic lineages, where they are specialized in performing photosynthesis. They are considered to have evolved from endosymbiotic cyanobacteria. After some years of debate, it is now generally accepted that the three major groups of primary endosymbiotic eukaryotes (i.e. green plants, red algae and glaucophytes) form one large monophyletic group called Archaeplastida, which evolved after one unique endosymbiotic event. The morphological similarity between chloroplasts and cyanobacteria was first reported by German botanist Andreas Franz Wilhelm Schimper in the 19th century. Chloroplasts are found only in plants and algae, an observation that paved the way for Russian biologist Konstantin Mereschkowski to suggest in 1905 the symbiogenic origin of the plastid. Lynn Margulis brought this hypothesis back to attention more than 60 years later, but the idea did not become fully accepted until supplementary data started to accumulate. The cyanobacterial origin of plastids is now supported by various pieces of phylogenetic, genomic, biochemical and structural evidence. The description of another independent and more recent primary endosymbiosis event between a cyanobacterium and a separate eukaryote lineage (the rhizarian Paulinella chromatophora) also gives credibility to the endosymbiotic origin of the plastids. In addition to this primary endosymbiosis, many eukaryotic lineages have been subject to secondary or even tertiary endosymbiotic events, that is the "Matryoshka-like" engulfment by a eukaryote of another plastid-bearing eukaryote. Chloroplasts have many similarities with cyanobacteria, including a circular chromosome, prokaryotic-type ribosomes, and similar proteins in the photosynthetic reaction center. The endosymbiotic theory suggests that photosynthetic bacteria were acquired (by endocytosis) by early eukaryotic cells to form the first plant cells. Therefore, chloroplasts may be photosynthetic bacteria that adapted to life inside plant cells. Like mitochondria, chloroplasts still possess their own DNA, separate from the nuclear DNA of their plant host cells, and the genes in this chloroplast DNA resemble those in cyanobacteria. 
DNA in chloroplasts codes for redox proteins such as photosynthetic reaction centers. The CoRR hypothesis proposes this co-location is required for redox regulation. Marine origins Cyanobacteria have fundamentally transformed the geochemistry of the planet. Multiple lines of geochemical evidence support the occurrence of intervals of profound global environmental change at the beginning and end of the Proterozoic (2,500–542 Mya). While it is widely accepted that the presence of molecular oxygen in the early fossil record was the result of cyanobacteria activity, little is known about how cyanobacteria evolution (e.g., habitat preference) may have contributed to changes in biogeochemical cycles through Earth history. Geochemical evidence has indicated that there was a first step-increase in the oxygenation of the Earth's surface, which is known as the Great Oxidation Event (GOE), in the early Paleoproterozoic (2,500–1,600 Mya). A second but much steeper increase in oxygen levels, known as the Neoproterozoic Oxygenation Event (NOE), occurred at around 800 to 500 Mya. Recent chromium isotope data point to low levels of atmospheric oxygen in the Earth's surface during the mid-Proterozoic, which is consistent with the late evolution of marine planktonic cyanobacteria during the Cryogenian; both types of evidence help explain the late emergence and diversification of animals. Understanding the evolution of planktonic cyanobacteria is important because their origin fundamentally transformed the nitrogen and carbon cycles towards the end of the Pre-Cambrian. It remains unclear, however, what evolutionary events led to the emergence of open-ocean planktonic forms within cyanobacteria and how these events relate to geochemical evidence during the Pre-Cambrian. So far, it seems that ocean geochemistry (e.g., euxinic conditions during the early- to mid-Proterozoic) and nutrient availability likely contributed to the apparent delay in diversification and widespread colonization of open ocean environments by planktonic cyanobacteria during the Neoproterozoic. Genetics Cyanobacteria are capable of natural genetic transformation. Natural genetic transformation is the genetic alteration of a cell resulting from the direct uptake and incorporation of exogenous DNA from its surroundings. For bacterial transformation to take place, the recipient bacteria must be in a state of competence, which may occur in nature as a response to conditions such as starvation, high cell density or exposure to DNA damaging agents. In chromosomal transformation, homologous transforming DNA can be integrated into the recipient genome by homologous recombination, and this process appears to be an adaptation for repairing DNA damage. DNA repair Cyanobacteria are challenged by environmental stresses and internally generated reactive oxygen species that cause DNA damage. Cyanobacteria possess numerous E. coli-like DNA repair genes. Several DNA repair genes are highly conserved in cyanobacteria, even in small genomes, suggesting that core DNA repair processes such as recombinational repair, nucleotide excision repair and methyl-directed DNA mismatch repair are common among cyanobacteria. 
Classification Phylogeny Taxonomy Historically, bacteria were first classified as plants constituting the class Schizomycetes, which along with the Schizophyceae (blue-green algae/Cyanobacteria) formed the phylum Schizophyta, then in the phylum Monera in the kingdom Protista by Haeckel in 1866, comprising Protogens, Protamaeba, Vampyrella, Protomonae, and Vibrio, but not Nostoc and other cyanobacteria, which were classified with algae, later reclassified as the Prokaryotes by Chatton. The cyanobacteria were traditionally classified by morphology into five sections, referred to by the numerals I–V. The first three – Chroococcales, Pleurocapsales, and Oscillatoriales – are not supported by phylogenetic studies. The latter two – Nostocales and Stigonematales – are monophyletic as a unit, and make up the heterocystous cyanobacteria. The members of Chroococales are unicellular and usually aggregate in colonies. The classic taxonomic criterion has been the cell morphology and the plane of cell division. In Pleurocapsales, the cells have the ability to form internal spores (baeocytes). The rest of the sections include filamentous species. In Oscillatoriales, the cells are uniseriately arranged and do not form specialized cells (akinetes and heterocysts). In Nostocales and Stigonematales, the cells have the ability to develop heterocysts in certain conditions. Stigonematales, unlike Nostocales, include species with truly branched trichomes. Most taxa included in the phylum or division Cyanobacteria have not yet been validly published under The International Code of Nomenclature of Prokaryotes (ICNP) and are instead validly published under the International Code of Nomenclature for algae, fungi, and plants. These exceptions are validly published under ICNP: The families Prochloraceae and Prochlorotrichaceae The genera Halospirulina, Planktothricoides, Prochlorococcus, Prochloron, and Prochlorothrix Formerly, some bacteria, like Beggiatoa, were thought to be colorless Cyanobacteria. The currently accepted taxonomy as of 2025 is based on the List of Prokaryotic names with Standing in Nomenclature (LPSN) and National Center for Biotechnology Information (NCBI). Class Cyanophyceae Order Acaryochloridales Miyashita et al. 2003 ex Strunecký & Mareš 2022 Family Thermosynechococcaceae Order Aegeococcales Strunecký & Mareš 2022 Order Chroococcales (synonyms Pleurocapsales, Cyanobacteriales) Order Chroococcidiopsidales Order Coleofasciculales Order Desertifilales Order Geitlerinematales Strunecký & Mareš 2022 Order Gloeobacterales Order Gloeomargaritales Moreira et al. 2016 Order Gomontiellales Order Graniferales Order Leptolyngbyales (synonym Phormidesmiales) Strunecký & Mareš 2022 Order Nodosilineales Strunecký & Mareš 2022 Order Nostocales (synonym Stigonematales) Order Oculatellales (synonym Elainellales) Strunecký & Mareš 2022 Order Oscillatoriales Order Pelonematales Order Prochlorotrichales Strunecký & Mareš 2022 (PCC-9006) Family Prochlorococcaceae Komárek & Strunecky 2020 {"PCC-6307"} Order Sarmaellales Order Spirulinales Order Synechococcales (synonym Pseudanabaenales) Hoffmann, Komárek & Kastovsky 2005 Order Thermostichales Komárek & Strunecký 2020 Class Vampirovibrionophyceae Order Vampirovibrionales Relation to humans Biotechnology The unicellular cyanobacterium Synechocystis sp. PCC6803 was the third prokaryote and first photosynthetic organism whose genome was completely sequenced. It continues to be an important model organism. 
Crocosphaera subtropica ATCC 51142 is an important diazotrophic model organism. The smallest genomes of a photosynthetic organism have been found in Prochlorococcus spp. (1.7 Mb) and the largest in Nostoc punctiforme (9 Mb). Those of Calothrix spp. are estimated at 12–15 Mb, as large as yeast. Recent research has suggested the potential application of cyanobacteria to the generation of renewable energy by directly converting sunlight into electricity. Internal photosynthetic pathways can be coupled to chemical mediators that transfer electrons to external electrodes. In the shorter term, efforts are underway to commercialize algae-based fuels such as diesel, gasoline, and jet fuel. Cyanobacteria have also been engineered to produce ethanol, and experiments have shown that when one or two CBB genes are overexpressed, the yield can be even higher. Cyanobacteria may possess the ability to produce substances that could one day serve as anti-inflammatory agents and combat bacterial infections in humans. Cyanobacteria's photosynthetic output of sugar and oxygen has been demonstrated to have therapeutic value in rats with heart attacks. While cyanobacteria can naturally produce various secondary metabolites, they can serve as advantageous hosts for the production of plant-derived metabolites owing to biotechnological advances in systems biology and synthetic biology. Spirulina's extracted blue color is used as a natural food coloring. Researchers from several space agencies argue that cyanobacteria could be used for producing goods for human consumption in future crewed outposts on Mars, by transforming materials available on this planet. Human nutrition Some cyanobacteria are sold as food, notably Arthrospira platensis (Spirulina), Aphanizomenon flos-aquae (Klamath Lake AFA), and others. Some microalgae contain substances of high biological value, such as polyunsaturated fatty acids, amino acids, proteins, pigments, antioxidants, vitamins, and minerals. Edible blue-green algae reduce the production of pro-inflammatory cytokines by inhibiting the NF-κB pathway in macrophages and splenocytes. Sulfated polysaccharides exhibit immunomodulatory, antitumor, antithrombotic, anticoagulant, anti-mutagenic, anti-inflammatory, antimicrobial, and even antiviral activity against HIV, herpes, and hepatitis. Health risks Some cyanobacteria can produce neurotoxins, cytotoxins, endotoxins, and hepatotoxins (e.g., the microcystin-producing genus Microcystis), which are collectively known as cyanotoxins. Specific toxins include anatoxin-a, guanitoxin, aplysiatoxin, cyanopeptolin, cylindrospermopsin, domoic acid, nodularin R (from Nodularia), neosaxitoxin, and saxitoxin. Cyanobacteria reproduce explosively under certain conditions. This results in algal blooms which can become harmful to other species and pose a danger to humans and animals if the cyanobacteria involved produce toxins. Several cases of human poisoning have been documented, but a lack of knowledge prevents an accurate assessment of the risks; research by Linda Lawton, FRSE, at Robert Gordon University, Aberdeen, and collaborators spans 30 years of examining the phenomenon and methods of improving water safety. Recent studies suggest that significant exposure to high levels of cyanobacteria producing toxins such as BMAA can cause amyotrophic lateral sclerosis (ALS). 
People living within half a mile of cyanobacterially contaminated lakes have had a 2.3 times greater risk of developing ALS than the rest of the population; people around New Hampshire's Lake Mascoma had an up to 25 times greater risk of ALS than the expected incidence. BMAA from desert crusts found throughout Qatar might have contributed to higher rates of ALS in Gulf War veterans. Chemical control Several chemicals can eliminate cyanobacterial blooms from smaller water-based systems such as swimming pools. They include calcium hypochlorite, copper sulphate, Cupricide (chelated copper), and simazine. The calcium hypochlorite amount needed varies depending on the cyanobacteria bloom, and treatment is needed periodically. According to the Department of Agriculture Australia, a rate of 12 g of 70% material in 1000 L of water is often effective to treat a bloom. Copper sulfate is also used commonly, but is no longer recommended by the Australian Department of Agriculture, as it kills livestock, crustaceans, and fish. Cupricide is a chelated copper product that eliminates blooms with lower toxicity risks than copper sulfate. Dosage recommendations vary from 190 mL to 4.8 L per 1000 m². Ferric alum treatments at the rate of 50 mg/L will reduce algae blooms. Simazine, which is also a herbicide, will continue to kill blooms for several days after an application. Simazine is marketed at different strengths (25, 50, and 90%); the recommended amount per cubic metre of water is 8 mL of the 25% product, 4 mL of the 50% product, or 2.2 mL of the 90% product (see the worked example after this section). Climate change Climate change is likely to increase the frequency, intensity and duration of cyanobacterial blooms in many eutrophic lakes, reservoirs and estuaries. Bloom-forming cyanobacteria produce a variety of neurotoxins, hepatotoxins and dermatoxins, which can be fatal to birds and mammals (including waterfowl, cattle and dogs) and threaten the use of waters for recreation, drinking water production, agricultural irrigation and fisheries. Toxic cyanobacteria have caused major water quality problems, for example in Lake Taihu (China), Lake Erie (USA), Lake Okeechobee (USA), Lake Victoria (Africa) and the Baltic Sea. Climate change favours cyanobacterial blooms both directly and indirectly. Many bloom-forming cyanobacteria can grow at relatively high temperatures. Increased thermal stratification of lakes and reservoirs enables buoyant cyanobacteria to float upwards and form dense surface blooms, which gives them better access to light and hence a selective advantage over nonbuoyant phytoplankton organisms. Protracted droughts during summer increase water residence times in reservoirs, rivers and estuaries, and these stagnant warm waters can provide ideal conditions for cyanobacterial bloom development. The capacity of the harmful cyanobacterial genus Microcystis to adapt to elevated CO2 levels was demonstrated in both laboratory and field experiments. Microcystis spp. take up CO2 and bicarbonate and accumulate inorganic carbon in carboxysomes, and strain competitiveness was found to depend on the concentration of inorganic carbon. As a result, climate change and increased CO2 levels are expected to affect the strain composition of cyanobacterial blooms. 
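As a worked example of the dosing figures quoted in the Chemical control section above, the short sketch below scales the calcium hypochlorite and simazine rates to an arbitrary water volume. It is purely illustrative arithmetic based on the rates quoted above, not treatment guidance, and the helper names are hypothetical.

```python
# Illustrative scaling of the treatment rates quoted above (not dosing guidance).

def calcium_hypochlorite_dose_g(volume_l: float) -> float:
    # Quoted rate: 12 g of 70% material per 1000 L of water.
    return 12.0 * volume_l / 1000.0

def simazine_dose_ml(volume_m3: float, strength_percent: int) -> float:
    # Quoted rates per cubic metre: 8 mL (25%), 4 mL (50%), 2.2 mL (90%).
    rate_ml_per_m3 = {25: 8.0, 50: 4.0, 90: 2.2}[strength_percent]
    return rate_ml_per_m3 * volume_m3

# Example: a 50 m^3 (50,000 L) pool.
print(calcium_hypochlorite_dose_g(50_000))   # 600.0 g of 70% calcium hypochlorite
print(simazine_dose_ml(50, 90))              # 110.0 mL of the 90% simazine product
```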
Biology and health sciences
Other organisms
null
6326413
https://en.wikipedia.org/wiki/Hoist%20%28device%29
Hoist (device)
A hoist is a device used for lifting or lowering a load by means of a drum or lift-wheel around which rope or chain wraps. It may be manually operated, electrically or pneumatically driven and may use chain, fiber or wire rope as its lifting medium. The most familiar form is an elevator, the car of which is raised and lowered by a hoist mechanism. Most hoists couple to their loads using a lifting hook. Today, there are a few governing bodies for the North American overhead hoist industry, which include the Hoist Manufacturers Institute (HMI), ASME, and the Occupational Safety and Health Administration. HMI is a product council of the Material Handling Industry of America consisting of hoist manufacturers promoting safe use of their products. Types The word “hoist” is used to describe many different types of equipment that lift and lower loads. For example, many people use “hoist” to describe an elevator. The information contained here pertains specifically to overhead, construction and mine hoists. Overhead hoist Overhead hoists are defined in the American Society of Mechanical Engineers (ASME) B30 standards as a machinery unit that is used for lifting or lowering a freely suspended (unguided) load. These units are typically used in an industrial setting and may be part of an overhead crane. A specific overhead hoist configuration is usually defined by the lifting medium, operation and suspension. The lifting medium is the type of component used to transmit and cause the vertical motion and includes wire rope, chain or synthetic strap, or rope. The operation defines the type of power used to operate the hoisting motion and includes manual power, electric power, hydraulic power or air power. The suspension defines the type of mounting method used to suspend the hoist and includes hook, clevis, lug, trolley, deck, base, wall or ceiling. The most commonly used overhead hoist is electrically powered with wire rope or chain as the lifting medium. Both wire rope and chain hoists have been in common use since the 1800s; however, mass production of electric hoists did not start until the early 1900s, first adopted in Germany. A hoist can be a serial production unit or a custom unit. Serial production hoists are typically more cost-effective and designed for a ten-year life in a light to heavy hoist duty service classification. Custom hoists are typically more expensive and are designed for a heavy to severe hoist duty service classification. Serial production hoists were once regarded as being designed for light to moderate hoist duty service classifications, but since the 1960s this has changed. Over the years the custom hoist market has decreased in size with the advent of the more durable serial production hoists. A machine shop or fabricating shop will typically use a serial production hoist, while a steel mill or NASA may typically use a custom hoist to meet durability and performance requirements. Overhead hoists require proper installation, operation, inspection, and maintenance. When selecting an overhead hoist, operators must consider the average operating time per day, load spectrum, starts per hour, operating period and equipment life. These parameters determine the Hoist Duty Service Classification, which helps hoist installers and users better understand the hoist's useful life and duty service application. 
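As a rough illustration of how those selection parameters feed into a duty classification, the sketch below computes a cubic-mean load spectrum factor of the kind used in ISO 4301 / FEM-style crane and hoist classifications; the class boundaries and labels are simplified assumptions chosen for the example, not values from the HMI or ASME documents cited here.

```python
# Illustrative load-spectrum calculation for hoist duty selection.
# Uses the cubic-mean "load spectrum factor" familiar from ISO 4301 / FEM-style
# classifications; the class boundaries below are simplified assumptions.

def load_spectrum_factor(cycles):
    """cycles: list of (fraction_of_operating_time, load_as_fraction_of_rated_load)."""
    total_time = sum(t for t, _ in cycles)
    return sum((t / total_time) * (load ** 3) for t, load in cycles)

def rough_duty_class(k):
    # Assumed, illustrative boundaries only.
    if k <= 0.125:
        return "light"
    if k <= 0.25:
        return "moderate"
    if k <= 0.50:
        return "heavy"
    return "very heavy"

# Example: a hoist spending 10% of its time at rated load, 40% at half load
# and 50% at one-quarter load.
cycles = [(0.10, 1.00), (0.40, 0.50), (0.50, 0.25)]
k = load_spectrum_factor(cycles)
print(f"load spectrum factor = {k:.3f}, indicative class = {rough_duty_class(k)}")
```

In an actual selection, such a spectrum figure would be considered together with the average operating time per day and starts per hour, using the manufacturer's published duty tables rather than the assumed thresholds above.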
The American Society of Mechanical Engineers also publishes a number of standards related to overhead hoists, including the “ASME B30.16 Standard for Overhead Hoists (Underhung)", which provides additional guidance for the proper design, installation, operation and maintenance of hoists. Construction hoist Also known as a man-lift, buckhoist, temporary elevator, builder hoist, passenger hoist or construction elevator, a construction hoist is commonly used on large scale construction projects, such as high-rise buildings or major hospitals. There are many other uses for the construction elevator. Many other industries use the buckhoist for full-time operations, the purpose being to carry personnel, materials, and equipment quickly between the ground and higher floors, or between floors in the middle of a structure. There are three types: utility (to move material), personnel (to move personnel), and dual-rated, which can do both. The construction hoist is made up of either one or two cars (cages) which travel vertically along stacked mast tower sections. The mast sections are attached to the structure or building at regular intervals for added stability. For precisely controlled travel along the mast sections, modern construction hoists use a motorized rack-and-pinion system that climbs the mast sections at various speeds. While hoists have been predominantly produced in Europe and the United States, China is emerging as a manufacturer of hoists to be used in Asia. In the United States and abroad, general contractors and various other industrial markets rent or lease hoists for specific projects. Rental or leasing companies provide erection, dismantling, and repair services for their hoists to provide general contractors with turnkey services. Also, the rental and leasing companies can provide parts and service for the elevators that are under contract. Mine hoist A mining hoist (also known simply as a hoist or winder) is used in underground mining to raise and lower conveyances within the mine shaft. It is similar to an elevator, used for raising humans, equipment, and assorted loads. Human, animal and water power were used to power the mine hoists documented in Agricola's De Re Metallica, published in 1556. Stationary steam engines were commonly used to power mine hoists through the 19th century and into the 20th, as at the Quincy Mine, where a 4-cylinder cross-compound Corliss engine was used. Modern hoists are powered using electric motors, historically with direct current drives utilizing solid-state converters (thyristors); however, modern large hoists use alternating current drives that are variable-frequency controlled. There are three principal types of hoists used in mining applications: drum hoists, friction (or Koepe) hoists and Blair multi-rope hoists. More generally, a hoist can be defined as any device used to lift heavy materials. Chain hoist Differential pulley
Technology
Tools
null
6326483
https://en.wikipedia.org/wiki/Observational%20study
Observational study
In fields such as epidemiology, social sciences, psychology and statistics, an observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. One common observational study is about the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator. This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group. Observational studies, for lacking an assignment mechanism, naturally present difficulties for inferential analysis. Motivation The independent variable may be beyond the control of the investigator for a variety of reasons: A randomized experiment would violate ethical standards. Suppose one wanted to investigate the abortion – breast cancer hypothesis, which postulates a causal link between induced abortion and the incidence of breast cancer. In a hypothetical controlled experiment, one would start with a large subject pool of pregnant women and divide them randomly into a treatment group (receiving induced abortions) and a control group (not receiving abortions), and then conduct regular cancer screenings for women from both groups. Needless to say, such an experiment would run counter to common ethical principles. (It would also suffer from various confounds and sources of bias, e.g. it would be impossible to conduct it as a blind experiment.) The published studies investigating the abortion–breast cancer hypothesis generally start with a group of women who already have received abortions. Membership in this "treated" group is not controlled by the investigator: the group is formed after the "treatment" has been assigned. The investigator may simply lack the requisite influence. Suppose a scientist wants to study the public health effects of a community-wide ban on smoking in public indoor areas. In a controlled experiment, the investigator would randomly pick a set of communities to be in the treatment group. However, it is typically up to each community and/or its legislature to enact a smoking ban. The investigator can be expected to lack the political power to cause precisely those communities in the randomly selected treatment group to pass a smoking ban. In an observational study, the investigator would typically start with a treatment group consisting of those communities where a smoking ban is already in effect. A randomized experiment may be impractical. Suppose a researcher wants to study the suspected link between a certain medication and a very rare group of symptoms arising as a side effect. Setting aside any ethical considerations, a randomized experiment would be impractical because of the rarity of the effect. There may not be a subject pool large enough for the symptoms to be observed in at least one treated subject. An observational study would typically start with a group of symptomatic subjects and work backwards to find those who were given the medication and later developed the symptoms. Thus a subset of the treated group was determined based on the presence of symptoms, instead of by random assignment. Many randomized controlled trials are not broadly representative of real-world patients and this may limit their external validity. 
Patients who are eligible for inclusion in a randomized controlled trial are usually younger, more likely to be male, healthier and more likely to be treated according to recommendations from guidelines. If and when the intervention is later added to routine care, a large portion of the patients who will receive it may be old, with many concomitant diseases and drug therapies. Types Case-control study: study originally developed in epidemiology, in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute. Cross-sectional study: involves data collection from a population, or a representative subset, at one specific point in time. Longitudinal study: correlational research study that involves repeated observations of the same variables over long periods of time. Cohort study and Panel study are particular forms of longitudinal study. Degree of usefulness and reliability "Although observational studies cannot be used to make definitive statements of fact about the "safety, efficacy, or effectiveness" of a practice, they can: provide information on 'real world' use and practice; detect signals about the benefits and risks of...[the] use [of practices] in the general population; help formulate hypotheses to be tested in subsequent experiments; provide part of the community-level data needed to design more informative pragmatic clinical trials; and inform clinical practice." Bias and compensating methods In all of those cases, if a randomized experiment cannot be carried out, the alternative line of investigation suffers from the problem that the decision of which subjects receive the treatment is not entirely random and thus is a potential source of bias. A major challenge in conducting observational studies is to draw inferences that are acceptably free from influences by overt biases, as well as to assess the influence of potential hidden biases. The following is a non-exhaustive set of problems especially common in observational studies. Matching techniques bias In lieu of experimental control, multivariate statistical techniques allow the approximation of experimental control with statistical control by using matching methods. Matching methods account for the influences of observed factors that might influence a cause-and-effect relationship. In healthcare and the social sciences, investigators may use matching to compare units that nonrandomly received the treatment and control. One common approach is to use propensity score matching in order to reduce confounding (see the illustrative sketch after this section), although this has recently come under criticism for exacerbating the very problems it seeks to solve. Multiple comparison bias Multiple comparison bias can occur when several hypotheses are tested at the same time. As the number of recorded factors increases, the likelihood increases that at least one of the recorded factors will be highly correlated with the data output simply by chance. Omitted variable bias An observer of an uncontrolled experiment (or process) records potential factors and the data output: the goal is to determine the effects of the factors. Sometimes the recorded factors may not be directly causing the differences in the output. There may be more important factors which were not recorded but are, in fact, causal. Also, recorded or unrecorded factors may be correlated, which may yield incorrect conclusions. 
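To make the matching idea above concrete, the following is a minimal sketch of propensity score matching, assuming tabular data with a binary treatment indicator and a numeric outcome. The column names, covariate list and the use of scikit-learn's LogisticRegression are illustrative assumptions, not details of any particular study discussed here.

```python
# Minimal propensity score matching sketch (illustrative only).
# Assumes a pandas DataFrame with covariate columns, a binary "treated" column
# and a numeric "outcome" column; all names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def matched_effect(df: pd.DataFrame, covariates: list) -> float:
    # 1. Estimate propensity scores: P(treated = 1 | covariates).
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariates], df["treated"])
    df = df.assign(pscore=model.predict_proba(df[covariates])[:, 1])

    treated = df[df["treated"] == 1]
    control = df[df["treated"] == 0]

    # 2. For each treated unit, find the nearest-neighbour control by propensity score.
    diffs = []
    for _, row in treated.iterrows():
        j = (control["pscore"] - row["pscore"]).abs().idxmin()
        diffs.append(row["outcome"] - control.loc[j, "outcome"])

    # 3. Average outcome difference over matched pairs (a crude treated-group effect estimate).
    return float(np.mean(diffs))
```

In practice one would also check covariate balance after matching and consider calipers or matching with replacement; the point here is only to show how statistical control attempts to approximate the missing random assignment.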
Selection bias Another difficulty with observational studies is that researchers may themselves be biased in their observational skills. This can lead researchers to (either consciously or unconsciously) seek out the information they are looking for while conducting their research. For example, researchers may exaggerate the effect of one variable, or downplay the effect of another; researchers may even select subjects that fit their conclusions. This selection bias can happen at any stage of the research process, and it introduces bias into the data when certain variables are systematically measured incorrectly. Quality A 2014 (updated in 2024) Cochrane review concluded that observational studies produce results similar to those of randomized controlled trials. The review reported little evidence for significant effect differences between observational studies and randomized controlled trials, regardless of design. Differences need to be evaluated by looking at population, comparator, heterogeneity, and outcomes.
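The propensity score matching approach mentioned under "Matching techniques bias" above can be illustrated with a minimal sketch. The data frame, the column names, and the greedy 1:1 nearest-neighbour strategy below are illustrative assumptions, not details taken from the studies cited in this entry.

```python
# Minimal sketch of propensity score matching (illustrative assumptions throughout).
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

def propensity_score_match(df, treatment_col, covariate_cols):
    """Greedy 1:1 nearest-neighbour matching on the estimated propensity score."""
    # 1. Estimate each unit's probability of receiving the treatment.
    model = LogisticRegression(max_iter=1000)
    model.fit(df[covariate_cols], df[treatment_col])
    df = df.assign(pscore=model.predict_proba(df[covariate_cols])[:, 1])

    treated = df[df[treatment_col] == 1]
    control = df[df[treatment_col] == 0].copy()

    # 2. Pair each treated unit with the closest still-unmatched control unit.
    pairs = []
    for idx, row in treated.iterrows():
        if control.empty:
            break
        j = (control["pscore"] - row["pscore"]).abs().idxmin()
        pairs.append((idx, j))
        control = control.drop(index=j)  # matching without replacement
    return df, pairs

# Tiny synthetic example (made-up data, for illustration only).
rng = np.random.default_rng(0)
n = 200
data = pd.DataFrame({"age": rng.normal(50, 10, n), "severity": rng.normal(0, 1, n)})
data["treated"] = (rng.random(n) < 1 / (1 + np.exp(-data["severity"]))).astype(int)
scored, matched_pairs = propensity_score_match(data, "treated", ["age", "severity"])
```

In practice a caliper (a maximum allowed distance between matched scores) and a post-matching balance check on the covariates are usually added; the sketch omits both for brevity.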
Mathematics
Statistics and probability
null
6326659
https://en.wikipedia.org/wiki/Japanese%20giant%20salamander
Japanese giant salamander
The Japanese giant salamander (Andrias japonicus) is a species of fully aquatic giant salamander endemic to Japan, occurring across the western portion of the main island of Honshu, with smaller populations present on Shikoku and in northern Kyushu. With a length of up to , it is the third-largest salamander in the world, only being surpassed by the very similar and closely related Chinese giant salamander and the South China giant salamander. It is known in Japanese as , literally meaning "giant salamander". Other local names include hanzaki, hanzake, and ankou. This salamander was first catalogued by Europeans when the resident physician of Dejima Island in Nagasaki, Philipp Franz von Siebold, captured an individual and shipped it back to Leiden in the Netherlands, in the 1820s. The species was designated as a special natural monument in 1951, and is protected by the Central Government. It is one of only six species of giant salamanders in the world. Description The Japanese giant salamander can grow to a length of and a weight of . The largest wild specimen on record weighed and was long. It is the third-largest amphibian in the world, only smaller than its close relatives, the South China giant salamander and the Chinese giant salamander. The brown and black mottled skin of A. japonicus provides camouflage against the bottoms of streams and rivers. Its body surface is covered with numerous small warts with distinctive warts concentrating on its head. It has very small eyes with no eyelids and poor eye sight. Its mouth extends across the width of its head, and can open to the width of its body. A. japonicus possesses large skin folds on its neck that effectively increase its overall body surface area. This assists in epidermal gas exchanges, which in turn regulates carbon dioxide and oxygen exchange with the water. Capillaries in the surface of the skin facilitate this gas exchange. The skin folds along each side of the body are more pronounced in the hellbender than in the Japanese giant salamander. The Japanese giant salamander can be distinguished from the Chinese giant salamander by the arrangement of tubercles on the head and throat. The tubercles are larger and more numerous compared to the mostly single and irregularly scattered tubercles of the Chinese giant salamander. The snout is also more rounded, and the tail is slightly shorter. Adult males develop enlarged cloacal glands during the breeding season. Compared to an adult female, an adult male typically possesses a larger and wider head in proportion to its body. It is difficult to distinguish sex outside of the breeding season. Distribution The Japanese giant salamander occurs in southwestern Japan (west of Gifu Prefecture in Honshu and parts of Shikoku and Kyushu). In particular, Okayama, Hyogo, Shimane, Tottori, Yamaguchi, Mie, Ehime, Gifu, and Ōita Prefectures are known to harbor its robust populations. They are typically found in fast-flowing mountain streams of these prefectures. It has been speculated that some of the populations in Wakayama Prefecture were introduced by humans and it is unknown whether naturally-distributed populations exist in Wakayama Prefecture. The Japanese giant salamander occurs in freshwater habitats ranging from relatively large river (20–50 m) to small headwater streams (0.5 - 4 m). Smaller breeding adults tend to use small headwater streams presumably in order to avoid intraspecific competition with larger individuals in larger streams. 
Mark-recapture records suggest that giant salamanders migrate between a mainstem and tributaries of the same river. Environmental DNA surveys and follow-up physical field surveys suggest that small headwater streams likely serve as important habitats for juveniles and larvae. While habitat degradation threatens the Japanese giant salamander, it can inhabit disturbed streams surrounded by agricultural fields such as rice paddies. Adults appear to do well in streams surrounded by rice paddy fields because the paddies provide habitat for frogs, which are the primary diet of adult giant salamanders in such streams. However, streams surrounded by rice paddy fields are typically characterized by agricultural dams and concrete stream banks, which likely impose a negative impact on reproduction and thus result in low recruitment. Behavior The Japanese giant salamander is restricted to streams with clear, cool water. Due to its large size and lack of gills, it is confined to flowing water where oxygen is abundant. It is entirely aquatic and almost entirely nocturnal. Unlike typical pond-breeding salamanders whose juveniles migrate to land after losing their gills through metamorphosis, it stays in the aquatic habitat even after metamorphosis and breaches its head above the surface to obtain air without venturing out of the water and onto land. The salamander also absorbs oxygen through its skin, which has many folds to increase surface area. When threatened, the Japanese giant salamander can excrete a strong-smelling, milky substance. It has very poor eyesight, and possesses special sensory cells covering its skin, running from head to toe, the lateral line system. The hair-like shapes of these sensory cells detect minute vibrations in the environment, and are quite similar to the hair cells of the human inner ear. This feature is essential for hunting prey due to its poor eyesight. Adults feed mainly on freshwater crabs, other crustaceans, worms, insects, frogs, and fish. It has a very slow metabolism and can sometimes go for weeks without eating. It lacks natural competitors. It is a long-lived species, with the captive record being an individual that lived in the Natura Artis Magistra, the Netherlands, for 52 years. In the wild, it may live for nearly 80 years. Lifecycle The Japanese giant salamander remains in bodies of water its entire life. During the mating season, typically in late August and early September, sexually mature males start actively searching for suitable nesting sites and often migrate upstream into smaller sections of the river or its tributaries. Because of the limited availability of suitable nesting sites, only large and competitive males are able to occupy nesting sites and become den masters. A den master diligently cleans his den and guards it against intruders, including other males who try to steal the den, while allowing a sexually active female to enter. Mating begins as the female starts laying eggs and the den master starts releasing sperm, which often stimulates other subordinate males hiding around the den to enter and join the mating. As a result, a single female often mates with multiple males. The den master stays in the den with the fertilized eggs while the other males and the female leave. He provides parental care for the embryos by guarding the eggs and fanning water over them with his tail to increase oxygen flow.
As the den master kicks his back legs and fans with his tail, organic debris is swept out of the nest and carried away by the water current. If this behavior were not performed, organic material would build up in the nest and lead to water mold infection. Therefore, the behavior is classified as pre-ovipositional parental care. The den master continues providing parental care for the hatchlings until the following spring, when the larvae start dispersing from the nest. Researchers have also observed that den masters consume eggs and larvae that show signs of failed fertilization, death, or water mold infection. The researchers termed this behavior of selectively eating his own eggs or larvae "hygienic filial cannibalism" and hypothesize that it increases the survivorship of the remaining offspring by preventing water mold infection from spreading from dead offspring to healthy ones. Conservation Threats The Japanese giant salamander is threatened by pollution, habitat loss (among other changes, by the silting up of the rivers where it lives), dams and concrete banks, and invasive species. In particular, the construction of concrete streambanks and agricultural dams throughout the distribution range has imposed a significant negative impact on giant salamanders. Concrete banks have deprived the salamanders of habitats suited for nesting sites, and dams block migration paths and have caused habitat fragmentation. With ongoing climate change, the frequency and intensity of rainstorms in Japan are predicted to increase. These rainstorms will likely destroy stream banks more frequently, which could result in the construction of more flood-control dams and concrete banks. Introgressive hybridization between the native Japanese giant salamander and the introduced Chinese giant salamander (A. davidianus) is one of the major conservation challenges. It has been suggested that, although the details are not known, the Chinese giant salamanders imported to Japan for food in 1972 were the source of the ongoing introgressive hybridization. In the Kamo River in Kyoto Prefecture, a study conducted from 2011 to 2013 found that 95% of the captured giant salamanders were hybrids. The introgressive hybridization appears to be spreading across several watersheds. Although the Chinese giant salamander has recently been split up into multiple species, recent genetic studies have confirmed that the Chinese giant salamander introduced to Japan is the initially described Chinese species, A. davidianus. In some regions, giant salamanders used to be hunted as a source of food, but hunting has ceased because of the protection acts established after World War II. Status As of 2022 the Japanese giant salamander is considered Vulnerable by the IUCN, and is included on CITES Appendix I. It is considered Vulnerable by the Japanese Ministry of the Environment. Additionally, it has been given the highest protection as a "Special Natural Monument" by the Japanese Agency for Cultural Affairs since 1952 due to its cultural and educational significance. Efforts Despite the national protection and conservation status, there have been no conservation programs or actions initiated by government agencies. Instead, nonprofit organizations such as the Japanese Giant Salamander Society and the Hanzaki Research Institute of Japan have organized volunteers to conduct population assessments in some areas.
The Japanese Giant Salamander Society also organizes annual meetings to promote conservation education and information sharing about the species. There is no range-wide conservation or recovery program, even though such a program is essential for a species whose populations have been declining throughout its range. The Hiroshima City Asa Zoological Park of Japan was the first domestic organization to successfully breed Japanese giant salamanders in captivity. Several of their offspring were given to the National Zoo of the United States to establish a breeding program. Although Asa Zoological Park has not released any offspring into streams, it has the capacity to carry out a headstarting program if needed. Cultural references The Japanese giant salamander has been the subject of legend and artwork in Japan, for example, in the ukiyo-e work by Utagawa Kuniyoshi. The well-known Japanese mythological creature known as the kappa may be inspired by the Japanese giant salamander. There is a giant salamander festival every year on August 8 in Yubara, Maniwa City, Okayama Prefecture, to honour the animal and celebrate its life. The giant salamanders are called hanzaki in Yubara, due to the belief that even if they are ripped in half ('han') they continue to survive. There are two giant salamander floats: a dark male and a red female.
Biology and health sciences
Salamanders and newts
Animals
6327661
https://en.wikipedia.org/wiki/Technology%20adoption%20life%20cycle
Technology adoption life cycle
The technology adoption lifecycle is a sociological model that describes the adoption or acceptance of a new product or innovation, according to the demographic and psychological characteristics of defined adopter groups. The process of adoption over time is typically illustrated as a classical normal distribution or "bell curve". The model calls the first group of people to use a new product "innovators", followed by "early adopters". Next come the "early majority" and "late majority", and the last group to eventually adopt a product is called the "laggards" or "phobics". For example, a phobic may only use a cloud service when it is the only remaining method of performing a required task, but the phobic may not have in-depth technical knowledge of how to use the service. The demographic and psychological (or "psychographic") profiles of each adoption group were originally specified by agricultural researchers in 1956: innovators – had larger farms, were more educated, more prosperous and more risk-oriented early adopters – younger, more educated, tended to be community leaders, less prosperous early majority – more conservative but open to new ideas, active in the community and influential to neighbors late majority – older, less educated, fairly conservative and less socially active laggards – very conservative, had small farms and capital, oldest and least educated The model has subsequently been adapted for many areas of technology adoption in the late 20th century, for example in the spread of policy innovations among U.S. states. Adaptations of the model The model has spawned a range of adaptations that extend the concept or apply it to specific domains of interest. In his book Crossing the Chasm, Geoffrey Moore proposes a variation of the original lifecycle. He suggests that for discontinuous innovations, which may result in a Foster disruption based on an s-curve, there is a gap or chasm between the first two adopter groups (innovators/early adopters) and the vertical markets. Disruption as it is used today is of the Clayton M. Christensen variety; these disruptions are not s-curve based. In educational technology, Lindy McKeown has provided a similar model (a pencil metaphor) describing the Information and Communications Technology uptake in education. In medical sociology, Carl May has proposed normalization process theory that shows how technologies become embedded and integrated in health care and other kinds of organization. Wenger, White and Smith, in their book Digital habitats: Stewarding technology for communities, talk of technology stewards: people with sufficient understanding of the technology available and the technological needs of a community to steward the community through the technology adoption process. Rayna and Striukova (2009) propose that the choice of initial market segment has crucial importance for crossing the chasm, as adoption in this segment can lead to a cascade of adoption in the other segments. This initial market segment has, at the same time, to contain a large proportion of visionaries, to be small enough for adoption to be observed from within the segment and from other segments, and to be sufficiently connected with other segments. If this is the case, the adoption in the first segment will progressively cascade into the adjacent segments, thereby triggering adoption by the mass market. Stephen L. Parente (1995) implemented a Markov chain to model economic growth across different countries given different technological barriers.
In product marketing, Warren Schirtzinger proposed an expansion of the original lifecycle (the Customer Alignment Lifecycle) which describes the configuration of five different business disciplines that follow the sequence of technology adoption. Examples One way to model product adoption is to understand that people's behaviors are influenced by their peers and by how widespread they think a particular action is. For many format-dependent technologies, people have a non-zero payoff for adopting the same technology as their closest friends or colleagues. If two users both adopt product A, they might get a payoff a > 0; if they both adopt product B, they get b > 0. But if one adopts A and the other adopts B, they both get a payoff of 0. A threshold can be set for each user to adopt a product. Say that a node v in a graph has d neighbors: then v will adopt product A if the fraction of its neighbors that have already adopted A is greater than or equal to some threshold. For example, if v's threshold is 2/3, and only one of its two neighbors adopts product A, then v will not adopt A. Using this model, we can deterministically model product adoption on sample networks. History The technology adoption lifecycle is a sociological model that is an extension of an earlier model called the diffusion process, which was originally published in 1956 by George M. Beal and Joe M. Bohlen. This article did not acknowledge the contributions of Beal's Ph.D. student Everett M. Rogers; however Beal, Bohlen and Rogers soon co-authored a scholarly article on their methodology. This research built on prior work by Neal C. Gross and Bryce Ryan. Rogers generalized the diffusion process to innovations outside the agricultural sector of the midwestern USA, and successfully popularized his generalizations in his widely acclaimed 1962 book Diffusion of Innovations (now in its fifth edition).
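The deterministic threshold rule described in the Examples section above can be made concrete with a short sketch. The graph, the seed adopters, and the threshold value q below are illustrative assumptions; in the payoff framing above, the natural threshold works out to q = b/(a + b), but any value can be plugged in.

```python
# Sketch of deterministic, threshold-based adoption of "product A" on a small network.
# The graph, seed adopters, and threshold q are made-up values for illustration.

def simulate_adoption(neighbors, seed_adopters, q):
    """Repeatedly apply the rule: adopt A once at least a fraction q of your
    neighbors have adopted A. Stop when no further node switches."""
    adopters = set(seed_adopters)
    changed = True
    while changed:
        changed = False
        for node, nbrs in neighbors.items():
            if node in adopters or not nbrs:
                continue
            fraction = sum(n in adopters for n in nbrs) / len(nbrs)
            if fraction >= q:
                adopters.add(node)
                changed = True
    return adopters

# A small path-shaped network with nodes 1..5 and two early adopters.
graph = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}

print(simulate_adoption(graph, seed_adopters={1, 2}, q=1 / 2))  # cascades to all 5 nodes
print(simulate_adoption(graph, seed_adopters={1, 2}, q=2 / 3))  # stalls at the seed set
```

The two runs illustrate the point made in the text: whether adoption spreads beyond the initial group depends on how the threshold compares with the local network structure.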
Technology
General
null
3581788
https://en.wikipedia.org/wiki/Opacity
Opacity
Opacity is the measure of impenetrability to electromagnetic or other kinds of radiation, especially visible light. In radiative transfer, it describes the absorption and scattering of radiation in a medium, such as a plasma, dielectric, shielding material, glass, etc. An opaque object is neither transparent (allowing all light to pass through) nor translucent (allowing some light to pass through). When light strikes an interface between two substances, in general, some may be reflected, some absorbed, some scattered, and the rest transmitted (also see refraction). Reflection can be diffuse, for example light reflecting off a white wall, or specular, for example light reflecting off a mirror. An opaque substance transmits no light, and therefore reflects, scatters, or absorbs all of it. Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including opacity, transparency and translucency among the involved aspects. Both mirrors and carbon black are opaque. Opacity depends on the frequency of the light being considered. For instance, some kinds of glass, while transparent in the visual range, are largely opaque to ultraviolet light. More extreme frequency-dependence is visible in the absorption lines of cold gases. Opacity can be quantified in many ways; for example, see the article mathematical descriptions of opacity. Different processes can lead to opacity, including absorption, reflection, and scattering. Etymology Late Middle English opake, from Latin opacus 'darkened'. The current spelling (rare before the 19th century) has been influenced by the French form. Radiopacity Radiopacity is preferentially used to describe opacity of X-rays. In modern medicine, radiodense substances are those that will not allow X-rays or similar radiation to pass. Radiographic imaging has been revolutionized by radiodense contrast media, which can be passed through the bloodstream, the gastrointestinal tract, or into the cerebral spinal fluid and utilized to highlight CT scan or X-ray images. Radiopacity is one of the key considerations in the design of various devices such as guidewires or stents that are used during radiological intervention. The radiopacity of a given endovascular device is important since it allows the device to be tracked during the interventional procedure. Quantitative definition The words "opacity" and "opaque" are often used as colloquial terms for objects or media with the properties described above. However, there is also a specific, quantitative definition of "opacity", used in astronomy, plasma physics, and other fields, given here. In this use, "opacity" is another term for the mass attenuation coefficient (or, depending on context, mass absorption coefficient, the difference is described here) at a particular frequency of electromagnetic radiation. More specifically, if a beam of light with frequency $\nu$ travels through a medium with opacity $\kappa_\nu$ and mass density $\rho$, both constant, then the intensity will be reduced with distance x according to the formula $I(x) = I_0 e^{-\kappa_\nu \rho x}$, where x is the distance the light has traveled through the medium, $I(x)$ is the intensity of light remaining at distance x, and $I_0$ is the initial intensity of light, at $x = 0$. For a given medium at a given frequency, the opacity has a numerical value that may range between 0 and infinity, with units of length²/mass.
Opacity in air pollution work refers to the percentage of light blocked instead of the attenuation coefficient (aka extinction coefficient) and varies from 0% light blocked to 100% light blocked: $\text{opacity} = 100\% \times (1 - I/I_0)$. Planck and Rosseland opacities It is customary to define the average opacity, calculated using a certain weighting scheme. Planck opacity (also known as Planck-Mean-Absorption-Coefficient) uses the normalized Planck black-body radiation energy density distribution, $B_\nu(T)$, as the weighting function, and averages $\kappa_\nu$ directly: $\kappa_{Pl} = \frac{\int_0^\infty \kappa_\nu B_\nu(T)\,d\nu}{\int_0^\infty B_\nu(T)\,d\nu} = \frac{\pi}{\sigma T^4}\int_0^\infty \kappa_\nu B_\nu(T)\,d\nu$, where $\sigma$ is the Stefan–Boltzmann constant. Rosseland opacity (after Svein Rosseland), on the other hand, uses a temperature derivative of the Planck distribution, $\partial B_\nu(T)/\partial T$, as the weighting function, and averages $\kappa_\nu^{-1}$: $\frac{1}{\kappa} = \frac{\int_0^\infty \kappa_\nu^{-1}\,\frac{\partial B_\nu(T)}{\partial T}\,d\nu}{\int_0^\infty \frac{\partial B_\nu(T)}{\partial T}\,d\nu}$. The photon mean free path is $\lambda_\nu = (\kappa_\nu \rho)^{-1}$. The Rosseland opacity is derived in the diffusion approximation to the radiative transport equation. It is valid whenever the radiation field is isotropic over distances comparable to or less than a radiation mean free path, such as in local thermal equilibrium. In practice, the mean opacity for Thomson electron scattering is $\kappa \approx 0.20\,(1+X)\ \mathrm{cm^2\,g^{-1}}$, where $X$ is the hydrogen mass fraction. For nonrelativistic thermal bremsstrahlung, or free-free transitions, assuming solar metallicity, it is approximately $\kappa \approx 0.64\times10^{23}\,\rho[\mathrm{g\,cm^{-3}}]\,T[\mathrm{K}]^{-7/2}\ \mathrm{cm^2\,g^{-1}}$. The Rosseland mean attenuation coefficient is obtained by applying the same average to $\kappa_\nu\rho$: $\frac{1}{\kappa\rho} = \frac{\int_0^\infty (\kappa_\nu\rho)^{-1}\,\frac{\partial B_\nu(T)}{\partial T}\,d\nu}{\int_0^\infty \frac{\partial B_\nu(T)}{\partial T}\,d\nu}$.
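As a numerical illustration of the two weighted averages defined above, the following sketch computes Planck and Rosseland means for a made-up power-law $\kappa_\nu$. The opacity law, the temperature, and the frequency grid are assumptions chosen only to exercise the formulas, not physical data.

```python
# Sketch: numerical Planck and Rosseland mean opacities for a toy opacity law.
# The power-law kappa_nu below is an illustrative assumption, not physical data.
import numpy as np

h = 6.62607015e-34   # Planck constant (J s)
c = 2.99792458e8     # speed of light (m/s)
kB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_B(nu, T):
    """Planck spectral radiance B_nu(T)."""
    return (2 * h * nu**3 / c**2) / np.expm1(h * nu / (kB * T))

def dB_dT(nu, T, eps=1e-3):
    """Temperature derivative of B_nu(T), by central finite difference."""
    return (planck_B(nu, T * (1 + eps)) - planck_B(nu, T * (1 - eps))) / (2 * T * eps)

def mean_opacities(kappa_nu, T, nu):
    """Return (Planck mean, Rosseland mean) of kappa_nu over the grid nu."""
    B = planck_B(nu, T)
    dB = dB_dT(nu, T)
    kappa_planck = np.trapz(kappa_nu(nu) * B, nu) / np.trapz(B, nu)
    kappa_rosseland = np.trapz(dB, nu) / np.trapz(dB / kappa_nu(nu), nu)
    return kappa_planck, kappa_rosseland

T = 5800.0                                        # illustrative temperature in K
nu = np.linspace(1e12, 3e15, 20000)               # frequency grid in Hz
toy_kappa = lambda f: 0.1 * (f / 1e14) ** -1.5    # made-up kappa_nu in m^2/kg
print(mean_opacities(toy_kappa, T, nu))
```

Because the Rosseland average weights the reciprocal of $\kappa_\nu$, it is dominated by the most transparent frequencies, so for a toy law like this one it comes out smaller than the Planck mean.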
Physical sciences
Optics
Physics
3583632
https://en.wikipedia.org/wiki/Palaeognathae
Palaeognathae
Palaeognathae (; ) is an infraclass of birds, called paleognaths or palaeognaths, within the class Aves of the clade Archosauria. It is one of the two extant infraclasses of birds, the other being Neognathae, both of which form Neornithes. Palaeognathae contains five extant orders consisting of four flightless lineages (plus two that are extinct), termed ratites, and one flying lineage, the Neotropic tinamous. There are 47 species of tinamous, five of kiwis (Apteryx), three of cassowaries (Casuarius), one of emus (Dromaius) (another became extinct in historic times), two of rheas (Rhea) and two of ostriches (Struthio). Recent research has indicated that paleognaths are monophyletic but the traditional taxonomic split between flightless and flighted forms is incorrect; tinamous are within the ratite radiation, meaning flightlessness arose independently multiple times via parallel evolution. There are three extinct groups that are undisputed members of Palaeognathae: the Lithornithiformes, the Dinornithiformes (moas) and the Aepyornithiformes (elephant birds), the latter two of which became extinct in the last 1250 years. There are other extinct birds which have been allied with the Palaeognathae by at least one author, but their affinities are a matter of dispute. The word paleognath is derived from the Ancient Greek for 'old jaws' in reference to the skeletal anatomy of the palate, which is described as more primitive and reptilian than that in other birds. Paleognathous birds retain some basal morphological characters but are by no means living fossils as their genomes continued to evolve at the DNA level under selective pressure at rates comparable to the Neognathae branch of living birds, though there is some controversy about the precise relationship between them and the other birds. There are also several other scientific controversies about their evolution (see below). Origin and evolution No unambiguously paleognathous fossil birds are known until the Cenozoic (though birds occasionally interpreted as lithornithids occur in Albian appalachian sites), but there have been many reports of putative paleognaths, and it has long been inferred that they may have evolved in the Cretaceous. Given the Northern Hemisphere location of the morphologically most basal fossil forms (such as Lithornis, Pseudocrypturus, Paracathartes and Palaeotis), a Laurasian origin for the group can be inferred. The present almost entirely Gondwanan distribution would then have resulted from multiple colonisations of the southern landmasses by flying forms that subsequently evolved flightlessness, and in many cases, gigantism. One study of molecular and paleontological data found that modern bird orders, including the paleognathous ones, began diverging from one another in the Early Cretaceous. Benton (2005) summarized this and other molecular studies as implying that paleognaths should have arisen 110 to 120 million years ago in the Early Cretaceous. He points out, however, that there is no fossil record until 70 million years ago, leaving a 45 million year gap. He asks whether the paleognath fossils will be found one day, or whether the estimated rates of molecular evolution are too slow, and that bird evolution actually accelerated during an adaptive radiation after the Cretaceous–Paleogene boundary (K–Pg boundary). 
Other authors questioned the monophyly of the Palaeognathae on various grounds, suggesting that they could be a hodgepodge of unrelated birds that have come to be grouped together because they are coincidentally flightless. Unrelated birds might have developed ratite-like anatomies multiple times around the world through convergent evolution. McDowell (1948) asserted that the similarities in the palate anatomy of paleognaths might actually be neoteny, or retained embryonic features. He noted that there were other features of the skull, such as the retention of sutures into adulthood, that were like those of juvenile birds. Thus, perhaps the characteristic palate was actually a frozen stage that many carinate bird embryos passed through during development. The retention of early developmental stages, then, may have been a mechanism by which various birds became flightless and came to look similar to one another. Hope (2002) reviewed all known bird fossils from the Mesozoic looking for evidence of the origin of the evolutionary radiation of the Neornithes. That radiation would also signal that the paleognaths had already diverged. She notes five Early Cretaceous taxa that have been assigned to the Palaeognathae. She finds that none of them can be clearly assigned as such. However, she does find evidence that the Neognathae and, therefore, also the Palaeognathae had diverged no later than the Early Campanian age of the Cretaceous period. Vegavis is a fossil bird from the Maastrichtian stage of Late Cretaceous Antarctica. Vegavis is most closely related to true ducks. Because virtually all phylogenetic analyses predict that ducks diverged after paleognaths, this is evidence that paleognaths had already arisen well before that time. An exceptionally preserved specimen of the extinct flying paleognathe Lithornis was published by Leonard et al. in 2005. It is an articulated and nearly complete fossil from the early Eocene of Denmark, and thought to have the best preserved lithornithiform skull ever found. The authors concluded that Lithornis was a close sister taxon to tinamous, rather than ostriches, and that the lithornithiforms + tinamous were the most basal paleognaths. They concluded that all ratites, therefore, were monophyletic, descending from one common ancestor that became flightless. They also interpret the paleognath-like Limenavis, from Early Cretaceous Patagonia, as possible evidence of a Cretaceous and monophyletic origin for paleognaths. Mysterious large eggs from the Pliocene of Lanzarote in the Canary Islands have been attributed to ratites. An ambitious genomic analysis of the living birds was performed in 2007, and it contradicted Leonard et al. (2005). It found that tinamous are not primitive within the paleognaths, but among the most advanced. This requires multiple events of flightlessness within the paleognaths and partially refutes the Gondwana vicariance hypothesis (see below). The study looked at DNA sequences from 19 loci in 169 species. It recovered evidence that the paleognaths are one natural group (monophyletic), and that their divergence from other birds is the oldest divergence of any extant bird groups. It also placed the tinamous within the ratites, more derived than ostriches, or rheas and as a sister group to emus and kiwis, and this makes ratites paraphyletic. A related study addressed the issue of paleognath phylogeny exclusively. It used molecular analysis and looked at twenty unlinked nuclear genes. 
This study concluded that there were at least three events of flightlessness that produced the different ratite orders, that the similarities between the ratite orders are partly due to convergent evolution, and that the Palaeognathae are monophyletic, but the ratites are not. Beginning in 2010, DNA analysis studies have shown that tinamous are the sister group to extinct moa of New Zealand. A 2020 molecular study of all bird orders found paleognaths and neognaths to have diverged in the Late Cretaceous or earlier, before 70 million years ago. However, all modern paleognath orders only originated in the latest Paleocene and afterwards, with ostriches diverging in the latest Paleocene, rheas in the early Eocene, kiwis (and presumably elephant birds) very shortly after in the early Eocene, and finally Casuariiformes and tinamous (and presumably moas) diverging from one another in the mid-Eocene. History of classifications In the history of biology there have been many competing taxonomies of the birds now included in the Palaeognathae. The topic has been studied by Dubois (1891), Sharpe (1891), Shufeldt (1904), Sibley and Ahlquist (1972, 1981) and Cracraft (1981). Merrem (1813) is often credited with classifying the paleognaths together, and he coined the taxon "Ratitae" (see above). However, Linnaeus (1758) placed cassowaries, emus, ostriches, and rheas together in Struthio. Lesson (1831) added the kiwis to the Ratitae. Parker (1864) reported the similarities of the palates of the tinamous and ratites, but Huxley (1867) is more widely credited with this insight. Huxley still placed the tinamous with the Carinatae of Merrem because of their keeled sterna, and thought that they were most closely related to the Galliformes. Pycraft (1900) presented a major advance when he coined the term Palaeognathae. He rejected the Ratitae-Carinatae classification that separated tinamous and ratites. He reasoned that a keelless, or "ratite", sternum could easily evolve in unrelated birds that independently became flightless. He also recognized that the ratites were secondarily flightless. His subdivisions were based on the characters of the palatal skeleton and other organ systems. He established seven roughly modern orders of living and fossil paleognaths (Casuarii, Struthiones, Rheae, Dinornithes, Aepyornithes, Apteryges, and Crypturi – the latter his term for tinamous, after the Tinamou genus Crypturellus). The Palaeognathae are usually considered a superorder, but authors have treated them as a taxon as high as subclass (Stresemann 1927–1934) or as low as an order (Cracraft 1981 and the IUCN, which includes all paleognaths in an expanded Struthioniformes). Palaeognathae was defined in the PhyloCode by George Sangster and colleagues in 2022 as "the least inclusive crown clade containing Tinamus major and Struthio camelus". Cladistics Cladogram based on Mitchell (2014) with some clade names after Yuri et al. (2013) Yuri et al. (2013) named the clades Notopalaeognathae and Novaeratitae, the former defined in the PhyloCode by Sangster et al. (2022) as "the least inclusive crown clade containing Rhea americana, Tinamus major, and Apteryx australis", while the latter also defined in the PhyloCode by Sangster et al. (2022) as "the least inclusive crown clade containing Apteryx australis and Casuarius casuarius". 
Notopalaeognathae represents the grouping containing the majority of ratites with the exception of ostriches, and the clade Novaeratitae was named to support the relationship between kiwis, cassowaries, emus, and the extinct elephant birds. Cloutier, A. et al. (2019) in their molecular study places ostriches as the basal lineage with the rhea as the next most basal. An alternative phylogeny was found by Kuhl, H. et al. (2020). In this treatment, all members of Palaeognathae are classified in Struthioniformes, but they are still shown as distinct orders here. Other studies have suggested that the relationships between the four main groups of non-ostrich palaeognaths (Casuariiformes, Rheiformes, Apteryformes+Aepyornithformes and Tinamiformes+Dinornithformes) are an effective polytomy, with only slightly more support for Novaeratitae over the alternative hypotheses of Apterygiformes+Aepyornithformes being more closely related to Rheiformes or to Tinamiformes+Dinornithformes. This lineage containing the sister relationship between tinamous and moas was given the clade name Dinocrypturi, being named and defined in the PhyloCode by Sangster et al. (2022) as "the smallest clade containing Tinamus major and Dinornis novaezealandiae". Description Paleognaths are named for a characteristic, complex architecture of the bones in the bony palate. Cracraft (1974) defined it with five characters. The vomer is large and articulates with the premaxillae and maxillopalatines anteriorly. Posteriorly the vomer fuses to the ventral surface of the pterygoid, and the palatines fuse to the ventral surface of this pterygovomer articulation. The pterygoid prevents the palatine from articulating medially with the basisphenoid. The palatine and pterygoid fuse into a rigid joint. The articulation on the pterygoid for the basipterygoid process of the basicranium is located near the articulation between the pterygoid and quadrate. The pterygoid–quadrate articulation is complex and includes the orbital process of the quadrate. Paleognaths share similar pelvis anatomy. There is a large, open ilio–ischiatic fenestra in the pelvis. The pubis and ischium are likely to be longer than the ilium, protruding out beneath the tail. The postacetabular portion of the pelvis is longer than the preacetabular portion. Paleognaths share a pattern of grooves in the horny covering of the bill. This covering is called the rhamphotheca. The paleognath pattern has one central strip of horn, with long, triangular, strips to either side. In paleognaths, the male incubates the eggs. The male may include in his nest the eggs of one female or more than one. He may also have eggs deposited in his nest by females that did not breed with him, in cases of nest parasitism. Only in ostriches and the great spotted kiwi does the female also assist in incubating the eggs. The tinamous of Central and South America are primarily terrestrial, though they fly weakly. Tinamous have very short tail feathers, giving them an almost tailless aspect. In general, they resemble galliform birds like quails and grouse. Tinamous have a very long, keeled, breastbone with an unusual three-pronged shape. This bone, the sternum, has a central blade (the Carina sterni), with two long, slender lateral trabeculae, which curve to either side and nearly touch the keel posteriorly. These trabeculae may also be thought of as the rims of two large foramina that incise the posterior edge of the sternum, and extend almost its whole length. 
Tinamous have a proper semicircular furcula, with no trace of a hypocleidium. There is an acute angle between the scapula and coracoid, as in all flying birds. The pelvis has an open ilio–ischiatic fenestra that incises the posterior edge between the ilium and ischium, as in all paleognaths. Tinamous have no true pygostyle, their caudal vertebrae remain unfused, as in ratites. Tinamou feathers look like those of volant birds in that they have a rachis and two vanes. The structure of tinamou feathers is unique, however, in that they have barbs that remain joined at their tips. Thus the parallel barbs are separated only by slits between them. Tinamous have uropygial glands. Ratite birds are strictly flightless and their anatomy reflects specializations for terrestrial life. The term "ratite" is from the Latin word for raft, ratis, because they possess a flat breastbone, or sternum, shaped like a raft. This characteristic sternum differs from that in flighted birds, where the pectoral musculature is disproportionately large to provide the power for wingbeats and the sternum develops a prominent keel, or carina sterni to anchor these muscles. The clavicles do not fuse into a furcula. Instead, if present at all, each is splint-like and lies along the medial border of the coracoid, attached there by a coraco–clavicular ligament. There is an obtuse angle between the scapula and coracoid, and the two bones fuse together to form a scapulocoracoid. Ratites have reduced and simplified wing structures and strong legs. Except in some rhea wing feathers, the barb filaments that make up the vanes of the feathers do not lock tightly together, giving the plumage a shaggier look and making it unnecessary to oil their feathers. Adult ratites have no preen gland (uropygial gland) that contains preening oil. Paleognaths as a whole tend to have proportionally small brains, and are among the living birds with the most limited cognitive abilities. Kiwis are exceptional, however, and have large brains comparable to those of parrots and songbirds, though evidence for similar levels of behaviour complexity is currently lacking. Sizes Living members of Palaeognathae range from to and weight can be from . Ostriches are the largest struthioniforms (members of the order Struthioniformes), with long legs and neck. They range in height from and weigh from . They have loose-feathered wings. Males have black and white feathers while the female has grayish brown feathers. They are unique among birds in that they retain only the third and fourth toe on each foot. Ostrich wings have claws, or unguals, on the first and second fingers (and, in some individuals, also on the third). Ostriches differ from other paleognaths in that they have a reduced vomer bone of the skull. Emus are in height and weigh . They have short wings and the adults have brown feathers. Rheas are and weigh . Their feathers are gray or spotted brown and white. They have large wings but no tail feathers. They have no clavicles. Cassowaries are in height and weigh . They have rudimentary wings with black feathers and six stiff, porcupine-like, quills in the place of their primary and secondary feathers. Kiwis are the smallest of ratites, ranging in height from and weight . They have shaggy brown feathers. Tinamous range in size from and weigh . Locomotion Many of the larger ratite birds have extremely long legs and the largest living bird, the ostrich, can run at speeds over 35 mph (60 km/h). Emus have long, strong legs and can run up to . 
Cassowaries and rheas show a similar likeness in agility and some extinct forms may have reached speeds of 45 mph (75 km/h). Biogeography Today, the ratites are largely restricted to the Southern Hemisphere, though across the Cenozoic they were also present in Europe, North America and Asia. In the Cretaceous, these southern continents were connected, forming a single continent called Gondwana. Gondwana is the crucial territory in a major scientific question about the evolution of Palaeognathae, and thus about the evolution of all of the Neornithes. There are two theories regarding the evolution of paleognaths. According to the Gondwana vicariance hypothesis, the paleognaths evolved once, from one ancestor, on Gondwana during the Cretaceous, and then rode on the daughter landmasses that became today's southern continents. This hypothesis is supported most strongly by molecular clock studies, but it is weakened by the lack of any Cretaceous or southern fossil paleognaths, as well as the early radiation of paleognaths in Laurasian landmasses. According to the Tertiary radiation hypothesis, they evolved after the Cretaceous–Paleogene extinction event from multiple flying ancestors on multiple continents around the world. This hypothesis is supported by molecular phylogeny studies and matches the fossil record, but it is weakened by morphological phylogenetic studies. Both hypotheses have been supported and challenged by many studies by many authors. A 2016 study of both genetic and morphological divergence concludes that the group had a Laurasian origin. Gondwana vicariance hypothesis Cracraft (2001) gave a comprehensive review to the data and strongly supported the Gondwana vicariance hypothesis with phylogenetic evidence and historical biogeography. He cites molecular clock studies that show a basal divergence date for neornithes being around 100 Mya. He credits the authors of the molecular clock studies with the observation that the lack of southern paleognath fossils may correspond to the relatively scarce southern Cretaceous deposits, and the relative lack of paleontological field work in the Southern Hemisphere. Moreover, Cracraft synthesizes the morphological and molecular studies, noting conflicts between the two, and finds that the bulk of the evidence favors paleognath monophyly. He also notes that not only the ratites, but other basal groups of neognathous birds, show trans-Antarctic distribution, as would be expected if the paleognaths and neognaths had diverged in Gondwana. Geological analyses have suggested that New Zealand may have been entirely under water as recently as 28 Mya, making it impossible for flightless birds to have survived. However, the discovery of a Sphenodon fossil dating to the Early Miocene 19–16 Mya raises question as to whether the island mass was completely submerged. This finding offers further evidence that ancient Sphenodon species lived on some portion of the land mass since it separated from Gondwana approximately 82 Mya. Evidence of a sea level rise submerging much of New Zealand is generally accepted, but there is a debate about how much of New Zealand was submerged. A Sphenodon species surviving on a remnant part of the island suggests that larger species may have survived as well. Ultimately, the earliest recorded paleognaths are flying, presumably plesiomorphic lithornithids, found quite possibly as early as the Late Cretaceous in North America, while some of the earliest flightless ratites occur in Europe. 
The vicariance hypothesis relies on the assumption southern landmasses were more relevant to ratite evolution than the northern ones. Tertiary radiation hypothesis Feduccia (1995) emphasized the extinction event at the Cretaceous-Paleogene boundary as the probable engine of diversification in the Neornithes, picturing only one or very few lineages of birds surviving the end of the Cretaceous. He also noted that birds around the world had developed ratite-like anatomies when they became flightless, and saw the affinities of modern ratites, especially kiwis, as ambiguous. In this emphasis on the Cenozoic, rather than Cretaceous period, as the time of basal divergences between neornithines, he follows Olson. Houde demonstrated that the Lithornithiformes, a group of flying birds that were common in the Cenozoic of the Northern Hemisphere, were also paleognaths. He argues that the lithornithiform bird Paleotis, known from fossils in Denmark (Northern Hemisphere), shared unique anatomical features of the skull that make it a member of the same order as the ostriches. He also argued that the kiwis should not have reached New Zealand, which moved away from the mainland in the Early Cretaceous, if their ancestor was flightless; this claim at least has been vindicated by the discovery of the possibly volant Proapteryx. He therefore deduced that lithornithiform ancestors could have reached the southern continents some 30 to 40 million years ago, and evolved flightless forms which are today's ratites. This hypothesis is contradicted by some later molecular studies, but supported by others. Relationship to humans The human lineage evolved in Africa in sympatry with ostriches. After Homo appeared and left Africa for other continents, they continued to encounter ostriches in Arabia and much of southern and central Asia. No contact was made with other palaeognath genera until the Papuan and Aboriginal Australian peoples populated New Guinea and Australia. Subsequently, Paleo-Indians encountered tinamous and rheas in Central and South America, Austronesian settlers encountered and exterminated the elephant birds of Madagascar, and the Maori did likewise to the moa of New Zealand. The giant ratites of Madagascar and New Zealand had evolved with little or no exposure to mammalian predators, and were unable to cope with predation by humans; many other oceanic species met the same fate (as apparently had the Australian dromornithids earlier). Worldwide, most giant birds became extinct by the end of the 18th century and most surviving species are now endangered and/or are decreasing in population. However, the co-existence between elephant birds and human beings appears to have been longer than previously thought. Today, ratites such as the ostrich are farmed and sometimes even kept as pets. Ratites play a large role in human culture; they are farmed, eaten, raced, protected, and kept in zoos.
Biology and health sciences
General articles
null
3585815
https://en.wikipedia.org/wiki/Health%20effects%20of%20tobacco
Health effects of tobacco
Tobacco products, especially when smoked or used orally, have serious negative effects on human health. Smoking and smokeless tobacco use are the single greatest causes of preventable death globally. Half of tobacco users die from complications related to such use. Current smokers are estimated to die an average of 10 years earlier than non-smokers. The World Health Organization estimates that, in total, about 8 million people die each year from tobacco-related causes, including 1.3 million non-smokers due to secondhand smoke. Tobacco use is further estimated to have caused 100 million deaths in the 20th century. Tobacco smoke contains over 70 chemicals known to cause cancer (carcinogens). It also contains nicotine, a highly addictive psychoactive drug. When tobacco is smoked, the nicotine causes physical and psychological dependency. Cigarettes sold in least developed countries have higher tar content and are less likely to be filtered, increasing vulnerability to tobacco smoking–related diseases in these regions. Tobacco use most commonly leads to diseases affecting the heart, liver, and lungs. Smoking is a major risk factor for several conditions, namely pneumonia, heart attacks, strokes, chronic obstructive pulmonary disease (COPD)—including emphysema and chronic bronchitis—and multiple cancers (particularly lung cancer, cancers of the larynx and mouth, bladder cancer, and pancreatic cancer). It is also responsible for peripheral arterial disease and high blood pressure. The effects vary depending on how frequently and for how many years a person smokes. Smoking earlier in life and smoking cigarettes higher in tar increase the risk of these diseases. Additionally, environmental tobacco smoke, known as secondhand smoke, has manifested harmful health effects in people of all ages. Tobacco use is also a significant risk factor in miscarriages among pregnant smokers. It contributes to a number of other health problems for the fetus, such as premature birth and low birth weight, and increases the chance of sudden infant death syndrome (SIDS) by 1.4 to 3 times. The incidence of erectile dysfunction is approximately 85 percent higher in male smokers compared to non-smokers. Many countries have taken measures to control the consumption of tobacco by restricting its usage and sales. They have printed warning messages on packaging. Moreover, smoke-free laws that ban smoking in public places like workplaces, theaters, bars, and restaurants have been enacted to reduce exposure to secondhand smoke. Tobacco taxes inflating the price of tobacco products have also been imposed. In the late 1700s and the 1800s, the idea that tobacco use caused certain diseases, including mouth cancers, was initially accepted by the medical community. In the 1880s, automation dramatically reduced the cost of cigarettes, cigarette companies greatly increased their marketing, and use expanded. From the 1890s onwards, associations of tobacco use with cancers and vascular disease were regularly reported. By the 1930s, multiple researchers concluded that tobacco use caused cancer and that tobacco users lived substantially shorter lives. Further studies were published in Nazi Germany in 1939 and 1943, and one in the Netherlands in 1948. However, widespread attention was first drawn in 1950 by researchers from the United States and the United Kingdom, but their research was widely criticized. Follow-up studies in the early 1950s found that smokers died faster and were more likely to die of lung cancer and cardiovascular disease.
These results were accepted in the medical community and publicized among the general public in the mid-1960s.
Biology and health sciences
Drugs and pharmacology
null
1271805
https://en.wikipedia.org/wiki/Organochlorine%20chemistry
Organochlorine chemistry
Organochlorine chemistry is concerned with the properties of organochlorine compounds, or organochlorides, organic compounds that contain one or more carbon–chlorine bonds. The chloroalkane class (alkanes with one or more hydrogens substituted by chlorine) includes common examples. The wide structural variety and divergent chemical properties of organochlorides lead to a broad range of names, applications, and properties. Organochlorine compounds have wide use in many applications, though some are of profound environmental concern, with TCDD being one of the most notorious. Organochlorides such as trichloroethylene, tetrachloroethylene, dichloromethane and chloroform are commonly used as solvents and are referred to as "chlorinated solvents". Physical and chemical properties Chlorination modifies the physical properties of hydrocarbons in several ways. These compounds are typically denser than water due to the higher atomic weight of chlorine versus hydrogen. They have higher boiling and melting points compared to related hydrocarbons. Flammability decreases with increasing chlorine substitution in hydrocarbons. Aliphatic organochlorides are often alkylating agents as chlorine can act as a leaving group, which can result in cellular damage. Natural occurrence Many organochlorine compounds have been isolated from natural sources ranging from bacteria to humans. Chlorinated organic compounds are found in nearly every class of biomolecules and natural products including alkaloids, terpenes, amino acids, flavonoids, steroids, and fatty acids. Dioxins, which are of particular concern to human and environmental health, are produced in the high temperature environment of forest fires and have been found in the preserved ashes of lightning-ignited fires that predate synthetic dioxins. In addition, a variety of simple chlorinated hydrocarbons including dichloromethane, chloroform, and carbon tetrachloride have been isolated from marine algae. A majority of the chloromethane in the environment is produced naturally by biological decomposition, forest fires, and volcanoes. The natural organochloride epibatidine, an alkaloid isolated from tree frogs, has potent analgesic effects and has stimulated research into new pain medication. However, because of its unacceptable therapeutic index, it is no longer a subject of research for potential therapeutic uses. The frogs obtain epibatidine through their diet and sequester it into their skin. Likely dietary sources are beetles, ants, mites, and flies. Preparation From chlorine Alkanes and aryl alkanes may be chlorinated under free radical conditions, with UV light. However, the extent of chlorination is difficult to control. Aryl chlorides may be prepared by the Friedel-Crafts halogenation, using chlorine and a Lewis acid catalyst. The haloform reaction, using chlorine and sodium hydroxide, is also able to generate alkyl halides from methyl ketones and related compounds. Chloroform was formerly produced thus. Chlorine adds to the multiple bonds on alkenes and alkynes as well, giving di- or tetra-chloro compounds. Reaction with hydrogen chloride Alkenes react with hydrogen chloride (HCl) to give alkyl chlorides. For example, the industrial production of chloroethane proceeds by the reaction of ethylene with HCl: H2C=CH2 + HCl → CH3CH2Cl In oxychlorination, hydrogen chloride instead of the more expensive chlorine is used for the same purpose: 2 CH2=CH2 + 4 HCl + O2 → 2 ClCH2CH2Cl + 2 H2O.
Secondary and tertiary alcohols react with hydrogen chloride to give the corresponding chlorides. In the laboratory, the related reaction uses zinc chloride in concentrated hydrochloric acid: R–OH + HCl → R–Cl + H2O (with ZnCl2 as catalyst, on heating). Called the Lucas reagent, this mixture was once used in qualitative organic analysis for classifying alcohols. Other chlorinating agents Alkyl chlorides are most easily prepared by treating alcohols with thionyl chloride (SOCl2) or phosphorus pentachloride (PCl5), but also commonly with sulfuryl chloride (SO2Cl2) and phosphorus trichloride (PCl3): ROH + SOCl2 → RCl + SO2 + HCl 3 ROH + PCl3 → 3 RCl + H3PO3 ROH + PCl5 → RCl + POCl3 + HCl In the laboratory, thionyl chloride is especially convenient, because the byproducts are gaseous. Alternatively, the Appel reaction can be used. Reactions Alkyl chlorides are versatile building blocks in organic chemistry. While alkyl bromides and iodides are more reactive, alkyl chlorides tend to be less expensive and more readily available. Alkyl chlorides readily undergo attack by nucleophiles. Heating alkyl halides with sodium hydroxide or water gives alcohols. Reaction with alkoxides or aryloxides gives ethers in the Williamson ether synthesis; reaction with thiols gives thioethers. Alkyl chlorides readily react with amines to give substituted amines. Alkyl chlorides are substituted by softer halides such as the iodide in the Finkelstein reaction. Reactions with other pseudohalides such as azide, cyanide, and thiocyanate are possible as well. In the presence of a strong base, alkyl chlorides undergo dehydrohalogenation to give alkenes or alkynes. Alkyl chlorides react with magnesium to give Grignard reagents, transforming an electrophilic compound into a nucleophilic compound. The Wurtz reaction reductively couples two alkyl halides using sodium. Some organochlorides (such as ethyl chloride) may be used as alkylating agents. Tetraethyllead was produced from ethyl chloride and a sodium–lead alloy: 4 NaPb + 4 CH3CH2Cl → (CH3CH2)4Pb + 4 NaCl + 3 Pb Reductive dechlorination is rarely useful in chemical synthesis, but is a key step in the biodegradation of several organochlorine persistent pollutants. Applications Vinyl chloride The largest application of organochlorine chemistry is the production of vinyl chloride. The annual production in 1985 was around 13 million tons, almost all of which was converted into polyvinylchloride (PVC). Chlorinated solvents Most low molecular weight and liquid chlorinated hydrocarbons such as dichloromethane, chloroform, carbon tetrachloride, dichloroethylene, trichloroethylene, tetrachloroethylene, 1,2-Dichloroethane and hexachlorobutadiene are useful solvents. These solvents tend to be relatively non-polar; they are therefore immiscible with water and effective in cleaning applications such as degreasing and dry cleaning for their ability to dissolve oils and grease. They are mostly nonflammable or have very low flammability. Some, like carbon tetrachloride and 1,1,1-trichloroethane, have been phased out due to their toxicity or negative environmental impact (ozone depletion by 1,1,1-trichloroethane). Chloromethanes Several billion kilograms of chlorinated methanes are produced annually, mainly by chlorination of methane: CH4 + x Cl2 → CH4−xClx + x HCl The most important is dichloromethane, which is mainly used as a solvent. Chloromethane is a precursor to chlorosilanes and silicones.
Historically significant as an anaesthetic, but smaller in scale, is chloroform, mainly a precursor to chlorodifluoromethane (CHClF2) and tetrafluoroethene, which is used in the manufacture of Teflon. Pesticides The two main groups of organochlorine insecticides are the DDT-type compounds and the chlorinated alicyclics. Their mechanism of action differs slightly. The DDT-like compounds work on the peripheral nervous system. At the axon's sodium channel, they prevent gate closure after activation and membrane depolarization. Sodium ions leak through the nerve membrane and create a destabilizing negative "afterpotential" with hyperexcitability of the nerve. This leakage causes repeated discharges in the neuron either spontaneously or after a single stimulus. Chlorinated cyclodienes include aldrin, dieldrin, endrin, heptachlor, chlordane and endosulfan. A 2- to 8-hour exposure leads to depressed central nervous system (CNS) activity, followed by hyperexcitability, tremors, and then seizures. The mechanism of action is the insecticide binding at the GABAA site in the GABA-gated chloride channel (IRAC group 2A), which inhibits chloride flow into the nerve. Other examples include dicofol, mirex, kepone, and pentachlorophenol. These can be either hydrophilic or hydrophobic, depending on their molecular structure. Insulators Polychlorinated biphenyls (PCBs) were once commonly used electrical insulators and heat transfer agents. Their use has generally been phased out due to health concerns. PCBs were replaced by polybrominated diphenyl ethers (PBDEs), which bring similar toxicity and bioaccumulation concerns. Toxicity Some types of organochlorides have significant toxicity to plants or animals, including humans. Dioxins, produced when organic matter is burned in the presence of chlorine, are persistent organic pollutants which pose dangers when they are released into the environment, as are some insecticides (such as DDT). For example, DDT, which was widely used to control insects in the mid-20th century, also accumulates in food chains, as do its metabolites DDE and DDD, and causes reproductive problems (e.g., eggshell thinning) in certain bird species. DDT also posed further issues to the environment as it is extremely mobile, traces even being found in Antarctica despite the chemical never having been used there. Some organochlorine compounds, such as sulfur mustards, nitrogen mustards, and Lewisite, are even used as chemical weapons due to their toxicity. However, the presence of chlorine in an organic compound does not ensure toxicity. Some organochlorides are considered safe enough for consumption in foods and medicines. For example, peas and broad beans contain the natural chlorinated plant hormone 4-chloroindole-3-acetic acid (4-Cl-IAA), and the sweetener sucralose (Splenda) is widely used in diet products. At least 165 organochlorides have been approved worldwide for use as pharmaceutical drugs, including the natural antibiotic vancomycin, the antihistamine loratadine (Claritin), the antidepressant sertraline (Zoloft), the anti-epileptic lamotrigine (Lamictal), and the inhalation anesthetic isoflurane.
Physical sciences
Halocarbons
Chemistry
1272738
https://en.wikipedia.org/wiki/Tunneling%20protocol
Tunneling protocol
In computer networks, a tunneling protocol is a communication protocol which allows for the movement of data from one network to another. They can, for example, allow private network communications to be sent across a public network (such as the Internet), or for one network protocol to be carried over an incompatible network, through a process called encapsulation. Because tunneling involves repackaging the traffic data into a different form, perhaps with encryption as standard, it can hide the nature of the traffic that is run through a tunnel. Tunneling protocols work by using the data portion of a packet (the payload) to carry the packets that actually provide the service. Tunneling uses a layered protocol model such as those of the OSI or TCP/IP protocol suite, but usually violates the layering when using the payload to carry a service not normally provided by the network. Typically, the delivery protocol operates at an equal or higher level in the layered model than the payload protocol. Uses A tunneling protocol may, for example, allow a foreign protocol to run over a network that does not support that particular protocol, such as running IPv6 over IPv4. Another important use is to provide services that are impractical or unsafe to be offered using only the underlying network services, such as providing a corporate network address to a remote user whose physical network address is not part of the corporate network. Circumventing firewall policy Users can also use tunneling to "sneak through" a firewall, using a protocol that the firewall would normally block, but "wrapped" inside a protocol that the firewall does not block, such as HTTP. If the firewall policy does not specifically exclude this kind of "wrapping", this trick can function to get around the intended firewall policy (or any set of interlocked firewall policies). Another HTTP-based tunneling method uses the HTTP CONNECT method/command. A client issues the HTTP CONNECT command to an HTTP proxy. The proxy then makes a TCP connection to a particular server:port, and relays data between that server:port and the client connection. Because this creates a security hole, CONNECT-capable HTTP proxies commonly restrict access to the CONNECT method. The proxy allows connections only to specific ports, such as 443 for HTTPS. Other tunneling methods able to bypass network firewalls make use of different protocols such as DNS, MQTT, SMS. Technical overview As an example of network layer over network layer, Generic Routing Encapsulation (GRE), a protocol running over IP (IP protocol number 47), often serves to carry IP packets, with RFC 1918 private addresses, over the Internet using delivery packets with public IP addresses. In this case, the delivery and payload protocols are the same, but the payload addresses are incompatible with those of the delivery network. It is also possible to establish a connection using the data link layer. The Layer 2 Tunneling Protocol (L2TP) allows the transmission of frames between two nodes. A tunnel is not encrypted by default: the TCP/IP protocol chosen determines the level of security. SSH uses port 22 to enable data encryption of payloads being transmitted over a public network (such as the Internet) connection, thereby providing VPN functionality. IPsec has an end-to-end Transport Mode, but can also operate in a tunneling mode through a trusted security gateway. 
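As a concrete illustration of the HTTP CONNECT mechanism described above, the following sketch performs the CONNECT handshake against an HTTP proxy using only Python's standard socket module. It is a minimal illustration rather than any particular product's implementation; the proxy address and port and the destination host are hypothetical placeholders.

```python
# Minimal sketch of the HTTP CONNECT handshake (illustration only).
# The proxy address and the destination host below are hypothetical.
import socket

PROXY_HOST, PROXY_PORT = "proxy.example.net", 3128    # hypothetical proxy
TARGET_HOST, TARGET_PORT = "www.example.org", 443     # hypothetical destination


def open_connect_tunnel():
    """Ask an HTTP proxy to open a raw TCP tunnel to TARGET_HOST:TARGET_PORT."""
    sock = socket.create_connection((PROXY_HOST, PROXY_PORT))
    request = (
        f"CONNECT {TARGET_HOST}:{TARGET_PORT} HTTP/1.1\r\n"
        f"Host: {TARGET_HOST}:{TARGET_PORT}\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))
    # Read the proxy's reply; a 2xx status line means the tunnel is up and
    # every byte sent from now on is relayed verbatim to the destination.
    reply = sock.recv(4096).decode("ascii", errors="replace")
    status_line = reply.split("\r\n", 1)[0]
    if " 200" not in status_line:
        sock.close()
        raise ConnectionError("proxy refused CONNECT: " + status_line)
    return sock  # caller can now run TLS (or any protocol) through the tunnel


if __name__ == "__main__":
    tunnel = open_connect_tunnel()
    print("tunnel established via", PROXY_HOST)
    tunnel.close()
```

If the proxy answers with a 2xx status line, every subsequent byte is relayed verbatim to the destination, which is why CONNECT-capable proxies typically restrict the destination ports they are willing to tunnel to.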
To understand a particular protocol stack imposed by tunneling, network engineers must understand both the payload and delivery protocol sets. Common tunneling protocols IP in IP (IP protocol 4): IP in IPv4/IPv6 SIT/IPv6 (IP protocol 41): IPv6 in IPv4/IPv6 GRE (IP protocol 47): Generic Routing Encapsulation OpenVPN (UDP port 1194) SSTP (TCP port 443): Secure Socket Tunneling Protocol IPsec (IP protocols 50 and 51): Internet Protocol Security L2TP (UDP port 1701): Layer 2 Tunneling Protocol L2TPv3 (IP protocol 115): Layer 2 Tunneling Protocol version 3 VXLAN (UDP port 4789): Virtual Extensible Local Area Network PPTP (TCP port 1723 for control, GRE for data): Point-to-Point Tunneling Protocol PPPoE (EtherType 0x8863 for control, 0x8864 for data): Point-to-Point Protocol over Ethernet GENEVE WireGuard (UDP, dynamic port) TCP meltdown problem Tunneling a TCP-encapsulating payload (such as PPP) over a TCP-based connection (such as SSH's port forwarding) is known as "TCP-over-TCP", and doing so can induce a dramatic loss in transmission performance, known as the TCP meltdown problem. For this reason, virtual private network (VPN) software may instead use a protocol simpler than TCP for the tunnel connection. TCP meltdown occurs when a TCP connection is stacked on top of another. The underlying layer may detect a problem and attempt to compensate, the layer above it then overcompensates because of that, and this overcompensation causes delays and degraded transmission performance. Secure Shell tunneling A Secure Shell (SSH) tunnel consists of an encrypted tunnel created through an SSH protocol connection. Users may set up SSH tunnels to transfer unencrypted traffic over a network through an encrypted channel. It is a software-based approach to network security and the result is transparent encryption. For example, Microsoft Windows machines can share files using the Server Message Block (SMB) protocol, a non-encrypted protocol. If one were to mount a Microsoft Windows file system remotely through the Internet, someone snooping on the connection could see transferred files. To mount the Windows file system securely, one can establish an SSH tunnel that routes all SMB traffic to the remote fileserver through an encrypted channel. Even though the SMB protocol itself contains no encryption, the encrypted SSH channel through which it travels offers security. Once an SSH connection has been established, the tunnel starts with SSH listening on a port on the remote or local host. Any connections to it are forwarded to the specified address and port, originating from the opposing (remote or local, as previously) host. The TCP meltdown problem described above is often not an issue when using OpenSSH's port forwarding, because many use cases do not entail TCP-over-TCP tunneling; the meltdown is avoided because the OpenSSH client processes the local, client-side TCP connection in order to get to the actual payload that is being sent, and then sends that payload directly through the tunnel's own TCP connection to the server side, where the OpenSSH server similarly "unwraps" the payload in order to "wrap" it up again for routing to its final destination. Naturally, this wrapping and unwrapping also occurs in the reverse direction of the bidirectional tunnel. SSH tunnels provide a means to bypass firewalls that prohibit certain Internet services, so long as a site allows outgoing connections. 
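The SMB example above can be made concrete with a short sketch of OpenSSH local port forwarding. The snippet below simply launches the stock ssh client from Python with the -N (no remote command) and -L (local forward) options; the gateway, file server, user name, and local listening port are hypothetical placeholders, and an OpenSSH client is assumed to be installed.

```python
# Sketch: carry SMB traffic inside an SSH tunnel, as in the file-sharing
# example above. Host names, the user name, and the local port are
# hypothetical placeholders; a stock OpenSSH client is assumed.
import subprocess

LOCAL_PORT = 44445                        # local listening port for the tunnel
FILESERVER = "fileserver.internal"        # SMB server reachable from the SSH host
SSH_GATEWAY = "user@gateway.example.com"  # hypothetical SSH gateway

# -L sets up local port forwarding: connections to localhost:44445 travel
# over the encrypted SSH channel and are delivered by the remote SSH server
# to fileserver.internal:445 (the SMB port). -N means "no remote command",
# so the process does nothing except forward traffic.
tunnel = subprocess.Popen([
    "ssh", "-N",
    "-L", f"{LOCAL_PORT}:{FILESERVER}:445",
    SSH_GATEWAY,
])

# ... use an SMB client against localhost:44445 while the tunnel is up ...
# tunnel.terminate()   # shut the forwarder down when finished
```

Pointing an SMB client at localhost:44445 then reaches the file server over the encrypted channel, and the same mechanism is what lets SSH tunnels slip services past a firewall whenever outgoing SSH connections are allowed.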
For example, an organization may prohibit a user from accessing Internet web pages (port 80) directly without passing through the organization's proxy filter (which provides the organization with a means of monitoring and controlling what the user sees through the web). But users may not wish to have their web traffic monitored or blocked by the organization's proxy filter. If users can connect to an external SSH server, they can create an SSH tunnel to forward a given port on their local machine to port 80 on a remote web server. To access the remote web server, users would point their browser to the local port at http://localhost/ Some SSH clients support dynamic port forwarding that allows the user to create a SOCKS 4/5 proxy. In this case, users can configure their applications to use their local SOCKS proxy server. This gives more flexibility than creating an SSH tunnel to a single port as previously described. SOCKS can free the user from the limitations of connecting only to a predefined remote port and server. If an application does not support SOCKS, a proxifier can be used to redirect the application to the local SOCKS proxy server. Some proxifiers, such as Proxycap, support SSH directly, thus avoiding the need for an SSH client. Recent versions of OpenSSH even allow the creation of layer 2 or layer 3 tunnels if both ends have enabled such tunneling capabilities. This creates tun (layer 3, default) or tap (layer 2) virtual interfaces on both ends of the connection. This allows normal network management and routing to be used, and when used on routers, the traffic for an entire subnetwork can be tunneled. A pair of tap virtual interfaces function like an Ethernet cable connecting both ends of the connection and can join kernel bridges. Cyberattacks based on tunneling Over the years, tunneling and data encapsulation in general have frequently been adopted for malicious purposes, in order to communicate covertly outside of a protected network. In this context, known tunnels involve protocols such as HTTP, SSH, DNS, and MQTT.
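The dynamic port forwarding mentioned above can be sketched the same way. Here the stock OpenSSH client is asked, via its -D option, to act as a local SOCKS proxy; the server name and the port are hypothetical placeholders, and this is an illustration of the idea rather than a hardened setup.

```python
# Sketch: dynamic port forwarding, exposing a local SOCKS proxy as described
# above. The server name and the port are hypothetical placeholders; a stock
# OpenSSH client is assumed.
import subprocess

SOCKS_PORT = 1080
SSH_SERVER = "user@ssh.example.com"   # hypothetical external SSH server

# -D makes the ssh client accept SOCKS connections on the given local port;
# each accepted connection is carried over the encrypted channel and opened
# from the remote server to whatever host and port the application requested,
# so no single remote destination has to be fixed in advance.
proxy = subprocess.Popen(["ssh", "-N", "-D", str(SOCKS_PORT), SSH_SERVER])

# Applications are then configured to use socks5://localhost:1080, e.g.
#   curl --socks5-hostname localhost:1080 http://intranet.example/
# proxy.terminate()   # stop the proxy when finished
```

Applications that cannot speak SOCKS natively would be pointed at this proxy through a proxifier, as described above.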
Technology
Networks
null
1273759
https://en.wikipedia.org/wiki/Alosa
Alosa
Alosa is a genus of fish, the river herrings, in the family Alosidae. Along with other genera in the subfamily Alosinae, they are generally known as shads. They are distinct from other herrings by having a deeper body and spawning in rivers. Several species can be found on both sides of the Atlantic Ocean and the Mediterranean Sea. Also, several taxa occur in the brackish-water Caspian Sea and the Black Sea basin. Many are found in fresh water during spawning and some are only found in landlocked fresh water. Fossil record These fishes lived from the Eocene to Quaternary (from 55 million years ago to now). Fossils have been found in Canada, the United States, Greece, Kazakhstan, Azerbaijan, Hungary, Romania, and Italy. Appearance Alosa species are generally dark on the back and top of the head, with blue, violet, or greenish tints. Some can be identified as having a grey or green back. Spots are commonly found behind the head, and the fins may vary from species to species or individually. Most species of Alosa weigh or less, with A. pontica and A. fallax weighing up to 2 kg, and A. alosa can exceed 3–4 kg. Biology Shads are thought to be unique among the fishes in having evolved an ability to detect ultrasound (at frequencies above 20 kHz, which is the limit of human hearing). This was first discovered by fisheries biologists studying a type of shad known as blueback herring, and was later verified in laboratory studies of hearing in American shad. This ability is thought to help them avoid dolphins that find prey using echolocation. Alosa species are generally pelagic. They are mostly anadromous or semianadromous with the exception of strictly freshwater landlocked species. Alosa species are generally migratory and schooling fish. Males usually mature about a year before females; they spawn in the late spring to summer. Most individuals die shortly after spawning. Alosa species seemingly can change readily to adapt to their environments, as species are found in a wide range of temperatures and waters. Lifecycle and reproduction As Alosa species are generally anadromous, they face various obstacles to survival. They may have to pass through numerous barriers and waters to get to either their spawning grounds or normal habitats (the sea in most cases). Estuaries are a major factor in numerous Alosa species' migrations. Estuaries can be highly variable and complex environments contributing to fluctuating biological interactions, with shifts in osmolarity, food sources, predators, etc. Since many adult Alosa species die after spawning, only the young generally migrate to the sea from the spawning grounds. Duration of migration varies among fish, but can greatly affect survival. Reproduction varies by species. Studies done on Alosa in Iranian waters have shown that spawning varies in time, place, and temperature of the waters they inhabit. Fecundity may also vary. Species are known to spawn as early as April or as late as August. Temperatures range from about 11 to 27 °C. Fecundity can range from 20,000 to 312,000 eggs. Eggs are pelagic. Geography and temperature are important environmental factors in egg and young-of-year development. The lifespan of Alosa species can be up to 10 years, but this is generally uncommon, as many die after spawning. Systematics The systematics and distribution of Alosa shads are complex. The genus inhabits a wide range of habitats, and many taxa are migratory. 
A few forms are landlocked, including one from Killarney in Ireland, two from lakes in northern Italy, and two in Greece. Several species are native to the Black and Caspian Seas. Alosa species of the Caspian are systemically characterized by the number of rakers on the first gill arch. They are classified as being "multirakered", "medium-rakered", or "oligorakered". The multirakered are primarily plankton feeders, the oligorakered have large rakers and are predators, and the medium-rakered generally consume a mixed diet. Most current species of the genus Alosa in North America can be found in Florida, whereas the distribution of most of them is broader. Morphology is notoriously liable to adapt to changing food availability in these fish. Several taxa seem to have evolved quite recently, making molecular analyses difficult. In addition, hybridization may be a factor in shad phylogeny. Nonetheless, some trends are emerging. The North American species except the American shad A. sapidissima can probably be separated in a subgenus Pomolobus. Conversely, the proposed genus (or subgenus) Caspialosa for the Caspian Sea forms is rejected due to paraphyly. Species by geographical origin North America Alosa aestivalis (Mitchill, 1814) (blueback herring) Alosa alabamae D. S. Jordan and Evermann, 1896 (Alabama shad) Alosa chrysochloris (Rafinesque, 1820) (skipjack shad) Alosa mediocris (Mitchill, 1814) (hickory shad) Alosa pseudoharengus (A. Wilson, 1811) (alewife) Alosa sapidissima (A. Wilson, 1811) (American shad) Western Europe and the Mediterranean Alosa agone (Scopoli, 1786) (agone) Alosa algeriensis Regan, 1916 (North African shad) Alosa alosa (Linnaeus, 1758) (allis shad) Alosa fallax (Lacépède, 1803) (twait shad) Alosa killarnensis Regan, 1916 (Killarney shad) Caspian Sea, Black Sea, the Balkans Alosa braschnikowi (Borodin, 1904) (Caspian marine shad) Alosa caspia (Eichwald, 1838) A. c. caspia (Eichwald, 1838) (Caspian shad) A. c. knipowitschi (Iljin, 1927) (Enzeli shad) A. c. persica (Iljin, 1927) (Astrabad shad) Alosa curensis (Suvorov, 1907) (Kura shad) Alosa immaculata E. T. Bennett, 1835 (Pontic shad) Alosa kessleri (Grimm, 1887) (Caspian anadromous shad) Alosa macedonica (Vinciguerra, 1921) (Macedonia shad) Alosa maeotica (Grimm, 1901) (Black Sea shad) Alosa saposchnikowii (Grimm, 1887) (Saposhnikovi shad) Alosa sphaerocephala (L. S. Berg, 1913) (Agrakhan shad) Alosa tanaica (Grimm, 1901) (Azov shad) Alosa vistonica Economidis and Sinis, 1986 (Thracian shad) Alosa volgensis (L. S. Berg, 1913) (Volga shad) Recreational fishing Commercial fishing Management Shad populations have been in decline for years due to spawning areas blocked by dams, habitat destruction, pollution, and overfishing. Management of shad has called for more conservative regulations, and policies to help the species have lower fishing mortality. Political significance Shad serve a peculiar symbolic role in Virginia state politics. On the year of every gubernatorial election, would-be candidates, lobbyists, campaign workers, and reporters gather in the town of Wakefield, Virginia, for shad planking. American shad served as the focal point of John McPhee's book The Founding Fish. Culinary use The roe, or more properly the entire engorged uterus of the American shad—filled with ripening eggs, sautéed in clarified butter and garnished with parsley and a slice of lemon—is considered a great delicacy, and commands high prices when available.
Biology and health sciences
Clupeiformes
Animals
1274169
https://en.wikipedia.org/wiki/Squat%20%28exercise%29
Squat (exercise)
A squat is a strength exercise in which the trainee lowers their hips from a standing position and then stands back up. During the descent, the hip and knee joints flex while the ankle joint dorsiflexes; conversely the hip and knee joints extend and the ankle joint plantarflexes when standing up. Squats also help the hip muscles. Squats are considered a vital exercise for increasing the strength and size of the lower body muscles as well as developing core strength. The primary agonist muscles used during the squat are the quadriceps femoris, the adductor magnus, and the gluteus maximus. The squat also isometrically uses the erector spinae and the abdominal muscles, among others. The squat is one of the three lifts in the strength sport of powerlifting, together with the deadlift and the bench press. It is also considered a staple exercise in many popular recreational exercise programs. In powerlifting, it is categorized as raw squats or equipped squats which involves wearing a squat suit. Form The squat begins from a standing position. Weight is often added and is typically in the form of a loaded barbell. Dumbbells and kettlebells may also be used. When a barbell is used, it may be braced across the upper trapezius muscle, which is termed a high bar squat, or held lower across the back and rear deltoids, termed a low bar squat. Wherever the bar is positioned on the back, various torso bracing actions are taken to ensure that it does not come into direct contact with the spine as this can lead to discomfort and injury. This can be a problem for new squatters who squat in a high bar style as they may not have enough muscle mass to form a cushion for the bar and prevent it from applying pressure directly to their spine. A barbell pad can be used to help alleviate pressure or a low bar style can be used. The squatting movement is initiated by moving the hips back and bending the knees and hips to lower the torso and accompanying weight, then returning to the upright position. Squats can be performed to varying depths. The competition standard is for the crease of the hip (top surface of the leg at the hip joint) to fall below the top of the knee; this is colloquially known as "parallel" depth. Although it may be confusing, many other definitions for "parallel" depth abound, none of which represents the standard in organized powerlifting. From shallowest to deepest, these other standards are: bottom of hamstring parallel to the ground; the hip joint itself below the top of the knee, or femur parallel to the floor; and the top of the upper thigh (i.e., top of the quadriceps) below the top of the knee. Squatting below parallel is considered a full or deep squat, while squatting above it qualifies as shallow. Though the forces on the ACL and PCL decrease at high flexion, compressive forces on the menisci and articular cartilages in the knee peak at these same high angles. This makes the relative safety of deep versus shallow squats difficult to determine. As the body descends, the hips and knees undergo flexion, the ankle extends (dorsiflexes) and muscles around the joint contract eccentrically, reaching maximal contraction at the bottom of the movement while slowing and reversing descent. The muscles around the hips provide the power out of the bottom. If the knees slide forward or cave in then tension is taken from the hamstrings, hindering power on the ascent. Returning to vertical contracts the muscles concentrically, and the hips and knees undergo extension while the ankle plantarflexes. 
Common errors of squat form include descending too rapidly and flexing the torso too far forward. Rapid descent risks being unable to complete the lift or causing injury. This occurs when the descent causes the squatting muscles to relax and tightness at the bottom is lost as a result. Over-flexing the torso greatly increases the forces exerted on the lower back, risking a spinal disc herniation. Another error is when the knee is not aligned with the direction of the toes, entering a valgus position, which can adversely stress the knee joint. An additional common error is the raising of heels off the floor, which reduces the contribution of the gluteus muscles. Muscles used Agonist muscles Quadriceps femoris Vastus lateralis Vastus medialis oblique Gluteus maximus Adductor magnus Soleus Stabilizing muscles Erector spinae Rectus abdominis Internal and external obliques Hamstrings (Biceps femoris, semitendinosis, semimembranosis) Gluteus medius and minimus Gastrocnemius Equipment Various types of equipment can be used to perform squats. A power cage can be used to reduce risk of injury and eliminate the need for a spotting partner. By putting the bar on a track, the Smith machine reduces the role of hip movement in the squat and in this sense resembles a leg press. The monolift rack allows an athlete to perform a squat without having to take a couple of steps back with weight on as opposed to conventional racks. Not many powerlifting federations allow monolift in competitions (WPO, GPC, IPO). Other equipment used can include a weight lifting belt to support the torso and boards to wedge beneath the ankles to improve stability and allow a deeper squat (weightlifting shoes also have wooden wedges built into the sole to achieve the same effect). Wrist straps are another piece of recommended equipment; they support the wrist and help to keep it in a straightened position. They should be wrapped around the wrist, above and below the joint, thus limiting movement of the joint. Heel wedges and related equipment are discouraged by some as they are thought to worsen form over the long term. The barbell can also be cushioned with a special padded sleeve, called a barbell pad. This helps to reduce pressure from the steel barbell on the back. Chains and thick elastic bands can be attached to either end of the barbell in order to vary resistance at different phases of the movement. This may be done to increase resistance in the stronger upper phase of the movement in order to better meet a person's 1RM for that phase. Bands can also be used to reduce resistance in the lower weaker phase by being hung from a power rack and the barbell being increasingly supported by them as it is lowered. This can help someone to overcome a 'sticking' point. A squat performed using these techniques is called a variable resistance squat. Variants The squat has a number of variants, some of which can be combined: Barbell Back squat – the bar is held on the back of the body upon the upper trapezius muscle, near to the base of the neck. Alternatively, it may be held lower across the upper back and rear deltoids. In powerlifting the barbell is often held in a lower position in order to create a lever advantage, while in weightlifting it is often held in a higher position which produces a posture closer to that of the clean and jerk. These variations are called low bar (or powerlifting squat) and high bar (or Olympic squat), respectively. 
Sumo squat – A variation of the back squat where the feet are placed slightly wider than shoulder width apart and the feet pointed outwards. Box squat – at the bottom of the motion the squatter will sit down on a bench or other type of support then rise again. The box squat is commonly utilized by powerlifters to train the squat. Front squat – the barbell is held in front of the body across the clavicles and deltoids in either a clean grip, as is used in weightlifting, or with the arms crossed and hands placed on top of the barbell. In addition to the muscles used in the back squat, the front squat also uses muscles of the upper back such as the trapezius to support the bar. Hack squat – the barbell is held in the hands just behind the legs; this exercise was first known as Hacke (heel) in Germany. According to European strength sports expert and Germanist Emmanuel Legeard this name was derived from the original form of the exercise where the heels were joined. The hack squat was thus a squat performed the way Prussian soldiers used to click their heels ("Hacken zusammen"). The hack squat was popularized in the English-speaking countries by early 1900s wrestler George Hackenschmidt. It is also called a rear deadlift. It is different from the hack squat performed with the use of a squat machine. Overhead squat – the barbell is held overhead in a wide-arm snatch grip; however, it is also possible to use a closer grip if balance allows. Zercher squat – the barbell is held in the crooks of the arms, on the inside of the elbow. One method of performing this is to deadlift the barbell, hold it against the thighs, squat into the lower portion of the squat, and then hold the bar on the thighs as you position the crook of your arm under the bar and then stand up. This sequence is reversed once the desired number of repetitions has been performed. Named after Ed Zercher, a 1930s strongman. Steinborn squat – named after the traditional strongman Henry 'Milo' Steinborn, this unique and risky variation literally turns the squat sideways. The complex lift requires an athlete to stand the barbell up so that is perpendicular to the ground and position themselves underneath. Then, they'll pull the barbell like a lever down across their upper back and receive the load in the squatted position. From there, the lifter must stand the weight up, perform some squats, and return the barbell to the floor in the same reverse fashion that they picked it up. Deep knee bend on toes – it is similar to a normal back squat only the lifter is positioned on their forefeet and toes, with their heels raised, throughout the repetition. Usually, the weight used is not more than moderate in comparison to a flat footed, heavy back squat. Single leg squat - The single leg squat (SLS), also known as a unilateral squat, involves squatting with one leg instead of two (which is a bilateral squat). Usually the leg which is held off the ground moves behind the person as they squat, but alternatively the person may position it ahead of themselves. Bilateral split squats which significantly increase the work performed by the front leg are sometimes erroneously referred to as single leg squats due to this emphasis. Single leg squats can be used to strengthen a person's stabilizer muscles more so than two legged squats and improve their ability to balance. They can also be used to remove muscle imbalances in the body by ensuring that, when performed alternatively, the right and left leg do the same amount of work. 
In comparison to two footed squats, the barbell weight only needs to be half of what it would be, minus the lifter's weight for the legs to perform the same amount of work i.e. for a 80 kg lifter, lifting 40 kg using only the left leg, means the left leg is lifting the equivalent of what it does in a two footed squat with 160 kg. This means that the single leg squat can be used in rehabilitation programmes where there is a need to avoid heavier loading of the back. Loaded squat jump – the barbell is positioned similarly to a back squat. The exerciser squats down, before moving upwards into a jump, and then landing in approximately the same position. The loaded squat jump is a form of loaded plyometric exercise used to increase explosive power. Variations of this exercise may involve the use of a trap bar or dumbbells. Variable resistance squat – In keeping with variable resistance training in general, a variable resistance squat involves altering the resistance during the movement in order that it better matches, in percentage terms, the respective 1RM for each strength phase the person is moving through i.e. more resistance in the higher stronger phase and less in the weaker lower phase e.g. 60 kg in the lower phase and 90 kg in the higher phase. Such an alteration of resistance can be achieved by the use of heavy chains which are attached to either end of the barbell. The chains are gradually lifted from the floor as the barbell is raised and vice versa when it is lowered. Thick elastic bands which are more stretched in the higher phase and less stretched in the lower phase can also be used. Combining heavier partial reps with lighter full reps can also help to train the stronger and weaker phases of the movement so the percentage of 1RM lifted for each phase respectively is more similar. Training with variable resistance squats is a technique used to increase speed and explosive power. Partial rep – Partial rep squats only move through a partial range of movement when compared with full squats which move through a full range of movement. Full range for a squat usually means the higher stronger phase of a squat's strength phase sequence (strength curve), but may also refer to just squatting for the lower weaker phase. When partial squats are used to strengthen the higher ROM this usually involves significantly increasing the weight in comparison to the weight used for a full squat. The percentage lifted of the stronger higher phase's 1RM can therefore be increased and not limited by the requirement to move through the weaker lower range of movement e.g. a person lifts 100% of his 1RM for the higher stronger phase which is 150 kg. If he did a full squat he would only have been able to do about 66% of his stronger phases 1RM because his 1RM for a full squat, including the weaker lower phase, is 100 kg. Training with heavier partial squats can help to improve general strength and power. It can also be more beneficial for sports and athletics as that ROM is more likely to be required in those activities i.e. it is rare to need to perform a full squat in sport, whereas partial squatting happens frequently. Partial squatting with a heavier weight than a full squat allows for can also help to improve a person's 1RM for a full squat. When partial squatting only the lower phase this is usually to strengthen that relatively weak phase of the lift in order to overcome a sticking point i.e. a point a person gets "stuck" at and finds it difficult to progress past. 
It is commonly recommended that partial squats are best used in conjunction with full squats. Lunge Split squat – an assisted one-legged squat where the non-lifting leg is rested on the ground a few steps behind the lifter, as if it were a static lunge. Bulgarian split squat – performed similarly to a split squat, but the foot of the non-lifting leg is rested on a platform behind the lifter. Other Belt squat – is an exercise performed the same as other squat variations except the weight is attached to a hip belt i.e. a dip belt Goblet squat – a squat performed while holding a kettlebell or dumbbell on to one's chest and abdomen with both hands. Smith squat – a squat using a Smith machine. Machine hack squat – using a squat machine. Trap bar squat – a trap bar is held in the hands while squats are performed. More commonly referred to as "trap bar deadlifts." Monolift squat – a squat using a monolift rack. Safety squat – a squat performed using a safety squat bar which has a camber in the middle, two handles, and padding. The use of a safety squat bar may help to reduce the risk of causing or aggravating an injury. Anderson squat - (aka Pin Squat, Bottoms Up Squat) starting the squat from the bottom position. Body-weight Body-weight or air squat – done with no weight or barbell, often at higher repetitions than other variants. Overhead squat – a non-weight bearing variation of the squat exercise, with the hands facing each other overhead, biceps aligned with the ears, and feet hip-width apart. This exercise is a predictor of total-body flexibility, mobility, and possible lower body dysfunction. Hindu squat – also called a baithak, or a deep knee bend on toes. It is performed without additional weight, and body weight placed on the forefeet and toes with the heels raised throughout; during the movement the knees track far past the toes. The baithak was a staple exercise of ancient Indian wrestlers. It was also used by Bruce Lee in his training regime. It may also be performed with the hands resting on an upturned club or the back of a chair. Jump squat – a plyometrics exercise where the squatter engages in a rapid eccentric contraction and jumps forcefully off the floor at the top of the range of motion. Basic single leg squat – the person stands with one foot on the ground and the other foot raised. They bend their standing leg and move downwards. Their raised leg moves behind them with the knee coming close to the heel of the grounded foot. Due to the extra effort required to balance, one legged squats can help to additionally improve a person's sense of balance. As with other forms of one legged exercise performed alternately, they can also help to mitigate against an excessive strength variation between the legs, as both legs are made to perform the same level of work e.g. in a two legged squat a person's right leg may do 55% of the work and their left leg 45%, which may result in an excessively uneven level of strength developing. By switching between using the right leg and left leg in one legged squats, a person can better ensure that each leg is doing the same level of work i.e. the right or left leg does 100% of the work for each respective one legged squat. Pistol squat – a bodyweight single leg squat done to full depth, while the other leg is extended off the floor and positioned somewhere in front. Sometimes dumbbells, kettlebells or medicine balls are added for resistance. Pistol squats may be performed with the foot flat on the floor or with the heel raised. 
Shrimp squat – also called the flamingo squat, a version of the pistols squat where instead of extending the non-working leg out in front, it is bent and placed behind the working leg while squatting, perhaps held behind in a hand. Shrimp squats may be performed with the foot flat on the floor or with the heel raised. Jockey squat - a half-squat, performed by being balanced on the forefeet throughout the repetition, with fingertips touching across the chest. This squat can be performed quickly and in high repetitions. Sissy squat – the knees travel over the toes, stretching the quadriceps and the body leans backwards. Can be done in a special sissy squat machine, and can also be weighted. Sumo Squat - also known as Plie Squat, in this variation legs are wider than shoulder width. Clinical significance The squat is a large muscle-mass resistance exercise. As such, squats produce acute increases in testosterone (especially in men) and growth hormone (especially in women). Although insulin-like growth factor 1 (IGF-1) is not raised acutely by squat exercise, resistance-trained men and women have higher resting IGF-1. Catecholamines (epinephrine, norepinephrine, and dopamine) are acutely elevated by resistance exercise, such as squats. The squat has been used in clinical settings to strengthen lower body musculature with little or no harm after joint-related injury. Young people may benefit by enhanced athletic performance and reduced injury as they mature, and movement competency can ensure independent living in the elderly. Injury considerations Although the squat has long been a basic element of weight training, it has not been without controversy over its safety. Some trainers claim that squats are associated with injuries to the lumbar spine and knees. Others, however, continue to advocate the squat as one of the best exercises for building muscle and strength. Some coaches maintain that incomplete squats (those terminating above parallel) are both less effective and more likely to cause injury than full squat (terminating with hips at or below knee level). A 2013 review concluded that deep squats performed with proper technique do not lead to increased rates of degenerative knee injuries and are an effective exercise. The same review also concluded that shallower squats may lead to degeneration in the lumbar spine and knees in the long-term. Squats used in physical therapy Squats can be used for some rehabilitative activities because they hone stability without excessive compression on the tibiofemoral joint and anterior cruciate ligament. Deeper squats are associated with higher compressive loads on patellofemoral joint and it is possible that people who suffer from pain in this joint cannot squat at increased depths. For some knee rehabilitation activities, patients might feel more comfortable with knee flexion between 0 and 50 degrees because it places less force compared to deeper depths. Another study shows that decline squats at angles higher than 16 degrees may not be beneficial for the knee and fail to decrease calf tension. Other studies have indicated that the best squat to hone the quadriceps, without inflaming the patellofemoral joint, occurs between 0 and 50 degrees. A combination of single-limb squats and decline angles has been used to rehabilitate knee extensors. Conducting squats at a declined angle allows the knee to flex despite possible pain or lack of mobilization in the ankle. 
If therapists are looking to focus on the knee during squats, one study shows that doing single-limb squats at a 16-degree decline angle has the greatest activation of the knee extensors without placing excessive pressure on the ankles. This same study also found that a 24-degree decline angle can be used to strengthen ankles and knee extensors. Different Sets For Squats Forced repetitions are used when training until failure. They are completed by completing an additional 2–4 reps (assisted) at the end of the set. Partial repetitions are also used in order to maintain a constant period of tension in order to promote hypertrophy. Lastly, drop-sets are an intense workout done at the end of a set which runs until failure and continues with a lower weight without rest. World records Men Equipped squat (with multi-ply suit and wraps) – by Nathan Baptist (2021) Raw squat (with wraps) – by Vladislav Alhazov (2018) Raw squat (with sleeves) – by Ray Orlando Williams (2019) Raw squat (without sleeves or wraps) – by Paul Anderson (1965) Playboy bunny smith machine squat – by Don Reinhoudt (1979) Cement block smith machine squat – by Bill Kazmaier (1981) Double T 'cambered bar' squat (with single-ply suit) – by JF Caron (2022) Steinborn squat – by Martins Licis (2019) Squat for reps – (Raw) for 5 reps by Paul Anderson (1965) Squat for reps – (Raw) for 4 reps by Eric Lilliebridge (2014) Squat for reps – (Raw) for 5 reps (paused) by Hafþór Júlíus Björnsson (2024) Squat for reps – (Raw) for 7 reps by Jesus Olivares (2023) Squat for reps – (Raw) for 2 sets of 10 reps by Paul Anderson (1957) Squat for reps – (Raw) for 12 reps by Zahir Khudayarov (2024) Squat for reps – (with singly ply suit) for 15 reps in one minute by Žydrūnas Savickas (2014) Squat for reps – (with singly ply suit) for 23 reps in one minute by Tom Platz (1993) Squat for reps – (Raw) for 29 reps in one minute by Hafþór Júlíus Björnsson (2017) Squat for reps – (own bodyweight) for 42 reps in one minute by Erikas Dovydėnas (2022) Most squats in one minute (no added weight/ bodyweight only) – 84 reps by Tourab Nesanah (2022) Most pistol squats in one minute (no added weight/ bodyweight only) – 52 reps by William Rauhaus (2016) Most squats in one hour (no added weight/ bodyweight only) – 4,708 reps by Paddy Doyle (2007) Most squats in one day (no added weight/ bodyweight only) – 25,000 reps by Joe Reverdes (2020) Women Equipped squat (with multi-ply suit and wraps) – by Leah Reichman (2023) Equipped squat (with single-ply suit and wraps) – by Galina Karpova (2012) Raw squat (with wraps) – by April Mathis (2017) Raw squat (with sleeves) – by Sonita Muluh (2024) Squat for reps – (with singly ply suit) for 29 reps in two minutes by Maria Strik (2013) Squat for reps – (own bodyweight) for 42 reps in one minute by Karenjeet Bains (2022) Most sumo squats in one hour (no added weight/ bodyweight only) – 5,135 reps by Thienna Ho (2007)
Biology and health sciences
Physical fitness
Health
1275331
https://en.wikipedia.org/wiki/Red-giant%20branch
Red-giant branch
The red-giant branch (RGB), sometimes called the first giant branch, is the portion of the giant branch before helium ignition occurs in the course of stellar evolution. It is a stage that follows the main sequence for low- to intermediate-mass stars. Red-giant-branch stars have an inert helium core surrounded by a shell of hydrogen fusing via the CNO cycle. They are K- and M-class stars, much larger and more luminous than main-sequence stars of the same temperature. Discovery Red giants were identified early in the 20th century when the use of the Hertzsprung–Russell diagram made it clear that there were two distinct types of cool stars with very different sizes: dwarfs, now formally known as the main sequence; and giants. The term red-giant branch came into use during the 1940s and 1950s, although initially just as a general term to refer to the red-giant region of the Hertzsprung–Russell diagram. Although the basis of a thermonuclear main-sequence lifetime, followed by a thermodynamic contraction phase to a white dwarf, was understood by 1940, the internal details of the various types of giant stars were not known. In 1968, the name asymptotic giant branch (AGB) was used for a branch of stars somewhat more luminous than the bulk of red giants and more unstable, often large-amplitude variable stars such as Mira. Observations of a bifurcated giant branch had been made years earlier, but it was unclear how the different sequences were related. By 1970, the red-giant region was well understood as being made up from subgiants, the RGB itself, the horizontal branch, and the AGB, and the evolutionary state of the stars in these regions was broadly understood. The red-giant branch was described as the first giant branch in 1967, to distinguish it from the second or asymptotic giant branch, and this terminology is still frequently used today. Modern stellar physics has modelled the internal processes that produce the different phases of the post-main-sequence life of moderate-mass stars, with ever-increasing complexity and precision. The results of RGB research are themselves being used as the basis for research in other areas. Evolution When a star with a mass from about (solar mass) to ( for low-metallicity stars) exhausts its core hydrogen, it enters a phase of hydrogen shell burning during which it becomes a red giant, larger and cooler than on the main sequence. During hydrogen shell burning, the interior of the star goes through several distinct stages which are reflected in the outward appearance. The evolutionary stages vary depending primarily on the mass of the star, but also on its metallicity. Subgiant phase After a main-sequence star has exhausted its core hydrogen, it begins to fuse hydrogen in a thick shell around a core consisting largely of helium. The mass of the helium core is below the Schönberg–Chandrasekhar limit and is in thermal equilibrium, and the star is a subgiant. Any additional energy production from the shell fusion is consumed in inflating the envelope and the star cools but does not increase in luminosity. Shell hydrogen fusion continues in stars of roughly solar mass until the helium core increases in mass sufficiently that it becomes degenerate. The core then shrinks, heats up and develops a strong temperature gradient. The hydrogen shell, fusing via the temperature-sensitive CNO cycle, greatly increases its rate of energy production and the star is considered to be at the foot of the red-giant branch. 
For a star the same mass as the sun, this takes approximately 2 billion years from the time that hydrogen was exhausted in the core. Subgiants more than about reach the Schönberg–Chandrasekhar limit relatively quickly before the core becomes degenerate. The core still supports its own weight thermodynamically with the help of energy from the hydrogen shell, but is no longer in thermal equilibrium. It shrinks and heats, causing the hydrogen shell to become thinner and the stellar envelope to inflate. This combination decreases luminosity as the star cools towards the foot of the RGB. Before the core becomes degenerate, the outer hydrogen envelope becomes opaque, which causes the star to stop cooling, increases the rate of fusion in the shell, and the star has entered the RGB. In these stars, the subgiant phase occurs within a few million years, causing an apparent gap in the Hertzsprung–Russell diagram between B-type main-sequence stars and the RGB seen in young open clusters such as Praesepe. This is the Hertzsprung gap and is actually sparsely populated with subgiant stars rapidly evolving towards red giants, in contrast to the short densely populated low-mass subgiant branch seen in older clusters such as ω Centauri. Ascending the red-giant branch Stars at the foot of the red-giant branch all have a similar temperature around , corresponding to an early to mid-K spectral type. Their luminosities range from a few times the luminosity of the sun for the least massive red giants to several thousand times as luminous for stars around . As their hydrogen shells continue to produce more helium, the cores of RGB stars increase in mass and temperature. This causes the hydrogen shell to fuse more rapidly. Stars become more luminous, larger and somewhat cooler. They are described as ascending the RGB. On the ascent of the RGB, there are a number of internal events that produce observable external features. The outer convective envelope becomes deeper and deeper as the star grows and shell energy production increases. Eventually it reaches deep enough to bring fusion products to the surface from the formerly convective core, known as the first dredge-up. This changes the surface abundance of helium, carbon, nitrogen and oxygen. A noticeable clustering of stars at one point on the RGB can be detected and is known as the RGB bump. It is caused by a discontinuity in hydrogen abundance left behind by the deep convection. Shell energy production temporarily decreases at this discontinuity, effectively stalling the ascent of the RGB and causing an excess of stars at that point. Tip of the red-giant branch For stars with a degenerate helium core, there is a limit to this growth in size and luminosity, known as the tip of the red-giant branch, where the core reaches sufficient temperature to begin helium fusion. All stars that reach this point have an identical helium core mass of almost , and very similar stellar luminosity and temperature. These luminous stars have been used as standard candle distance indicators. Visually, the tip of the red-giant branch occurs at about absolute magnitude −3 and temperatures around 3,000 K at solar metallicity, closer to 4,000 K at very low metallicity. Models predict a luminosity at the tip of , depending on metallicity. In modern research, infrared magnitudes are more commonly used. Leaving the red-giant branch A degenerate core begins fusion explosively in an event known as the helium flash, but externally there is little immediate sign of it. 
The energy is consumed in lifting the degeneracy in the core. The star overall becomes less luminous and hotter and migrates to the horizontal branch. All degenerate helium cores have approximately the same mass, regardless of the total stellar mass, so the helium fusion luminosity on the horizontal branch is the same. Hydrogen shell fusion can cause the total stellar luminosity to vary, but for most stars at near solar metallicity, the temperature and luminosity are very similar at the cool end of the horizontal branch. These stars form the red clump at about 5,000 K and . Less massive hydrogen envelopes cause the stars to take up a hotter and less luminous position on the horizontal branch, and this effect occurs more readily at low metallicity so that old metal-poor clusters show the most pronounced horizontal branches. Stars initially more massive than have non-degenerate helium cores on the red-giant branch. These stars become hot enough to start triple-alpha fusion before they reach the tip of the red-giant branch and before the core becomes degenerate. They then leave the red-giant branch and perform a blue loop before returning to join the asymptotic giant branch. Stars only a little more massive than perform a barely noticeable blue loop at a few hundred before continuing on the AGB, hardly distinguishable from their red-giant branch position. More massive stars perform extended blue loops which can reach 10,000 K or more at luminosities of . These stars will cross the instability strip more than once and pulsate as Type I (Classical) Cepheid variables. Properties The table below shows the typical lifetimes on the main sequence (MS), subgiant branch (SB) and red-giant branch (RGB), for stars with different initial masses, all at solar metallicity (Z = 0.02). Also shown are the helium core mass, surface effective temperature, radius and luminosity at the start and end of the RGB for each star. The end of the red-giant branch is defined to be when core helium ignition takes place. Intermediate-mass stars only lose a small fraction of their mass as main-sequence and subgiant stars, but lose a significant amount of mass as red giants. The mass lost by a star similar to the Sun affects the temperature and luminosity of the star when it reaches the horizontal branch, so the properties of red-clump stars can be used to determine the mass difference before and after the helium flash. Mass lost from red giants also determines the mass and properties of the white dwarfs that form subsequently. Estimates of total mass loss for stars that reach the tip of the red-giant branch are around . Most of this is lost within the final million years before the helium flash. Mass lost by more massive stars that leave the red-giant branch before the helium flash is more difficult to measure directly. The current mass of Cepheid variables such as δ Cephei can be measured accurately because they are either binaries or pulsating stars. When compared with evolutionary models, such stars appear to have lost around 20% of their mass, much of it during the blue loop and especially during pulsations on the instability strip. Variability Some red giants are large amplitude variables. Many of the earliest-known variable stars are Mira variables with regular periods and amplitudes of several magnitudes, semiregular variables with less obvious periods or multiple periods and slightly lower amplitudes, and slow irregular variables with no obvious period. 
These have long been considered to be asymptotic giant branch (AGB) stars or supergiants, and the red-giant branch (RGB) stars themselves were not generally considered to be variable. A few apparent exceptions were considered to be low-luminosity AGB stars. Studies in the late 20th century began to show that all giants of class M were variable with amplitudes of 10 milli-magnitudes or more, and that late-K-class giants were also likely to be variable with smaller amplitudes. Such variable stars were amongst the more luminous red giants, close to the tip of the RGB, but it was difficult to argue that they were all actually AGB stars. The stars showed a period–amplitude relationship, with larger-amplitude variables pulsating more slowly. Microlensing surveys in the 21st century have provided extremely accurate photometry of thousands of stars over many years. This has allowed for the discovery of many new variable stars, often of very small amplitudes. Multiple period-luminosity relationships have been discovered, grouped into regions with ridges of closely spaced parallel relationships. Some of these correspond to the known Miras and semi-regulars, but an additional class of variable star has been defined: OGLE Small Amplitude Red Giants, or OSARGs. OSARGs have amplitudes of a few thousandths of a magnitude and semi-regular periods of 10 to 100 days. The OGLE survey published up to three periods for each OSARG, indicating a complex combination of pulsations. Many thousands of OSARGs were quickly detected in the Magellanic Clouds, both AGB and RGB stars. A catalog has since been published of 192,643 OSARGs in the direction of the Milky Way central bulge. Although around a quarter of Magellanic Cloud OSARGs show long secondary periods, very few of the galactic OSARGs do. The RGB OSARGs follow three closely spaced period-luminosity relations, corresponding to the first, second and third overtones of radial pulsation models for stars of certain masses and luminosities, but dipole and quadrupole non-radial pulsations are also present, leading to the semi-regular nature of the variations. The fundamental mode does not appear, and the underlying cause of the excitation is not known. Stochastic convection has been suggested as a cause, similar to solar-like oscillations. Two additional types of variation have been discovered in RGB stars: long secondary periods, which are associated with other variations but can show larger amplitudes with periods of hundreds or thousands of days; and ellipsoidal variations. The cause of the long secondary periods is unknown, but it has been proposed that they are due to interactions with low-mass companions in close orbits. The ellipsoidal variations are also thought to be created in binary systems, in this case contact binaries where distorted stars cause strictly periodic variations as they orbit.
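As a small aside on the standard-candle use of the RGB tip described under "Tip of the red-giant branch" above, the underlying arithmetic is the ordinary distance modulus. The sketch below is a minimal illustration, not a calibrated measurement: it adopts the approximate visual absolute magnitude of −3 quoted earlier, and the apparent magnitude of the detected tip is an invented example value.

```python
# Sketch of the distance-modulus arithmetic behind using the tip of the
# red-giant branch (TRGB) as a standard candle. The absolute magnitude is
# the approximate visual value quoted in the text; the apparent magnitude
# below is an invented example, not a measurement.

M_TRGB = -3.0        # approximate visual absolute magnitude of the RGB tip
m_observed = 21.5    # hypothetical apparent magnitude of the detected tip

# Distance modulus: m - M = 5 log10(d / 10 pc)
distance_modulus = m_observed - M_TRGB
distance_pc = 10 ** ((distance_modulus + 5.0) / 5.0)

print(f"distance modulus = {distance_modulus:.1f} mag")
print(f"implied distance = {distance_pc / 1e6:.2f} Mpc")
```

In practice, as the text notes, infrared magnitudes are more commonly used for this purpose.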
Physical sciences
Stellar astronomy
Astronomy
1275446
https://en.wikipedia.org/wiki/Lanternfish
Lanternfish
Lanternfish (or myctophids, from the Greek μυκτήρ myktḗr, "nose" and ophis, "serpent") are small mesopelagic fish of the large family Myctophidae. One of two families in the order Myctophiformes, the Myctophidae are represented by 246 species in 33 genera, and are found in oceans worldwide. Lanternfishes are aptly named after their conspicuous use of bioluminescence. Their sister family, the Neoscopelidae, are much fewer in number but superficially very similar; at least one neoscopelid shares the common name "lanternfish": the large-scaled lantern fish, Neoscopelus macrolepidotus. Lanternfish are among the most widely distributed, diverse and populous vertebrates, with some estimates suggesting that they may have a total global biomass of 1.8 to 16 gigatonnes, accounting for up to 65% of all deep-sea fish biomass. Commercial fisheries for them exist off South Africa, in the sub-Antarctic, and in the Gulf of Oman. Description Lanternfish typically have a slender, compressed body covered in small, silvery deciduous cycloid scales (ctenoid in four species), a large bluntly rounded head, large elliptical to round lateral eyes (dorsolateral in Protomyctophum species), and a large terminal mouth with jaws closely set with rows of small teeth. The fins are generally small, with a single high dorsal fin, a forked caudal fin, and an adipose fin. The anal fin is supported by a cartilaginous plate at its base, and originates under, or slightly behind, the rear part of the dorsal fin. The pectoral fins, usually with eight rays, may be large and well-developed to small and degenerate, or completely absent in a few species. In some species, such as those of the genus Lampanyctus, the pectorals are greatly elongated. Most lanternfish have a gas bladder, but it degenerates or fills with lipids during the maturation of a few species. The lateral line is uninterrupted. In all but one species, Taaningichthys paurolychnus, a number of photophores (light-producing organs) are present; these are paired and concentrated in ventrolateral rows on the body and head. Some may also possess specialised photophores on the caudal peduncle, in proximity to the eyes (e.g., the "headlights" of Diaphus species), and luminous patches at the base of the fins. The photophores emit a weak blue, green, or yellow light, and are known to be arranged in species-specific patterns. In some species, the pattern varies between males and females. This is true for the luminous caudal patches, with the males' being typically above the tail and the females' being below the tail. Lanternfish are generally small fish, ranging from about in length, with most being under . Shallow-living species are an iridescent blue to green or silver, while deeper-living species are dark brown to black. Ecology Lanternfish are well known for their diel vertical migrations: during daylight hours, most species remain within the gloomy bathypelagic zone, between deep, but towards sundown, the fish begin to rise into the epipelagic zone, between deep. The lanternfish are thought to do this to avoid predation, and because they are following the diel vertical migrations of zooplankton, upon which they feed. After a night spent feeding in the surface layers of the water column, the lanternfish begin to descend back into the lightless depths and are gone by daybreak. By releasing fecal pellets at depth, lanternfish make the carbon-capture process known as the biological pump more efficient. Most species remain near the coast, schooling over the continental slope. 
Different species are known to segregate themselves by depth, forming dense, discrete conspecific layers, probably to avoid competition between different species. Due to their gas bladders, these layers are visible on sonar scans and give the impression of a "false ocean bottom"; this is the so-called deep scattering layer that so perplexed early oceanographers (see below). Great variability in migration patterns occurs within the family. Some deeper-living species may not migrate at all, while others may do so only sporadically. Migration patterns may also depend on life stage, sex, latitude, and season. The arrangements of lanternfish photophores are different for each species, so their bioluminescence is thought to play a role in communication, specifically in shoaling and courtship behaviour. The concentration of the photophores on the flanks of the fish also indicates the light's use as camouflage; in a strategy termed counterillumination, the lanternfish regulate the brightness of the bluish light emitted by their photophores to match the ambient light level above, effectively masking the lanternfishes' silhouette when viewed from below. A major source of food for many marine animals, lanternfish are an important link in the food chain of many local ecosystems, being heavily preyed upon by whales and dolphins, large pelagic fish such as salmon, tuna and sharks, grenadiers and other deep-sea fish (including other lanternfish), pinnipeds, sea birds, notably penguins, and large squid such as the jumbo squid, Dosidicus gigas. Lanternfish themselves have been found to feed on bits of plastic debris accumulating in the oceans. At least one lanternfish was found with over 80 plastic chips in its gut, according to scientists monitoring ocean plastic in the Pacific Ocean's eastern garbage patch. Deep scattering layer Sonar operators, using the newly developed sonar technology during World War II, were puzzled by what appeared to be a false sea floor 300–500 metres deep by day, and less deep at night. This turned out to be due to millions of marine organisms, most particularly small mesopelagic fish, with swimbladders that reflected the sonar. These organisms migrate up into shallower water at dusk to feed on plankton. The layer is deeper when the moon is out, and can become shallower when clouds pass over the moon. Sampling via deep trawling indicates that lanternfish account for as much as 65% of all deep sea fish biomass. Indeed, lanternfish are among the most widely distributed, populous, and diverse of all vertebrates, playing an important ecological role as prey for larger organisms. The estimated global biomass of lanternfish is 550–660 million tonnes, several times the entire world fisheries catch. Lanternfish also account for much of the biomass responsible for the deep scattering layer of the world's oceans. Sonar reflects off the millions of lanternfish swim bladders, giving the appearance of a false bottom. Rise to dominance Lanternfish currently represent one of the dominant groups of mesopelagic fishes in terms of abundance, biomass, and diversity. Their otolith record dominates pelagic sediments below 200 m in dredges, especially during the entire Neogene. The diversity and rise to dominance of lanternfish can be examined by analysing these otolith records. The earliest unambiguous fossil lanternfish are known based on otoliths from the late Paleocene and early Eocene. 
During their early evolutionary history, lanternfish were likely not adapted to a high oceanic lifestyle but occurred over shelf and upper-slope regions, where they were locally abundant during the middle Eocene. A distinct upscaling in otolith size is observed in the early Oligocene, which also marks their earliest occurrence in bathyal sediments. This transition is interpreted to be related to the change from a halothermal deep-ocean circulation to a thermohaline regime and the associated cooling of the deep ocean and rearrangement of nutrient and silica supply. The size of early Oligocene lanternfish is remarkably congruent with diatom abundance, the main food resource for the zooplankton and thus for lanternfish and whales. The warmer late Oligocene to early middle Miocene period was characterised by an increase in the disparity of lanternfish but with a reduction in their otolith sizes. A second and persisting secular pulse in lanternfish diversity (particularly within the genus Diaphus) and increase in size begins with the "biogenic bloom" during the late Miocene, paralleled with diatom abundance and gigantism in baleen whales. Genera Benthosema Bolinichthys Centrobranchus Ceratoscopelus Ctenoscopelus Dasyscopelus Diaphus Diogenichthys Electrona Gonichthys Gymnoscopelus Hintonia Hygophum Idiolychnus Krefftichthys Lampadena Lampanyctodes Lampanyctus Lampichthys Lepidophanes Lobianchia Loweina Metelectrona Myctophum Nannobrachium Notolychnus Notoscopelus Parvilux Protomyctophum Scopelopsis Stenobrachius Symbolophorus Taaningichthys Tarletonbeania Triphoturus The following fossil genera are also known: Eokrefftia Eomyctophum Oligophus
Biology and health sciences
Myctophiformes
Animals
20913273
https://en.wikipedia.org/wiki/Baboon
Baboon
Baboons are primates comprising the genus Papio, one of the 23 genera of Old World monkeys, in the family Cercopithecidae. There are six species of baboon: the hamadryas baboon, the Guinea baboon, the olive baboon, the yellow baboon, the Kinda baboon and the chacma baboon. Each species is native to one of six areas of Africa and the hamadryas baboon is also native to part of the Arabian Peninsula. Baboons are among the largest non-hominoid primates and have existed for at least two million years. Baboons vary in size and weight depending on the species. The smallest, the Kinda baboon, is in length and weighs only , while the largest, the chacma baboon, is up to in length and weighs . All baboons have long, dog-like muzzles, heavy, powerful jaws with sharp canine teeth, close-set eyes, thick fur except on their muzzles, short tails, and nerveless, hairless pads of skin on their protruding buttocks called ischial callosities that provide for sitting comfort. Male hamadryas baboons have large white manes. Baboons exhibit sexual dimorphism in size, colour and/or canine teeth development. Baboons are diurnal and terrestrial, but sleep in trees, or on high cliffs or rocks at night, away from predators. They are found in open savannas and woodlands across Africa. They are omnivorous and their diet consists of a variety of plants and animals. Their principal predators are Nile crocodiles, leopards, lions and hyenas. Most baboons live in hierarchical troops containing harems. Baboons can determine from vocal exchanges what the dominance relations are between individuals. In general, each male can mate with any female; the mating order among the males depends partly on their social rank. Females typically give birth after a six-month gestation, usually to one infant. The females tend to be the primary caretaker of the young, although several females may share the duties for all of their offspring. Offspring are weaned after about a year. They reach sexual maturity around five to eight years. Males leave their birth group, usually before they reach sexual maturity, whereas most females stay in the same group for their lives. Baboons in captivity live up to 45 years, while in the wild they average between 20 and 30 years. Taxonomy Six species of Papio are recognized, although there is some disagreement about whether they are really full species or subspecies. Previously five species of baboon were recognised; the Kinda baboon has gained support for its species status after phylogenetic studies of all members of Papio. Many authors distinguish P. hamadryas as a full species, but regard all the others as subspecies of P. cynocephalus and refer to them collectively as "savanna baboons". This may not be helpful: it is based on the argument that the hamadryas baboon is behaviorally and physically distinct from other baboon species, and that this reflects a separate evolutionary history. However, recent morphological and genetic studies of Papio show the hamadryas baboon to be more closely related to the northern baboon species (the Guinea and olive baboons) than to the southern species (the yellow and chacma baboons). Fossil record In 2015 researchers found the oldest baboon fossil on record, dated at 2 million years old. Characteristics All baboons have long, dog-like muzzles, heavy, powerful jaws with sharp canine teeth, close-set eyes, thick fur except on their muzzles, short tails, and rough spots on their protruding buttocks, called ischial callosities. 
These calluses are nerveless, hairless pads of skin that provide for the sitting comfort of the baboon. All baboon species exhibit pronounced sexual dimorphism, usually in size, but also sometimes in colour. Males have much larger upper canines compared to females and use them in threat displays. Males of the hamadryas baboon species also have large white manes. Behavior and ecology Baboons are able to acquire orthographic processing skills, which form part of the ability to read. Habitat and prey Baboons are terrestrial (ground dwelling) and are found in open savannah, open woodland and hills across Africa. They are omnivorous, highly opportunistic feeders and will eat virtually anything, including grasses, roots, seeds, leaves, bark, fruits, fungus, insects, spiders, worms, fish, shellfish, rodents, birds, vervet monkeys, and small antelopes. They are foragers and are active at irregular times throughout the day and night. They often raid human dwellings, and in South Africa they break into homes and cars in search of food. Baboons will also raid farms, eating crops and preying on sheep, goats and poultry. Predators Other than humans, the principal predators of baboons are leopards, lions, and spotted and striped hyenas. They are considered a difficult prey for the leopard, though, which is mostly a threat to young baboons. Large males will often confront them by flashing their eyelids, showing their teeth by yawning, making gestures, and chasing after the intruder/predator. Although they are not a prey species, baboons have been killed by the black mamba snake. This usually occurs when a baboon accidentally rouses the snake. Social systems The collective noun for baboons is "troop". Most baboons live in hierarchical troops. Group sizes are typically around 50 animals, but can vary between 5 and 250, depending on species, location and time of year. The structure within the troop varies considerably between hamadryas baboons and the remaining species, sometimes collectively referred to as savanna baboons. The hamadryas baboons often appear in very large groups composed of many smaller harems (one male with four or so females), to which females from elsewhere in the troop are recruited while they are still too young to breed. Other baboon species have a more promiscuous structure with a strict dominance hierarchy based on the matriline. The hamadryas baboon group will typically include a younger male, but he will not attempt to mate with the females unless the older male is removed. In the harems of the hamadryas baboons, the males jealously guard their females, to the point of grabbing and biting the females when they wander too far away. Despite this, some males will raid harems for females. Such situations often cause aggressive fights between the males. Visual threats usually accompany these aggressive fights. These include a quick flashing of the eyelids accompanied by a yawn to show off the teeth. Some males succeed in taking a female from another's harem, called a "takeover". In several species, infant baboons are taken by the males as hostages, or used as shields during fights. Baboons can determine from vocal exchanges what the dominance relations are between individuals. When a confrontation occurs between different families or where a lower-ranking baboon takes the offensive, baboons show more interest in this exchange than those between members of the same family or when a higher-ranking baboon takes the offensive. 
This is because confrontations between different families or rank challenges can have a wider impact on the whole troop than an internal conflict in a family or a baboon reinforcing its dominance. Baboon social dynamics can also vary; Robert Sapolsky reported on a troop, known as the Forest Troop, during the 1980s, which experienced significantly less aggressive social dynamics after its most aggressive males died off during a tuberculosis outbreak, leaving a skewed gender ratio of majority females and a minority of low-aggression males. This relatively low-aggression culture persisted into the 1990s and extended to new males coming into the troop, though Sapolsky observed that while unique, the troop was not an "unrecognizably different utopia"; there was still a dominance hierarchy and aggressive intrasexual competition amongst males. Furthermore, no new behaviours were created amongst the baboons; rather, the difference was the frequency and context of existing baboon behaviour. Mating Baboon mating behavior varies greatly depending on the social structure of the troop. In the mixed groups of savanna baboons, each male can mate with any female. The mating order among the males depends partially on their social ranking, and fights between males are not unusual. There are, however, more subtle possibilities; in mixed groups, males sometimes try to win the friendship of females. To garner this friendship, they may help groom the female, help care for her young, or supply her with food. The probability is high that those young are their offspring. Some females clearly prefer such friendly males as mates. However, males will also take infants during fights to protect themselves from harm. A female initiates mating by presenting her swollen rump to the male's face. In a wild baboon population of the Amboseli ecosystem in Kenya, inbreeding is avoided by mate choice. Inbreeding avoidance through mate choice is thought to only evolve when related potential sexual partners frequently encounter each other and there is a risk of inbreeding depression. Birth, rearing young, and life expectancy Females typically give birth after a six-month gestation, usually to a single infant; twin baboons are rare and often do not survive. The young baboon weighs approximately 400 g and has a black epidermis when born. The females tend to be the primary caretaker of the young, although several females will share the duties for all of their offspring. After about one year, the young animals are weaned. They reach sexual maturity in five to eight years. Baboon males leave their birth group, usually before they reach sexual maturity, whereas females are philopatric and stay in the same group their whole lives. Baboons in captivity have been known to live up to 45 years, while in the wild their life expectancy is between 20 and 30 years. Relationship with humans In Egyptian mythology, Babi was the deification of the hamadryas baboon and was therefore a sacred animal. It was known as the attendant of Thoth, so is also called the sacred baboon. The 2009 documentary Baboon Woman examines the relationship between baboons and humans in South Africa. Diseases The Herpesvirus papio family of viruses and strains infects baboons. Their effects on humans are unknown. Humans infected with Mycobacterium tuberculosis can transmit the disease to the primates in close proximity. Pathogens have a high likelihood of spreading between humans and nonhuman primate species such as baboons.
Biology and health sciences
Primates
null
20913655
https://en.wikipedia.org/wiki/Flood%20management
Flood management
Flood management describes methods used to reduce or prevent the detrimental effects of flood waters. Flooding can be caused by a mix of both natural processes, such as extreme weather upstream, and human changes to waterbodies and runoff. Flood management methods can be either of the structural type (i.e. flood control) or of the non-structural type. Structural methods hold back floodwaters physically, while non-structural methods do not. Building hard infrastructure to prevent flooding, such as flood walls, is effective at managing flooding. However, it is best practice within landscape engineering to rely more on soft infrastructure and natural systems, such as marshes and flood plains, for handling the increase in water. Flood management can include flood risk management, which focuses on measures to reduce risk, vulnerability and exposure to flood disasters and providing risk analysis through, for example, flood risk assessment. Flood mitigation is a related but separate concept describing a broader set of strategies taken to reduce flood risk and potential impact while improving resilience against flood events. As climate change has led to increased flood risk and intensity, flood management is an important part of climate change adaptation and climate resilience. For example, to prevent or manage coastal flooding, coastal management practices have to handle natural processes like tides but also sea level rise due to climate change. The prevention and mitigation of flooding can be studied on three levels: on individual properties, small communities, and whole towns or cities. Terminology Flood management is a broad term that includes measures to control or mitigate flood waters, such as actions to prevent floods from occurring or to minimize their impacts when they do occur. Flood management methods can be structural or non-structural: Structural flood management (i.e. flood control) is the reduction of the effects of a flood using physical solutions, such as reservoirs, levees, dredging and diversions. Non-structural flood management includes land-use planning, advanced warning systems and flood insurance. Further examples are: "zoning ordinances and codes, flood forecasting, flood proofing, evacuation and channel clearing, flood fight activities, and upstream land treatment or management to control flood damages without physically restraining flood waters". There are several related terms that are closely connected or encompassed by flood management. Flood management can include flood risk management, which focuses on measures to reduce risk, vulnerability and exposure to flood disasters and providing risk analysis through, for example, flood risk assessment. In the context of natural hazards and disasters, risk management involves "plans, actions, strategies or policies to reduce the likelihood and/or magnitude of adverse potential consequences, based on assessed or perceived risks". Flood control, flood protection, flood defence and flood alleviation are all terms that mean "the detention and/or diversion of water during flood events for the purpose of reducing discharge or downstream inundation". Flood control is part of environmental engineering. It involves the management of water movement, such as redirecting flood run-off through the use of floodwalls and flood gates to prevent floodwaters from reaching a particular area. 
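Flood risk assessment, as used above, is ultimately quantitative: a design flood has an annual exceedance probability (a "1-in-100-year" flood has a 1% chance of being equalled or exceeded in any given year), and pairing such probabilities with estimated damages gives an expected annual damage figure that can be weighed against the cost of defences. The sketch below illustrates this arithmetic with invented damage figures; it is a rough illustration, not a procedure prescribed by any particular agency.

```python
def annual_exceedance_probability(return_period_years):
    """A T-year flood has a 1/T chance of being equalled or exceeded each year."""
    return 1.0 / return_period_years

def prob_at_least_one(return_period_years, horizon_years):
    """Chance of seeing at least one such flood over a planning horizon."""
    p = annual_exceedance_probability(return_period_years)
    return 1.0 - (1.0 - p) ** horizon_years

def expected_annual_damage(events):
    """Trapezoidal estimate of expected annual damage.

    `events` is a list of (return_period_years, damage) pairs describing a
    damage-probability curve; real values would come from flood modelling.
    """
    pts = sorted((annual_exceedance_probability(t), d) for t, d in events)
    ead = 0.0
    for (p0, d0), (p1, d1) in zip(pts, pts[1:]):
        ead += (p1 - p0) * (d0 + d1) / 2.0
    return ead

if __name__ == "__main__":
    # A 100-year flood is far from rare over, say, the life of a mortgage:
    print(f"P(>=1 hundred-year flood in 30 years) = {prob_at_least_one(100, 30):.0%}")
    # Invented damage-probability curve for illustration only (damages in $M).
    curve = [(2, 0.0), (10, 5.0), (100, 40.0), (500, 120.0)]
    print(f"Expected annual damage ~ ${expected_annual_damage(curve):.1f}M")
```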
Flood mitigation is a related but separate concept describing a broader set of strategies taken to reduce flood risk and potential impact while improving resilience against flood events. These methods include prevention, prediction (which enables flood warnings and evacuation), proofing (e.g. zoning regulations), physical control (nature-based solutions and physical structures like dams and flood walls) and insurance (e.g. flood insurance policies). Flood relief methods are used to reduce the effects of flood waters or high water levels during a flooding event. They include evacuation plans and rescue operations. Flood relief is part of the response and recovery phase in a flood management plan. Causes of flooding Precipitation, absorption, and runoff Flood levels: blunting the peak Water levels during a flood tend to rise, then fall, very abruptly. The peak flood level occurs as a very steep, short spike; a quick spurt of water. Anything that slows the surface runoff (marshes, meanders, vegetation, porous materials, turbulent flow, the river spreading over a floodplain) will slow some of the flow more than other parts, spreading the flow over time and blunting the spike. Even slightly blunting the spike significantly decreases the peak flood level. Generally, the higher the peak flood level, the more flood damage is done. Modern flood control seeks to "slow the flow", and deliberately flood some low-lying areas, ideally vegetated, to act as sponges, letting them drain again as the floodwaters go down. Purposes It is where floods interact with housing, industry and farming that flood management is indicated, and in such cases environmentally sensitive approaches may provide solutions. Natural flooding has many beneficial environmental effects. This kind of flooding is usually a seasonal occurrence where floods help replenish soil fertility, restore wetlands and promote biodiversity. Reducing the impacts of floods Flooding has many impacts. It damages property and endangers the lives of humans and other species. Rapid water runoff causes soil erosion and concomitant sediment deposition elsewhere (such as further downstream or down a coast). The spawning grounds for fish and other wildlife habitats can become polluted or completely destroyed. Some prolonged high floods can delay traffic in areas which lack elevated roadways. Floods can interfere with drainage and economical use of lands, such as interfering with farming. Structural damage can occur in bridge abutments, bank lines, sewer lines, and other structures within floodways. Waterway navigation and hydroelectric power are often impaired. Financial losses due to floods are typically millions of dollars each year, with the worst floods in recent U.S. history having cost billions of dollars. Protection of individual properties Property owners may fit their homes to stop water entering by blocking doors and air vents, waterproofing important areas and sandbagging the edges of the building. Private precautionary measures are increasingly important in flood risk management. Flood mitigation at the property level may also involve preventative measures focused on the building site, including scour protection for shoreline developments, improving rainwater infiltration through the use of permeable paving materials and grading away from structures, and inclusion of berms, wetlands or swales in the landscape. 
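The "blunting the peak" idea described above can be made concrete with a toy routing model: treat a marsh, floodplain or reservoir as a store whose outflow is proportional to how full it is. Feeding a short, sharp inflow spike through such a store lowers and delays the peak, which is exactly the effect flood managers are after. The sketch below uses invented numbers and is not a calibrated hydrological model.

```python
def route_linear_reservoir(inflow, k_hours, dt_hours=1.0):
    """Route an inflow hydrograph through a linear reservoir (storage S = k * outflow).

    A larger k means more storage per unit of outflow, i.e. a "spongier"
    landscape, and hence a flatter, later outflow peak.
    """
    storage = 0.0
    outflow = []
    for q_in in inflow:
        q_out = storage / k_hours
        storage += dt_hours * (q_in - q_out)
        outflow.append(q_out)
    return outflow

if __name__ == "__main__":
    # Invented storm hydrograph (m^3/s): a sharp 6-hour spike.
    inflow = [5, 5, 40, 120, 80, 30, 10, 5, 5, 5, 5, 5]
    for k in (2.0, 8.0):
        out = route_linear_reservoir(inflow, k_hours=k)
        print(f"k={k:>4} h: peak inflow {max(inflow)} -> peak outflow {max(out):.0f} m^3/s")
```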
Protection of communities When more homes, shops and infrastructure are threatened by the effects of flooding, then the benefits of protection are worth the additional cost. Temporary flood defenses can be constructed in certain locations which are prone to floods and provide protection from rising flood waters. Rivers running through large urban developments are often controlled and channeled. Water rising above a canal's full capacity may cause flooding to spread to other waterways and areas of the community, which causes damage. Defenses (both long-term and short-term) can be constructed to minimize damage, which involves raising the edge of the water with levees, embankments or walls. The high population and value of infrastructure at risk often justifies the high cost of mitigation in larger urban areas. Protection of wider areas such as towns or cities The most effective way of reducing the risk to people and property is through the production of flood risk maps. Most countries have produced maps which show areas prone to flooding based on flood data. In the UK, the Environment Agency has produced maps which show areas at risk. The map to the right shows a flood map for the City of York, including the floodplain for a 1 in 100-year flood (dark blue), the predicted floodplain for a 1 in 1000 year flood (light blue) and low-lying areas in need of flood defence (purple). The most sustainable way of reducing risk is to prevent further development in flood-prone areas and old waterways. It is important for at-risk communities to develop a comprehensive Floodplain Management plan. In the US, communities that participate in the National Flood Insurance Program must agree to regulate development in flood-prone areas. Strategic retreat One way of reducing the damage caused by flooding is to remove buildings from flood-prone areas, leaving them as parks or returning them to wilderness. Floodplain buyout programs have been operated in places like New Jersey (both before and after Hurricane Sandy), Charlotte, North Carolina, and Missouri. In the United States, FEMA produces flood insurance rate maps that identify areas of future risk, enabling local governments to apply zoning regulations to prevent or minimize property damage. Resilience Buildings and other urban infrastructure can be designed so that even if a flood does happen, the city can recover quickly and costs are minimized. For example, homes can be put on stilts, electrical and HVAC equipment can be put on the roof instead of in the basement, and subway entrances and tunnels can have built-in movable water barriers. New York City began a substantial effort to plan and build for flood resilience after Hurricane Sandy. Flood resilience technologies support the fast recovery of individuals and communities affected, but their use remains limited. Climate change adaptation Structural methods Some methods of flood control have been practiced since ancient times. These methods include planting vegetation to retain extra water, terracing hillsides to slow flow downhill, and the construction of floodways (man-made channels to divert floodwater). Other techniques include the construction of levees, lakes, dams, reservoirs, retention ponds to hold extra water during times of flooding. Dams Many dams and their associated reservoirs are designed completely or partially to aid in flood protection and control. 
Many large dams have flood-control reservations in which the level of a reservoir must be kept below a certain elevation before the onset of the rainy/summer melt season to allow a certain amount of space in which floodwaters can fill. Other beneficial uses of dam-created reservoirs include hydroelectric power generation, water conservation, and recreation. Reservoir and dam construction and design is based upon standards, typically set out by the government. In the United States, dam and reservoir design is regulated by the US Army Corps of Engineers (USACE). Design of a dam and reservoir follows guidelines set by the USACE and covers topics such as design flow rates in consideration of meteorological, topographic, streamflow, and soil data for the watershed above the structure. The term dry dam refers to a dam that serves purely for flood control without any conservation storage (e.g. Mount Morris Dam, Seven Oaks Dam). Diversion canals Floodplains and groundwater replenishment Excess water can be used for groundwater replenishment by diversion onto land that can absorb the water. This technique can reduce the impact of later droughts by using the ground as a natural reservoir. It is being used in California, where orchards and vineyards can be flooded without damaging crops; in other places, wilderness areas have been re-engineered to act as floodplains. River defenses In many countries, rivers are prone to floods and are often carefully managed. Defenses such as levees, bunds, reservoirs, and weirs are used to prevent rivers from bursting their banks. A weir, also known as a lowhead dam, is most often used to create millponds, but on the Humber River in Toronto, a weir was built near Raymore Drive to prevent a recurrence of the flood damage caused by Hurricane Hazel in October 1954. The Leeds flood alleviation scheme uses movable weirs which are lowered during periods of high water to reduce the chances of flooding upstream. Two such weirs, the first in the UK, were installed on the River Aire in October 2017 at Crown Point, Leeds city centre and Knostrop. The Knostrop weir was operated during the 2019 England floods. They are designed to reduce potential flood levels by up to one metre. Coastal defenses Coastal flooding is addressed with coastal defenses, such as sea walls, beach nourishment, and barrier islands. Tide gates are used in conjunction with dykes and culverts. They can be placed at the mouth of streams or small rivers, where an estuary begins or where tributary streams or drainage ditches connect to sloughs. Tide gates close during incoming tides to prevent tidal waters from moving upland, and open during outgoing tides to allow waters to drain out via the culvert and into the estuary side of the dyke. The opening and closing of the gates is driven by a difference in water level on either side of the gate. Flood barrier Self-closing flood barrier The self-closing flood barrier (SCFB) is a flood defense system designed to protect people and property from inland waterway floods caused by heavy rainfall, gales, or rapidly melting snow. The SCFB can be built to protect residential properties and whole communities, as well as industrial or other strategic areas. The barrier system is constantly ready to deploy in a flood situation; it can be installed in any length and uses the rising flood water to deploy. Temporary perimeter barriers When permanent defenses fail, emergency measures such as sandbags, inflatable impermeable sacks, or other temporary barriers are used. 
In 1988, a method of using water to control flooding was discovered. This was accomplished by containing two parallel tubes within a third outer tube. When filled, this structure formed a non-rolling wall of water that can control 80 percent of its height in external water depth, with dry ground behind it. Eight-foot-tall water-filled barriers were used to surround Fort Calhoun Nuclear Generating Station during the 2011 Missouri River Flooding. Instead of trucking in sandbag material for a flood, stacking it, then trucking it out to a hazmat disposal site, flood control can be accomplished by using the on-site water. However, these are not foolproof. A high, long water-filled rubber flood berm that surrounded portions of the plant was punctured by a skid-steer loader and it collapsed, flooding a portion of the facility. AquaFence consists of interlocking panels which are waterproof and puncture-resistant, can be bolted down to resist winds, and use the weight of floodwater to hold them in place. Materials include marine-grade Baltic laminate, stainless steel, aluminum and reinforced PVC canvas. The panels are reusable and can be stored flat between uses. The technology was designed as an alternative to building seawalls or placing sandbags in the path of floodwaters. Other solutions, such as HydroSack, are polypropylene exteriors with wood pulp within, though they are one-time use. Non-structural methods Flood risk assessment There are several methods of non-structural flood management that form part of flood risk management strategies. These can involve policies that reduce the amount of urban structures built around floodplains or flood-prone areas through land zoning regulations. This helps to reduce the amount of mitigation needed to protect humans and buildings from flooding events. Similarly, flood warning systems are important for reducing risks. Following the occurrence of flooding events, other measures such as rebuilding plans and insurance can be integrated into flood risk management plans. Flood risk management strategy diversification is needed to ensure that management strategies cover several different scenarios and ensure best practices. Flood risk management aims to reduce the human and socio-economic losses caused by flooding and is part of the larger field of risk management. Flood risk management analyzes the relationships between physical systems and socio-economic environments through flood risk assessment and tries to create understanding and action about the risks posed by flooding. The relationships cover a wide range of topics, from drivers and natural processes, to models and socio-economic consequences. This relationship is examined through a wide range of flood management methods, including but not limited to flood mapping and physical measures. Flood risk management looks at how to reduce flood risk and how to appropriately manage risks that are associated with flooding. Flood risk management includes mitigating and preparing for flooding disasters, analyzing risk, and providing a risk analysis system to mitigate the negative impacts caused by flooding. Flooding and flood risk are especially important with more extreme weather and sea level rise caused by climate change as more areas will be affected by flood risk. Flood mapping Flood mapping is a tool used by governments and policy makers to delineate the borders of potential flooding events, allowing educated decisions to prevent extreme flooding events. 
Flood maps are useful to create documentation that allows policy makers to make informed decisions about flood hazards. Flood mapping also provides both the public and private sectors with conceptual models and information about flooding hazards. Flood mapping has been criticized in many areas around the world, due to the absence of public accessibility, overly technical writing and data, and a lack of easy-to-understand information. However, revived attention towards flood mapping has renewed the interest in enhancing current flood mapping for use as a flood risk management method. Flood modelling Flood modelling is a tool used to model flood hazard and the effects on humans and the physical environment. Flood modelling takes into consideration how flood hazards, external and internal processes and factors, and the main drivers of floods interact with each other. Flood modelling combines factors such as terrain, hydrology, and urban topography to reproduce the evolution of a flood in order to identify the different levels of flooding risks associated with each element exposed. The modelling can be carried out using hydraulic models, conceptual models, or geomorphic methods. Nowadays, there is also growing attention to the production of maps obtained with remote sensing. Flood modelling is helpful for determining building development practices and hazard mitigation methods that reduce the risks associated with flooding. Stakeholder engagement Stakeholder engagement is a useful tool for flood risk management that allows enhanced public engagement for agreements to be reached on policy discussions. Different management considerations can be taken into account including emergency management and disaster risk reduction goals, interactions of land-use planning with the integration of flood risks and required policies. In flood management, stakeholder engagement is seen as an important way to achieve greater cohesion and consensus. Integrating stakeholder engagement into flood management often provides a more complex analysis of the situation; this generally makes determining collective solutions more demanding and increases the time it takes to reach them. Costs The costs of flood protection rise as more people and property are to be protected. The US FEMA, for example, estimates that for every $1.00 spent on mitigation, $4.00 is saved. Examples by country North America Canada An elaborate system of flood way defenses can be found in the Canadian province of Manitoba. The Red River flows northward from the United States, passing through the city of Winnipeg (where it meets the Assiniboine River) and into Lake Winnipeg. As is the case with all north-flowing rivers in the temperate zone of the Northern Hemisphere, snow melt in southern sections may cause river levels to rise before northern sections have had a chance to completely thaw. This can lead to devastating flooding, as occurred in Winnipeg during the spring of 1950. To protect the city from future floods, the Manitoba government undertook the construction of a massive system of diversions, dikes, and flood ways (including the Red River Floodway and the Portage Diversion). The system kept Winnipeg safe during the 1997 flood which devastated many communities upriver from Winnipeg, including Grand Forks, North Dakota and Ste. Agathe, Manitoba. United States In the United States, the U.S. Army Corps of Engineers is the lead flood control agency. 
After Hurricane Sandy, New York City's Metropolitan Transportation Authority (MTA) initiated multiple flood barrier projects to protect the transit assets in Manhattan. In one case, the MTA's New York City Transit Authority (NYCT) sealed subway entrances in lower Manhattan using a deployable fabric cover system called Flex-Gate, a system that protects the subway entrances against of water. Extreme storm flood protection levels have been revised based on new Federal Emergency Management Agency guidelines for 100-year and 500-year design flood elevations. The New Orleans Metropolitan Area, 35 percent of which sits below sea level, is protected by hundreds of miles of levees and flood gates. This system failed catastrophically, with numerous breaks, during Hurricane Katrina (2005) in the city proper and in eastern sections of the Metro Area, resulting in the inundation of approximately 50 percent of the metropolitan area, ranging from a few inches to twenty feet in coastal communities. The Morganza Spillway provides a method of diverting water from the Mississippi River when a river flood threatens New Orleans, Baton Rouge and other major cities on the lower Mississippi. It is the largest of a system of spillways and floodways along the Mississippi. Completed in 1954, the spillway has been opened twice, in 1973 and in 2011. In an act of successful flood prevention, the federal government offered to buy out flood-prone properties in the United States in order to prevent repeated disasters after the 1993 flood across the Midwest. Several communities accepted and the government, in partnership with the state, bought 25,000 properties which they converted into wetlands. These wetlands act as a sponge in storms and in 1995, when the floods returned, the government did not have to expend resources in those areas. Asia In Kyoto, Japan, the Hata clan successfully controlled floods on the Katsura River in around 500 A.D. and also constructed a sluice on the Kazuno River. In China, flood diversion areas are rural areas that are deliberately flooded in emergencies in order to protect cities. The consequences of deforestation and changing land use on the risk and severity of flooding are subjects of discussion. In assessing the impacts of Himalayan deforestation on the Ganges-Brahmaputra Lowlands, it was found that forests would not have prevented or significantly reduced flooding in the case of an extreme weather event. However, more general or overview studies agree on the negative impacts that deforestation has on flood safety, and the positive effects of wise land use and reforestation. Many have proposed that loss of vegetation (deforestation) will lead to an increased risk of flooding. With natural forest cover the flood duration should decrease. Reducing the rate of deforestation should reduce the incidence and severity of floods. Africa In Egypt, both the Aswan Low Dam (1902) and the Aswan High Dam (1976) have controlled various amounts of flooding along the Nile River. Europe France Following the misery and destruction caused by the 1910 Great Flood of Paris, the French government built a series of reservoirs called (or Great Lakes) which helps remove pressure from the Seine during floods, especially the regular winter flooding. United Kingdom London is protected from flooding by the Thames Barrier, a huge mechanical barrier across the River Thames, which is raised when the water level reaches a certain point. 
This project has been operational since 1982 and was designed to protect against a surge of water such as the North Sea flood of 1953. In 2023 it was found that over 4,000 flood defence schemes in England were ‘almost useless’ with many of them in areas hit by Storm Babet. Russia The Saint Petersburg Dam was completed in 2008 to protect Saint Petersburg from storm surges. It also has a main traffic function, as it completes a ring road around Saint Petersburg. Eleven dams extend for and stand above water level. The Netherlands The Netherlands has one of the best flood control systems in the world, notably through its construction of dykes. The country faces high flooding risk due to the country's low-lying landscapes. The largest and most elaborate flood defenses are referred to as the Delta Works with the Oosterscheldekering as its crowning achievement. These works in the southwestern part of the country were built in response to the North Sea flood of 1953. The Dutch had already built one of the world's largest dams in the north of the country. The Afsluitdijk closing occurred in 1932. New ways to deal with water are constantly being developed and tested, such as the underground storage of water, storing water in reservoirs in large parking garages or on playgrounds. Rotterdam started a project to construct a floating housing development of to deal with rising sea levels. Several approaches, from high-tech sensors detecting imminent levee failure to movable semi-circular structures closing an entire river, are being developed or used around the world. Regular maintenance of hydraulic structures, however, is another crucial part of flood control. Oceania Flooding is the greatest natural hazard in New Zealand (Aotearoa), and its control is primarily managed and funded by local councils. Throughout the country there is a network of more than 5284 km of levees, while gravel extraction to lower river water levels is also a popular flood control technique. The management of flooding in the country is shifting towards nature based solutions, such as the widening of the Hutt River channel in Wellington.
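Channel works such as the Hutt River widening mentioned above are usually justified with open-channel hydraulics: for a given water depth, a wider channel carries a larger discharge, so the same flood flow passes at a lower level. A standard way to estimate this is Manning's formula, Q = (1/n) A R^(2/3) S^(1/2). The sketch below applies it to an idealised rectangular channel with invented dimensions and an assumed roughness coefficient; real schemes use surveyed cross-sections and calibrated hydraulic models.

```python
def manning_discharge(width_m, depth_m, slope, n):
    """Discharge (m^3/s) of a rectangular open channel via Manning's formula (SI units).

    width_m, depth_m: channel width and flow depth
    slope: longitudinal bed slope (dimensionless, e.g. 0.001)
    n: Manning roughness coefficient (an assumed value; ~0.03 is typical for a natural channel)
    """
    area = width_m * depth_m                     # flow cross-section A
    wetted_perimeter = width_m + 2.0 * depth_m   # P for a rectangular section
    hydraulic_radius = area / wetted_perimeter   # R = A / P
    return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

if __name__ == "__main__":
    # Invented example: widening a channel from 30 m to 45 m at a 2 m flow depth.
    before = manning_discharge(30.0, 2.0, slope=0.001, n=0.03)
    after = manning_discharge(45.0, 2.0, slope=0.001, n=0.03)
    print(f"capacity before widening ~ {before:.0f} m^3/s, after ~ {after:.0f} m^3/s")
```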
Technology
Hydraulic infrastructure
null
20913864
https://en.wikipedia.org/wiki/Female
Female
An organism's sex is female (symbol: ♀) if it produces the ovum (egg cell), the type of gamete (sex cell) that fuses with the male gamete (sperm cell) during sexual reproduction. A female has larger gametes than a male. Females and males are results of the anisogamous reproduction system, wherein gametes are of different sizes (unlike isogamy where they are the same size). The exact mechanism of female gamete evolution remains unknown. In species that have males and females, sex-determination may be based on either sex chromosomes, or environmental conditions. Most female mammals, including female humans, have two X chromosomes. Characteristics of organisms with a female sex vary between different species, having different female reproductive systems, with some species showing characteristics secondary to the reproductive system, as with mammary glands in mammals. In humans, the word female can also be used to refer to gender in the social sense of gender role or gender identity. Etymology and usage The word female comes from the Latin , the diminutive form of , meaning "woman", by way of the Old French femelle. It is not etymologically related to the word male, but in the late 14th century the English spelling was altered to parallel that of male. It has been used as both noun and adjective since the 14th century. Originally, from its first appearance in the 1300s, female exclusively referred to humans and always indicated that the speaker spoke of a woman or a girl. A century later, the meaning was expanded to include non-human female organisms. For several centuries, using the word female as a noun was considered more respectful than calling her a woman or a lady and was preferred for that reason; however, by 1895, the linguistic fashion had changed, and female was often considered disparaging, usually on the grounds that it grouped humans with other animals. In the 21st century, the noun female is primarily used to describe non-human animals, to refer to biologically female humans in an impersonal technical context (e.g., "Females were more likely than males to develop an autoimmune disease"), or to impartially include a range of people without reference to age (e.g., girls) or social status (e.g., lady). As an adjective, female is still used in some contexts, particularly when the sex of the person is relevant, such as female athletes or to distinguish a male nurse from a female one. Biological sex is conceptually distinct from gender, although they are often used interchangeably. The adjective female can describe a person's sex or gender identity. The word can also refer to the shape of connectors and fasteners, such as screws, electrical pins, and technical equipment. Under this convention, sockets and receptacles are called female, and the corresponding plugs male. Defining characteristics Females produce ova, the larger gametes in a heterogamous reproduction system, while the smaller and usually motile gametes, the spermatozoa, are produced by males. Generally, a female cannot reproduce sexually without access to the gametes of a male, and vice versa, but in some species females can reproduce by themselves asexually, for example via parthenogenesis. 
Patterns of sexual reproduction include: Isogamous species with two or more mating types with gametes of identical form and behavior (but different at the molecular level), Anisogamous species with gametes of male and female types, Oogamous species, which include humans, in which the female gamete is much larger than the male and has no ability to move. Oogamy is a form of anisogamy. There is an argument that this pattern was driven by the physical constraints on the mechanisms by which two gametes get together as required for sexual reproduction. Other than the defining difference in the type of gamete produced, differences between males and females in one lineage cannot always be predicted by differences in another. The concept is not limited to animals; egg cells are produced by chytrids, diatoms, water moulds and land plants, among others. In land plants, female and male designate not only the egg- and sperm-producing organisms and structures, but also the structures of the sporophytes that give rise to male and female plants. Females across species Species that are divided into females and males are classified as gonochoric in animals, as dioecious in seed plants and as dioicous in cryptogams. In some species, female and hermaphrodite individuals may coexist, a sexual system termed gynodioecy. In a few species, female individuals coexist with males and hermaphrodites; this sexual system is called trioecy. In Thor manningi (a species of shrimp), females coexist with males and protandrous hermaphrodites. Mammalian female A distinguishing characteristic of the class Mammalia is the presence of mammary glands. Mammary glands are modified sweat glands that produce milk, which is used to feed the young for some time after birth. Only mammals produce milk. Mammary glands are obvious in humans, because the female human body stores large amounts of fatty tissue near the nipples, resulting in prominent breasts. Mammary glands are present in all mammals, although they are normally redundant in males of the species. Most mammalian females have two copies of the X chromosome, while males have only one X and one smaller Y chromosome; some mammals, such as the platypus, have different combinations. One of the female's X chromosomes is randomly inactivated in each cell of placental mammals while the paternally derived X is inactivated in marsupials. In birds and some reptiles, by contrast, it is the female which is heterozygous and carries a Z and a W chromosome while the male carries two Z chromosomes. In mammals, females can have XXX or X. Mammalian females bear live young, with the exception of monotreme females, which lay eggs. Some non-mammalian species, such as guppies, have analogous reproductive structures; and some other non-mammals, such as some sharks, also bear live young. Following experiments by French endocrinologist Alfred Jost in the 1940s, it is widely believed that the female is the default sex in mammalian sexual determination. However, this idea was called into question by a 2017 study. Sex determination The sex of a particular organism may be determined by genetic or environmental factors, or may naturally change during the course of an organism's life. Genetic determination The sex of most mammals, including humans, is genetically determined by the XY sex-determination system where females have XX (as opposed to XY in males) sex chromosomes. It is also possible in a variety of species, including humans, to have other karyotypes. 
During reproduction, the male contributes either an X sperm or a Y sperm, while the female always contributes an X egg. A Y sperm and an X egg produce a male, while an X sperm and an X egg produce a female. The ZW sex-determination system, where females have ZW (as opposed to ZZ in males) sex chromosomes, is found in birds, reptiles and some insects and other organisms. Environmental determination The young of some species develop into one sex or the other depending on local environmental conditions, e.g. the sex of crocodilians is influenced by the temperature of their eggs. Other species (such as the goby) can transform, as adults, from one sex to the other in response to local reproductive conditions (such as a brief shortage of males). In many arthropods, sex is determined by infection with parasitic, endosymbiotic bacteria of the genus Wolbachia. The bacterium can only be transmitted via infected ova, and the presence of the obligate endoparasite may be required for female sexual viability. Evolution The question of how females evolved is mainly a question of why males evolved. The first organisms reproduced asexually, usually via binary fission, wherein a cell splits itself in half. From a strict numbers perspective, a species that is half males/half females can produce half the offspring an asexual population can, because only the females are having offspring. Being male can also carry significant costs, such as flashy sexual displays in animals (big antlers or colorful feathers), or the need to produce an outsized amount of pollen as a plant in order to get a chance to fertilize a female. Yet despite the costs of being male, there must be some advantage to the process. The advantages are explained by the evolution of anisogamy, which led to the evolution of male and female function. Before the evolution of anisogamy, mating types in a species were isogamous: the same size and both could move, catalogued only as "+" or "-" types. In anisogamy, the mating cells are called gametes. The female gamete is larger than the male gamete, and usually immotile. Anisogamy remains poorly understood, as there is no fossil record of its emergence. Numerous theories exist as to why anisogamy emerged. Many share a common thread, in that larger female gametes are more likely to survive, and that smaller male gametes are more likely to find other gametes because they can travel faster. Current models often fail to account for why isogamy remains in a few species. Anisogamy appears to have evolved multiple times from isogamy; for example female Volvocales (a type of green algae) evolved from the plus mating type. Although sexual evolution emerged at least 1.2 billion years ago, the lack of anisogamous fossil records makes it hard to pinpoint when females evolved. Female sex organs (genitalia, in animals) have an extreme range of variation among species and even within species. The evolution of female genitalia remains poorly understood compared to male genitalia, reflecting a now-outdated belief that female genitalia are less varied than male genitalia, and thus less useful to study. The difficulty of reaching female genitalia has also complicated their study. New 3D technology has made female genital study simpler. Genitalia evolve very quickly. There are three main hypotheses as to what impacts female genital evolution: lock-and-key (genitals must fit together), cryptic female choice (females affect whether males can fertilize them), and sexual conflict (a sort of sexual arms race). 
There is also a hypothesis that female genital evolution is the result of pleiotropy, i.e. unrelated genes that are affected by environmental conditions like low food also affect genitals. This hypothesis is unlikely to apply to a significant number of species, but natural selection in general has some role in female genital evolution. Symbol The symbol ♀ (Unicode: U+2640 Alt codes: Alt+12), a circle with a small cross underneath, is commonly used to represent females. Joseph Justus Scaliger once speculated that the symbol was associated with Venus, goddess of beauty, because it resembles a bronze mirror with a handle, but modern scholars consider that fanciful, and the most established view is that the female and male symbols derive from contractions in Greek script of the Greek names of the planets Thouros (Mars) and Phosphoros (Venus).
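As a small practical aside, the symbols discussed above are ordinary Unicode characters, so they can be produced directly from their code points. The snippet below is a minimal illustration; the article only gives U+2640 for the female sign, and the male code point shown here is included for comparison rather than quoted from the text.

```python
# U+2640 is the female sign given in the article; U+2642 (the male sign) is
# added here for comparison and is not taken from the article itself.
FEMALE_SIGN = "\u2640"
MALE_SIGN = "\u2642"

for name, ch in (("female", FEMALE_SIGN), ("male", MALE_SIGN)):
    print(f"{name}: {ch}  (U+{ord(ch):04X})")
```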
Biology and health sciences
Biological reproduction
null
20917738
https://en.wikipedia.org/wiki/Pecten%20maximus
Pecten maximus
Pecten maximus, common names the great scallop, king scallop, St James shell or escallop, is a northeast Atlantic species of scallop, an edible saltwater clam, a marine bivalve mollusc in the family Pectinidae. This is the type species of the genus. This species may be conspecific with Pecten jacobaeus, the pilgrim's scallop, which has a much more restricted distribution. Description The shell of Pecten maximus is quite robust and is characterised by having "ears" of equal size on either side of the apex. The right, or lower, valve is convex and slightly overlaps the flat left, or upper, valve. Larger specimens have a nearly circular outline and the largest may measure 21 cm in length. The "ears" are prominent and are a minimum of half the width of the shell, with the byssal notch situated in the right anterior ear being slight and not serrated. The sculpture of the valves is distinctive and consists of 12 to 17 wide radiating ribs and numerous concentric lines, which clearly show the scallop's growth history, while the "ears" show a few thin ribs, which radiate from the beaks. The radiating ribs reach the margins of the valves and this creates a crenulated form. The left valve is normally reddish-brown whilst the right valve varies from white through cream to shades of pale brown contrasting with pink, red or pale yellow tints; either valve may show zigzag patterns and may also show bands and spots of red, pink or bright yellow. The colour of the body of Pecten maximus is pink or red with the mantle marbled brown and white. When young they are attached to the substrate by a byssus but mature animals are capable of swimming by the opening and rapid closing of the valves. The adductor muscle which is used to close and open the valves is very large and powerful. The foot is a finger-like organ, which spins the byssal threads, which pass through the byssal notch on the ears. The margin of the mantle has two layers: the inner layer is finely fringed whilst the outer is lined with long tentacles, with two series totalling 30–36 dark blue or green simple eyes or ocelli in two rows at their base. Distribution Pecten maximus occurs in the eastern Atlantic along the European coast from northern Norway south to the Iberian Peninsula; it has also been reported off West Africa and off the Macaronesian Islands. In Great Britain and Ireland it is distributed all round the coast but it is uncommon and localised on the eastern North Sea coast. It prefers offshore waters down to depth. Biology Pecten maximus frequently creates a slight hollow in the substrate to lie in by opening and closing its shells to eject water from the mantle cavity, which raises the shell at an angle to the substrate so that subsequent jets of water are directed into the sediment and create a recess. Once settled, sand, mud, gravel or living organisms coat the upper valve, and the margin of the shell, with its tentacles and eyes, is all that is visible. They are filter feeders, which extract particles from the surrounding water via a feeding current, which is drawn by cilia across the gills, where the food particles are trapped then taken to the mouth in a stream of mucus. Pecten maximus swims but this is generally limited to escape reactions.
The main predators which cause this reaction when detected are the mollusc-eating starfish Asterias rubens and Astropecten irregularis, although starfish which do not feed on molluscs can cause limited jumping or valve-closing reactions. The swimming action is performed by rapidly clapping the valves and expelling jets of water from each side of the hinge so that it moves with the curved edge of the shell at the front. The scallop jumps forwards by gradually relaxing the adductor muscle and then rapidly opening and closing the valves. Pecten maximus tends to be more numerous in areas where it is not fully exposed to strong currents. Scallops which live in sheltered habitats grow faster than scallops in areas exposed to wave action, possibly due to the filter feeding apparatus being unable to function because of high concentrations of particulate matter in the water in areas subject to high levels of wave exposure. Another factor that may be significant is that the processes of larval settlement and byssal attachment are rather delicate and would be disturbed in strong currents. Abundance and growth rates are negatively correlated with the amount of mud in the substrate. Scallops use larval motility to distribute themselves, as the adult scallops have limited mobility. The distribution of the larvae is affected by factors such as local hydrographic regimes and their survival, and this results in the scallops having an aggregated distribution within their geographic range. This means that the major fishing grounds are normally widely separated, and each fishing ground's environmental conditions mean there are marked differences in the structures of the populations, although the genetics of scallops are rather uniform across the species' range. The reproductive cycle of Pecten maximus is extremely variable and the spawning may be influenced by both internal and external factors such as age and temperature respectively, but is also influenced by genetic adaptation. Generally, mature scallops spawn over the summer months starting in April or May and lasting to September. They are hermaphroditic and have distinct tongue-shaped gonads which are red or orange in colour for the female gonad and white for the male. It is estimated that a three-year-old scallop releases 15–21 million oocytes per emission. There appear to be two spawnings in many parts of the range: normally a partial one in the spring and a full one in late August; however, younger scallops have a single spawning event in the late summer. In some areas this pattern is reversed and the major spawning is in the spring. After spawning the animals undergo a period of recovery of the gonad before they spawn again. Fertilization of the gametes is external and either sperm or oocytes can be released into the water column first. Since the larval stage of Pecten maximus is relatively long, up to a month, the potential for dispersal is quite high; even smaller adults can use the byssus to drift. However, in at least some populations, genetic studies show that there is little contribution from more distant populations and that these populations probably sustain themselves. In waters around the United Kingdom, Pecten maximus becomes sexually mature at around 2–3 years old and when it reaches 80 to 90 mm in shell length. Where they are not exploited, they may live for more than 20 years and reach shell lengths of more than 200 mm.
Scallops in shallow water grow faster than those in deeper water; the growth halts in winter and starts again in spring, producing concentric growth rings which are used to age the scallops. Genomics A draft genome is presented by Kenny et al. 2020. Their assembly is 918 Mb, and they estimate a genome size of 1,150 Mb, with 1.7% heterozygosity and 27.0% repeats, in total coding 67,741 genes. Recent improvements in read length helped Kenny to resolve questions of copy number variation in P. maximus which were previously indecipherable. Predators and diseases As well as the starfish species Asterias rubens and Astropecten irregularis, major predators of Pecten maximus are crabs such as Cancer pagurus, Carcinus maenas, Liocarcinus depurator and Necora puber, which will prey on the scallops as they grow. The anemone Anthopleura ballii was found preying on young specimens of P. maximus in south-western Ireland. The larvae of Pecten maximus are attacked by the bacterium Vibrio pectenicida, which was described in 1998 as a new species after incidents of mortality among cultured scallops in France in the early 1990s. Other strains of pathogenic bacteria were detected in Norway following mass mortality of larvae in culture. Fisheries and aquaculture In 1999 the total catch reported by the United Nations Food and Agriculture Organisation was 35,411 tonnes, with the two biggest catches being reported from the United Kingdom and France, which landed 19,108 tonnes and 12,745 tonnes respectively. It is believed that some natural stocks are showing indications of over-exploitation, which has resulted in stricter enforcement of fisheries legislation and in the development of stock enhancement practices. Great scallops are fished for using Newhaven scallop dredges, and less than 5% are gathered by hand by divers. Together, the scallop fisheries for P. maximus and for the queen scallop Aequipecten opercularis are one of the top five fisheries by value in United Kingdom waters. However, the use of towed gear to harvest scallops causes damage to the wider ecosystem. Pecten maximus can be cultivated in aquaculture and this is reasonably advanced in France and Norway. Spain, France, Ireland, the United Kingdom and Norway have been involved in the aquaculture of scallops; production peaked in 1998 when 512 tonnes were landed but production later decreased, with only 213 tonnes landed in 2004, having an estimated value of €852,000, equivalent to €4 per kilogramme. Pecten maximus has been found to contain domoic acid, a toxin that can cause Amnesic Shellfish Poisoning. The risk associated with scallop consumption is regarded as a significant threat to both public health and the shellfish industry. Cultural significance The oil company Shell plc derives its highly recognizable logo from this species. Pilgrims travelling to the town of Santiago de Compostela in Galicia took scallop shells with them, in honor of Saint James. This gave rise to the alternative name St James shell. Michel Callon, a sociologist, used a case study of scallop fishing in St Brieuc Bay in France to illustrate how the sociology of translation can be applied to understand the dynamics of science and technology. This study then became a seminal work of actor-network theory (ANT).
Biology and health sciences
Bivalvia
Animals
271708
https://en.wikipedia.org/wiki/Near%20and%20far%20field
Near and far field
The near field and far field are regions of the electromagnetic (EM) field around an object, such as a transmitting antenna, or the result of radiation scattering off an object. Non-radiative near-field behaviors dominate close to the antenna or scatterer, while electromagnetic radiation far-field behaviors predominate at greater distances. Far-field E (electric) and B (magnetic) radiation field strengths decrease as the distance from the source increases, resulting in an inverse-square law for the power intensity of electromagnetic radiation in the transmitted signal. By contrast, the near-field E and B strengths decrease more rapidly with distance: the radiative field decreases with the inverse square of the distance and the reactive field with an inverse-cube law, so that the power in those parts of the field falls off with an inverse fourth power and an inverse sixth power, respectively. The rapid drop in power contained in the near field ensures that effects due to the near field essentially vanish a few wavelengths away from the radiating part of the antenna, and conversely ensures that at distances a small fraction of a wavelength from the antenna, the near-field effects overwhelm the radiating far field. Summary of regions and their interactions In a normally operating antenna, positive and negative charges have no way of leaving the metal surface, and are separated from each other by the excitation "signal" voltage (a transmitter or other EM exciting potential). This generates an oscillating (or reversing) electrical dipole, which affects both the near field and the far field. The boundary between the near-field and far-field regions is only vaguely defined, and it depends on the dominant wavelength (λ) emitted by the source and the size of the radiating element. Near field The near field refers to places near the antenna conductors, or inside any polarizable media surrounding it, where the generation and emission of electromagnetic waves can be interfered with while the field lines remain electrically attached to the antenna; hence absorption of radiation in the near field by adjacent conducting objects detectably affects the loading on the signal generator (the transmitter). The electric and magnetic fields can exist independently of each other in the near field, and one type of field can be disproportionately larger than the other in different subregions. An easy-to-observe example of a near-field effect is the change of noise levels picked up by a set of rabbit-ear TV antennas when a human body part is moved in close to the "ears". A similar effect is the change in sound quality of an FM radio tuned to a distant station when a person walks about within an arm's length of its antenna. The near field is governed by multipole-type fields, which can be considered as collections of dipoles with a fixed phase relationship. The general purpose of conventional antennas is to communicate wirelessly over long distances, well into their far fields, and for calculations of radiation and reception for many simple antennas, most of the complicated effects in the near field can be conveniently ignored. Reactive near field The interaction with the medium (e.g. body capacitance) can cause energy to deflect back to the source feeding the antenna, as occurs in the reactive near field. This zone is roughly within about one-sixth of a wavelength (λ/2π) of the nearest antenna surface.
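As a rough illustration of how these region boundaries scale with wavelength, the following Python sketch computes the free-space wavelength and the λ/2π reactive near-field radius mentioned above for an assumed operating frequency; the 2.45 GHz value is only an example chosen for illustration, not a figure from the text.

```python
# Illustrative sketch (assumed frequency): how near-field region size scales with wavelength.
import math

c = 299_792_458.0          # speed of light in vacuum, m/s

def wavelength(frequency_hz: float) -> float:
    """Free-space wavelength for a given frequency."""
    return c / frequency_hz

def reactive_near_field_radius(frequency_hz: float) -> float:
    """Commonly quoted outer boundary of the reactive near field, ~ lambda / (2*pi)."""
    return wavelength(frequency_hz) / (2 * math.pi)

f = 2.45e9                                  # assumed example frequency (2.45 GHz)
lam = wavelength(f)
print(f"wavelength          : {lam * 100:.1f} cm")
print(f"reactive near field : {reactive_near_field_radius(f) * 100:.2f} cm (~ lambda/2pi)")
```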
The near field has been of increasing interest, particularly in the development of capacitive sensing technologies such as those used in the touchscreens of smartphones and tablet computers. Although the far field is the usual region of antenna function, certain devices that are called antennas but are specialized for near-field communication do exist. Magnetic induction, as seen in a transformer, can be seen as a very simple example of this type of near-field electromagnetic interaction. Examples include send/receive coils for RFID and emission coils for wireless charging and inductive heating; however, their technical classification as "antennas" is contentious. Radiative near field The interaction with the medium can fail to return energy back to the source, but instead cause a distortion in the electromagnetic wave that deviates significantly from that found in free space; this indicates the radiative near-field region, which is somewhat further away. Passive reflecting elements can be placed in this zone for the purpose of beam forming, as is the case with the Yagi–Uda antenna. Alternatively, multiple active elements can also be combined to form an antenna array, with the lobe shape becoming a factor of element distances and excitation phasing. Transition zone Another intermediate region, called the transition zone, is defined on a somewhat different basis, namely antenna geometry and excitation wavelength. It is approximately one wavelength from the antenna, and is where the electric and magnetic parts of the radiated waves first balance out: the electric field of a linear antenna gains its corresponding magnetic field, and the magnetic field of a loop antenna gains its electric field. It can either be considered the furthest part of the near field, or the nearest part of the far field. It is from beyond this point that the electromagnetic wave becomes self-propagating. The electric and magnetic field portions of the wave are proportional to each other at a ratio defined by the characteristic impedance of the medium through which the wave is propagating. Far field In contrast, the far field is the region in which the field has settled into "normal" electromagnetic radiation. In this region, it is dominated by transverse electric or magnetic fields with electric dipole characteristics. In the far-field region of an antenna, radiated power decreases as the square of distance, and absorption of the radiation does not feed back to the transmitter. In the far-field region, each of the electric and magnetic parts of the EM field is "produced by" (or associated with) a change in the other part, and the ratio of electric and magnetic field intensities is simply the wave impedance in the medium. Also known as the radiation zone, the far field carries a relatively uniform wave pattern. The radiation zone is important because far fields in general fall off in amplitude as 1/r. This means that the total energy per unit area at a distance r is proportional to 1/r². The area of the sphere is proportional to r², so the total energy passing through the sphere is constant. This means that the far-field energy actually escapes to infinite distance (it radiates). Definitions The separation of the electric and magnetic fields into components is mathematical, rather than clearly physical, and is based on the relative rates at which the amplitude of different terms of the electric and magnetic field equations diminish as distance from the radiating element increases.
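The conservation argument given in the far-field discussion above, that a 1/r² intensity multiplied by a sphere area growing as r² gives a constant radiated power, can be checked numerically. The sketch below assumes an isotropic radiator and an arbitrary 1 W source power, purely for illustration.

```python
# Numeric check of the inverse-square argument: intensity ~ 1/r^2, sphere area ~ r^2,
# so the total power crossing any sphere around the source is the same.
import math

P_source = 1.0  # assumed radiated power in watts (arbitrary illustrative value)

def intensity(r: float) -> float:
    """Far-field power density (W/m^2) at distance r for an isotropic radiator."""
    return P_source / (4 * math.pi * r**2)

for r in (1.0, 10.0, 100.0):
    sphere_area = 4 * math.pi * r**2
    print(f"r = {r:6.1f} m   intensity = {intensity(r):.3e} W/m^2   "
          f"power through sphere = {intensity(r) * sphere_area:.3f} W")
```

Each radius gives the same total power, which is the sense in which the far-field energy "escapes to infinite distance".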
The amplitudes of the far-field components fall off as 1/r, the radiative near-field amplitudes fall off as 1/r², and the reactive near-field amplitudes fall off as 1/r³. Definitions of the regions attempt to characterize locations where the activity of the associated field components is the strongest. Mathematically, the distinction between field components is very clear, but the demarcation of the spatial field regions is subjective. All of the field components overlap everywhere, so for example, there are always substantial far-field and radiative near-field components in the closest-in reactive near-field region. The regions defined below categorize field behaviors that are variable, even within the region of interest. Thus, the boundaries for these regions are approximate rules of thumb, as there are no precise cutoffs between them: all behavioral changes with distance are smooth changes. Even when precise boundaries can be defined in some cases, based primarily on antenna type and antenna size, experts may differ in their use of nomenclature to describe the regions. Because of these nuances, special care must be taken when interpreting technical literature that discusses far-field and near-field regions. The term near-field region (also known as the near field or near zone) has the following meanings with respect to different telecommunications technologies: The close-in region of an antenna where the angular field distribution is dependent upon the distance from the antenna. In the study of diffraction and antenna design, the near field is that part of the radiated field that is at distances shorter than the Fraunhofer distance, given by d_F = 2D²/λ measured from the source (the diffracting edge or antenna of length or diameter D). In fiber-optic communication, the region near a source or aperture that is closer than the Rayleigh length. (Presuming a Gaussian beam, which is appropriate for fiber optics.) Regions according to electromagnetic length The most convenient practice is to define the size of the regions or zones in terms of fixed numbers (fractions) of wavelengths distant from the center of the radiating part of the antenna, with the clear understanding that the values chosen are only approximate and will be somewhat inappropriate for different antennas in different surroundings. The choice of the cut-off numbers is based on the relative strengths of the field component amplitudes typically seen in ordinary practice. Electromagnetically short antennas For antennas shorter than half of the wavelength of the radiation they emit (i.e., electromagnetically "short" antennas), the far and near regional boundaries are measured in terms of a simple ratio of the distance r from the radiating source to the wavelength λ of the radiation. For such an antenna, the near field is the region within a radius r ≪ λ, while the far field is the region for which r ≫ 2λ. The transition zone is the region between λ and 2λ. The length of the antenna, D, is not important, and the approximation is the same for all shorter antennas (sometimes idealized as so-called point antennas). In all such antennas, the short length means that charges and currents in each sub-section of the antenna are the same at any given time, since the antenna is too short for the RF transmitter voltage to reverse before its effects on charges and currents are felt over the entire antenna length.
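To see how the 1/r, 1/r², and 1/r³ amplitude terms listed at the start of this section trade dominance with distance, the sketch below compares their relative magnitudes, with all three terms normalised to be equal at r = λ/2π; the normalisation point and unit amplitudes are assumptions made purely for illustration, not values from the text.

```python
# Illustrative comparison of far-field (1/r), radiative near-field (1/r^2) and
# reactive near-field (1/r^3) amplitude terms, normalised to cross at r = lambda/(2*pi).
import math

lam = 1.0                       # work in units of one wavelength
r0 = lam / (2 * math.pi)        # assumed normalisation radius

def terms(r: float):
    far = r0 / r                # ~ 1/r
    radiative = (r0 / r) ** 2   # ~ 1/r^2
    reactive = (r0 / r) ** 3    # ~ 1/r^3
    return far, radiative, reactive

for r in (0.05 * lam, r0, 0.5 * lam, 2 * lam, 10 * lam):
    far, rad, rea = terms(r)
    print(f"r = {r/lam:5.2f} wavelengths   1/r: {far:8.3f}   1/r^2: {rad:8.3f}   1/r^3: {rea:8.3f}")
```

Close to the source the 1/r³ term dwarfs the others, while a few wavelengths out only the 1/r term remains significant, which is the behaviour the region definitions try to capture.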
Electromagnetically long antennas For antennas physically larger than a half-wavelength of the radiation they emit, the near and far fields are defined in terms of the Fraunhofer distance. Named after Joseph von Fraunhofer, the following formula gives the Fraunhofer distance: d_F = 2D²/λ, where D is the largest dimension of the radiator (or the diameter of the antenna) and λ is the wavelength of the radio wave. Either of the following two relations is equivalent, emphasizing the size of the region in terms of wavelengths λ or diameters D: d_F = 2D(D/λ) = 2λ(D/λ)². This distance provides the limit between the near and far field. The parameter D corresponds to the physical length of an antenna, or the diameter of a reflector ("dish") antenna. Having an antenna electromagnetically longer than one-half the dominant wavelength emitted considerably extends the near-field effects, especially those of focused antennas. Conversely, when a given antenna emits high-frequency radiation, it will have a near-field region larger than what would be implied by a lower frequency (i.e. longer wavelength). Additionally, a far-field region distance d_F must satisfy these two conditions: d_F ≫ D and d_F ≫ λ, where D is the largest physical linear dimension of the antenna and d_F is the far-field distance. The far-field distance is the distance from the transmitting antenna to the beginning of the Fraunhofer region, or far field. Transition zone The transition zone between these near and far field regions, extending over the distance from one to two wavelengths from the antenna, is the intermediate region in which both near-field and far-field effects are important. In this region, near-field behavior dies out and ceases to be important, leaving far-field effects as dominant interactions. Regions according to diffraction behavior Far-field diffraction As far as acoustic wave sources are concerned, if the source has a maximum overall dimension or aperture width (D) that is large compared to the wavelength λ, the far-field region is commonly taken to exist at distances where the Fresnel parameter S is larger than 1. For a beam focused at infinity, the far-field region is sometimes referred to as the Fraunhofer region. Other synonyms are far field, far zone, and radiation field. Any electromagnetic radiation consists of an electric field component E and a magnetic field component H. In the far field, the relationship between the electric field component E and the magnetic component H is that characteristic of any freely propagating wave, where E and H have equal magnitudes at any point in space (when measured in units in which c, the speed of light, equals 1). Near-field diffraction In contrast to the far field, the diffraction pattern in the near field typically differs significantly from that observed at infinity and varies with distance from the source. In the near field, the relationship between E and H becomes very complex. Also, unlike the far field, where electromagnetic waves are usually characterized by a single polarization type (horizontal, vertical, circular, or elliptical), all four polarization types can be present in the near field. The near field is a region in which there are strong inductive and capacitive effects from the currents and charges in the antenna that cause electromagnetic components that do not behave like far-field radiation. These effects decrease in power far more quickly with distance than do the far-field radiation effects.
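As a worked example of the Fraunhofer criterion given above, the following sketch computes d_F = 2D²/λ and checks the two additional far-field conditions d_F ≫ D and d_F ≫ λ. The 1 m dish at 10 GHz is an assumed example, and "much greater than" is interpreted here as a factor of ten, which is a rule-of-thumb choice rather than a value from the text.

```python
# Sketch: Fraunhofer distance d_F = 2 * D**2 / wavelength, with the two extra
# far-field conditions d_F >> D and d_F >> wavelength checked explicitly.
c = 299_792_458.0  # speed of light, m/s

def fraunhofer_distance(diameter_m: float, frequency_hz: float) -> float:
    wavelength = c / frequency_hz
    return 2 * diameter_m**2 / wavelength

# Assumed example: a 1 m dish antenna operating at 10 GHz.
D, f = 1.0, 10e9
lam = c / f
d_f = fraunhofer_distance(D, f)

print(f"wavelength          = {lam:.4f} m")
print(f"Fraunhofer distance = {d_f:.1f} m")
print(f"d_F >> D      ? {d_f > 10 * D}")      # 'much greater' taken as 10x (assumption)
print(f"d_F >> lambda ? {d_f > 10 * lam}")
```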
Non-propagating (or evanescent) fields extinguish very rapidly with distance, which makes their effects almost exclusively felt in the near-field region. Also, in the part of the near field closest to the antenna (called the reactive near field, see below), absorption of electromagnetic power in the region by a second device has effects that feed back to the transmitter, increasing the load on the transmitter that feeds the antenna by decreasing the antenna impedance that the transmitter "sees". Thus, the transmitter can sense when power is being absorbed in the closest near-field zone (by a second antenna or some other object) and is forced to supply extra power to its antenna, and to draw extra power from its own power supply, whereas if no power is being absorbed there, the transmitter does not have to supply extra power. Near-field characteristics The near field itself is further divided into the reactive near field and the radiative near field. The reactive and radiative near-field designations are also a function of wavelength (or distance). However, these boundary regions are a fraction of one wavelength within the near field. The outer boundary of the reactive near-field region is commonly considered to be a distance of 1/2π times the wavelength (i.e., λ/2π, or approximately 0.159λ) from the antenna surface. The reactive near field is also called the inductive near field. The radiative near field (also called the Fresnel region) covers the remainder of the near-field region, from this boundary out to the Fraunhofer distance. Reactive near field, or the nearest part of the near field In the reactive near field (very close to the antenna), the relationship between the strengths of the E and H fields is often too complicated to easily predict, and difficult to measure. Either field component (E or H) may dominate at one point, and the opposite relationship may dominate at a point only a short distance away. This makes finding the true power density in this region problematic. This is because, to calculate power, not only must E and H both be measured, but the phase relationship between E and H, as well as the angle between the two vectors, must also be known at every point of space. In this reactive region, not only is an electromagnetic wave being radiated outward into far space but there is a reactive component to the electromagnetic field, meaning that the strength, direction, and phase of the electric and magnetic fields around the antenna are sensitive to EM absorption and re-emission in this region, and respond to it. In contrast, absorption far from the antenna has negligible effect on the fields near the antenna, and causes no back-reaction in the transmitter. Very close to the antenna, in the reactive region, a certain amount of energy, if not absorbed by a receiver, is held back and stored very near the antenna surface. This energy is carried back and forth from the antenna to the reactive near field by electromagnetic radiation of the type that slowly changes electrostatic and magnetostatic effects. For example, current flowing in the antenna creates a purely magnetic component in the near field, which then collapses as the antenna current begins to reverse, causing transfer of the field's magnetic energy back to electrons in the antenna as the changing magnetic field causes a self-inductive effect on the antenna that generated it. This returns energy to the antenna in a regenerative way, so that it is not lost.
A similar process happens as electric charge builds up in one section of the antenna under the pressure of the signal voltage, and causes a local electric field around that section of antenna, due to the antenna's self-capacitance. When the signal reverses so that charge is allowed to flow away from this region again, the built-up electric field assists in pushing electrons back in the new direction of their flow, as with the discharge of any unipolar capacitor. This again transfers energy back to the antenna current. Because of this energy storage and return effect, if either of the inductive or electrostatic effects in the reactive near field transfers any field energy to electrons in a different (nearby) conductor, then this energy is lost to the primary antenna. When this happens, an extra drain is seen on the transmitter, resulting from the reactive near-field energy that is not returned. This effect shows up as a different impedance in the antenna, as seen by the transmitter. The reactive component of the near field can give ambiguous or undetermined results when attempting measurements in this region. In other regions, the power density is inversely proportional to the square of the distance from the antenna. In the vicinity very close to the antenna, however, the energy level can rise dramatically with only a small decrease in distance toward the antenna. This energy can adversely affect both humans and measurement equipment because of the high powers involved. Radiative near field (Fresnel region), or farthest part of the near field The radiative near field (sometimes called the Fresnel region) does not contain reactive field components from the source antenna, since it is far enough from the antenna that back-coupling of the fields becomes out of phase with the antenna signal, and thus cannot efficiently return inductive or capacitive energy from antenna currents or charges. The energy in the radiative near field is thus all radiant energy, although its mixture of magnetic and electric components is still different from the far field. Further out into the radiative near field (one half wavelength to one wavelength from the source), the E and H field relationship is more predictable, but the E to H relationship is still complex. However, since the radiative near field is still part of the near field, there is potential for unanticipated (or adverse) conditions. For example, metal objects such as steel beams can act as antennas by inductively receiving and then "re-radiating" some of the energy in the radiative near field, forming a new radiating surface to consider. Depending on antenna characteristics and frequencies, such coupling may be far more efficient than simple antenna reception in the yet-more-distant far field, so far more power may be transferred to the secondary "antenna" in this region than would be the case with a more distant antenna. When a secondary radiating antenna surface is thus activated, it then creates its own near-field regions, but the same conditions apply to them. Compared to the far field The near field is remarkable for reproducing classical electromagnetic induction and electric charge effects on the EM field, effects which "die out" with increasing distance from the antenna: the magnetic field component that is in phase quadrature to the electric field is proportional to the inverse cube of the distance (1/r³) and the electric field strength is proportional to the inverse square of the distance (1/r²).
This fall-off is far more rapid than that of the classical radiated far-field (E and B fields), which are proportional to the simple inverse distance (1/r). Typically, near-field effects are not important farther away than a few wavelengths from the antenna. More-distant near-field effects also involve energy transfer effects that couple directly to receivers near the antenna, affecting the power output of the transmitter if they do couple, but not otherwise. In a sense, the near field offers energy that is available to a receiver if the energy is tapped, and this is sensed by the transmitter by means of responding to electromagnetic near fields emanating from the receiver. Again, this is the same principle that applies in induction-coupled devices, such as a transformer, which draws more power at the primary circuit if power is drawn from the secondary circuit. This is different from the far field, which constantly draws the same energy from the transmitter, whether it is immediately received or not. The amplitude of other components (non-radiative/non-dipole) of the electromagnetic field close to the antenna may be quite powerful, but, because of their more rapid fall-off with distance than 1/r behavior, they do not radiate energy to infinite distances. Instead, their energies remain trapped in the region near the antenna, not drawing power from the transmitter unless they excite a receiver in the area close to the antenna. Thus, the near fields only transfer energy to very nearby receivers, and, when they do, the result is felt as an extra power draw in the transmitter. As an example of such an effect, power is transferred across space in a common transformer or metal detector by means of near-field phenomena (in this case inductive coupling), in a strictly short-range effect (i.e., the range within one wavelength of the signal). Classical EM modelling Solving Maxwell's equations for the electric and magnetic fields for a localized oscillating source, such as an antenna, surrounded by a homogeneous material (typically vacuum or air), yields fields that, far away, decay in proportion to 1/r, where r is the distance from the source. These are the radiating fields, and the region where r is large enough for these fields to dominate is the far field. In general, the fields of a source in a homogeneous isotropic medium can be written as a multipole expansion. The terms in this expansion are spherical harmonics (which give the angular dependence) multiplied by spherical Bessel functions (which give the radial dependence). For large r, the spherical Bessel functions decay as 1/r, giving the radiated field above. As one gets closer and closer to the source (smaller r), approaching the near field, other powers of r become significant. The next term that becomes significant is proportional to 1/r² and is sometimes called the induction term. It can be thought of as the primarily magnetic energy stored in the field, and returned to the antenna in every half-cycle, through self-induction. For even smaller r, terms proportional to 1/r³ become significant; this is sometimes called the electrostatic field term and can be thought of as stemming from the electrical charge in the antenna element. Very close to the source, the multipole expansion is less useful (too many terms are required for an accurate description of the fields). Rather, in the near field, it is sometimes useful to express the contributions as a sum of radiating fields combined with evanescent fields, where the latter decay exponentially with r.
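Suppressing the angular dependence and phase factors, the radial structure just described can be summarised schematically as a sum of the three powers of 1/r discussed above; this is a sketch of the structure only, not the full multipole solution, and the coefficients A1, A2, A3 are placeholders rather than quantities defined in the text:

$$E(r) \;\sim\; \frac{A_1}{r} \;+\; \frac{A_2}{r^2} \;+\; \frac{A_3}{r^3}$$

Here the 1/r term is the radiating contribution that survives in the far field, the 1/r² term corresponds to the induction term, and the 1/r³ term to the quasi-electrostatic contribution.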
And in the source itself, or as soon as one enters a region of inhomogeneous materials, the multipole expansion is no longer valid and the full solution of Maxwell's equations is generally required. Antennas If an oscillating electrical current is applied to a conductive structure of some type, electric and magnetic fields will appear in space about that structure. If those fields are lost to a propagating space wave, the structure is often termed an antenna. Such an antenna can be an assemblage of conductors in space typical of radio devices, or it can be an aperture with a given current distribution radiating into space, as is typical of microwave or optical devices. The actual values of the fields in space about the antenna are usually quite complex and can vary with distance from the antenna in various ways. However, in many practical applications, one is interested only in effects where the distance from the antenna to the observer is very much greater than the largest dimension of the transmitting antenna. The equations describing the fields created about the antenna can be simplified by assuming a large separation and dropping all terms that provide only minor contributions to the final field. These simplified distributions have been termed the "far field" and usually have the property that the angular distribution of energy does not change with distance, although the energy levels still vary with distance and time. Such an angular energy distribution is usually termed an antenna pattern. Note that, by the principle of reciprocity, the pattern observed when a particular antenna is transmitting is identical to the pattern measured when the same antenna is used for reception. Typically one finds simple relations describing the antenna far-field patterns, often involving trigonometric functions or at worst Fourier or Hankel transform relationships between the antenna current distributions and the observed far-field patterns. While far-field simplifications are very useful in engineering calculations, this does not mean the near-field functions cannot be calculated, especially using modern computer techniques. An examination of how the near fields form about an antenna structure can give great insight into the operations of such devices. Impedance The electromagnetic field in the far-field region of an antenna is independent of the details of the near field and the nature of the antenna. The wave impedance is the ratio of the strength of the electric and magnetic fields, which in the far field are in phase with each other. Thus, the far-field "impedance of free space" is resistive and is given by Z₀ = μ₀c₀ ≈ 376.7 Ω. With the usual approximation for the speed of light in free space, c₀ ≈ 3 × 10⁸ m/s, this gives the frequently used expression Z₀ ≈ 120π Ω ≈ 377 Ω. The electromagnetic field in the near-field region of an electrically small coil antenna is predominantly magnetic. For small values of r/λ the impedance of a magnetic loop is low and inductive, at short range being asymptotic to |Z| ≈ Z₀ · 2πr/λ. The electromagnetic field in the near-field region of an electrically short rod antenna is predominantly electric. For small values of r/λ the impedance is high and capacitive, at short range being asymptotic to |Z| ≈ Z₀ · λ/(2πr). In both cases, the wave impedance converges on that of free space as the range approaches the far field.
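A minimal numeric check of the far-field wave impedance quoted above, using only the vacuum constants; the near-field loop and rod asymptotes are not evaluated here, and the classical value of μ₀ is used as a convenient approximation.

```python
# Sketch: far-field wave impedance of free space, Z0 = mu0 * c0, and the common
# 120*pi approximation.
import math

mu0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m (classical approximate value)
c0 = 299_792_458.0            # speed of light in vacuum, m/s

Z0_exact = mu0 * c0           # ~376.7 ohms
Z0_approx = 120 * math.pi     # ~377.0 ohms, the frequently used expression
print(f"Z0 = mu0 * c0 = {Z0_exact:.2f} ohm")
print(f"120 * pi      = {Z0_approx:.2f} ohm")
```

The two values agree to within a fraction of an ohm, which is why 377 Ω (or 120π Ω) is the figure usually quoted in practice.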
Physical sciences
Electromagnetic radiation
Physics
271860
https://en.wikipedia.org/wiki/Petroleum%20jelly
Petroleum jelly
Petroleum jelly, petrolatum, white petrolatum, soft paraffin, or multi-hydrocarbon, CAS number 8009-03-8, is a semi-solid mixture of hydrocarbons (with carbon numbers mainly higher than 25), originally promoted as a topical ointment for its healing properties. Vaseline has been an American brand of petroleum jelly since 1870. After petroleum jelly became a medicine-chest staple, consumers began to use it for cosmetic purposes and for many ailments including toenail fungus, genital rashes (non-STI), nosebleeds, diaper rash, and common colds. Its folkloric medicinal value as a "cure-all" has since been limited by a better scientific understanding of appropriate and inappropriate uses. It is recognized by the U.S. Food and Drug Administration (FDA) as an approved over-the-counter (OTC) skin protectant and remains widely used in cosmetic skin care, where it is often loosely referred to as mineral oil. History Marco Polo in 1273 described the exportation of Baku oil by hundreds of camels and ships for burning and as an ointment for treating mange. Native Americans discovered the use of petroleum jelly for protecting and healing skin. Sophisticated oil pits had been built as early as 1415–1450 in Western Pennsylvania. In 1859, workers operating the United States' first oil rigs noticed a paraffin-like material forming on rigs in the course of investigating malfunctions. Believing the substance hastened healing, the workers used the jelly on cuts and burns. Robert Chesebrough, a young chemist whose previous work of distilling fuel from the oil of sperm whales had been rendered obsolete by petroleum, went to Titusville, Pennsylvania, to see what new materials had commercial potential. Chesebrough took the unrefined green-to-gold-colored "rod wax", as the drillers called it, back to his laboratory to refine it and explore potential uses. He discovered that by distilling the lighter, thinner oil products from the rod wax, he could create a light-colored gel. Chesebrough patented the process of making petroleum jelly in 1872. The process involved vacuum distillation of the crude material followed by filtration of the still residue through bone char. Chesebrough traveled around New York demonstrating the product to encourage sales: he would burn his skin with acid or an open flame, spread the ointment on the injury, and point to past injuries that had healed, he said, thanks to his miracle product. He opened his first factory in 1870 in Brooklyn using the name Vaseline. Physical properties Petroleum jelly is a mixture of hydrocarbons, with a melting point that depends on the exact proportions. The melting point is typically between . It is flammable only when heated to liquid; then the fumes will light, not the liquid itself, so a wick material is needed to ignite petroleum jelly. It is colorless (or of a pale yellow color when not highly distilled), translucent, and devoid of taste and smell when pure. It does not oxidize on exposure to the air and is not readily acted on by chemical reagents. It is insoluble in water. It is soluble in dichloromethane, chloroform, benzene, diethyl ether, carbon disulfide and turpentine. Petroleum jelly is slightly soluble in alcohol. It acts as a plasticizer on polypropylene (PP), but is compatible with a wide range of materials and chemicals. It is a semi-solid, in that it holds its shape indefinitely like a solid, but it can be forced to take the shape of its container without breaking apart, like a liquid, though it does not flow on its own.
At room temperature, it has 20.9% solid fat content. Its partially crystalline stacks of lamellar sheets, which immobilize the liquid portion, make up its microstructure. In general, only 7–13% of it is made up of high molecular weight paraffins, 30–45% of smaller paraffins, and 48–60% of small paraffins. Depending on the specific application of petroleum jelly, it may be USP, B.P., or Ph. Eur. grade. This pertains to the processing and handling of the petroleum jelly so it is suitable for medicinal and personal-care applications. Uses Petroleum jelly has lubricating and coating properties, including use on dry lips and dry skin. Below are some examples of the uses of petroleum jelly. Medical treatment Vaseline brand First Aid Petroleum Jelly, or carbolated petroleum jelly containing phenol to give the jelly additional antibacterial effect, has been discontinued. During World War II, a variety of petroleum jelly called red veterinary petrolatum, or Red Vet Pet for short, was often included in life raft survival kits. Acting as a sunscreen, it provides protection against ultraviolet rays. The American Academy of Dermatology recommends keeping skin injuries moist with petroleum jelly to reduce scarring. A verified medicinal use is to protect and prevent moisture loss of the skin of a patient in the initial post-operative period following laser skin resurfacing. Petroleum jelly is used extensively by otorhinolaryngologists—ear, nose, and throat doctors—for nasal moisture and epistaxis treatment, and to combat nasal crusting. Large studies have found petroleum jelly applied to the nose for short durations to have no significant side effects. Historically, it was also consumed for internal use and even promoted as "Vaseline confection". Skin and hair care Most petroleum jelly today is used as an ingredient in skin lotions and cosmetics, providing various types of skin care and protection by minimizing friction or reducing moisture loss, or by functioning as a grooming aid (e.g., pomade). It is also used for treating dry scalp and dandruff. Although long known as just an occlusive, recent studies show that it is actually able to penetrate into the stratum corneum and helps in better absorption of other cosmetic products. Applying a significant amount of petroleum jelly onto one's face before bed is known as "slugging". Preventing moisture loss By reducing the loss of moisture via transepidermal water loss, petroleum jelly can prevent chapped hands and lips, and soften nail cuticles. This property is exploited to provide heat insulation: petroleum jelly can be used to keep swimmers warm in water when training, or during channel crossings or long ocean swims. It can prevent chilling of the face due to evaporation of skin moisture during cold weather outdoor sports. Hair grooming In the first part of the twentieth century, petroleum jelly, either pure or as an ingredient, was also popular as a hair pomade. When used in a 50/50 mixture with pure beeswax, it makes an effective moustache wax. Skin lubrication Petroleum jelly can be used to reduce the friction between skin and clothing during various sport activities, for example to prevent chafing of the seat region of cyclists, or the nipples of long distance runners wearing loose T-shirts, and is commonly used in the groin area of wrestlers and footballers. 
Petroleum jelly is commonly used as a personal lubricant, because it does not dry out like water-based lubricants, and has a distinctive "feel", different from that of K-Y and related methylcellulose products. However, it is not recommended for use with latex condoms during sexual activity, as it increases the chance of rupture. In addition, petroleum jelly is difficult for the body to break down naturally, and may cause vaginal health problems when used for intercourse. Product care and protection Coating Petroleum jelly can be used to coat corrosion-prone items such as metallic trinkets, non-stainless steel blades, and gun barrels prior to storage as it serves as an excellent and inexpensive water repellent. It is used as an environmentally friendly underwater antifouling coating for motor boats and sailing yachts. It was recommended in the Porsche owner's manual as a preservative for light alloy anodized Fuchs wheels to protect them against corrosion from road salts and brake dust. Finishing It can be used to finish and protect wood, much like a mineral oil finish. It is used to condition and protect smooth leather products like bicycle saddles, boots, motorcycle clothing, and to put a shine on patent leather shoes (when applied in a thin coat and then gently buffed off). Lubrication Petroleum jelly can be used to lubricate zippers and slide rules. It was also recommended by Porsche in maintenance training documentation for lubrication (after cleaning) of "Weatherstrips on Doors, Hood, Tailgate, Sun Roof". It is used in bullet lubricant compounds. Industrial production processes Petroleum jelly is a useful material when incorporated into candle wax formulas. It softens the overall blend, allows the candle to incorporate additional fragrance oil, and facilitates adhesion to the sidewall of the glass. Petroleum jelly is used to moisten nondrying modelling clay such as plasticine, as part of a mix of hydrocarbons including those with greater (paraffin wax) and lesser (mineral oil) molecular weights. It is used as a tack reducer additive to printing inks to reduce paper lint "picking" from uncalendered paper stocks. It can be used as a release agent for plaster molds and castings. It is used in the leather industry as a waterproofing cream. Other Explosives Petroleum jelly can be mixed with a high proportion of strong inorganic chlorates because it acts as a plasticizer and a fuel source. An example of this is Cheddite C, which consists of a 9:1 ratio of KClO3 to petroleum jelly. This mixture is unable to detonate without the use of a blasting cap. It is also used as a stabiliser in the manufacture of the propellant Cordite. Mechanical, barrier functions Petroleum jelly can be used to fill copper or fibre-optic cables using plastic insulation to prevent the ingress of water, see icky-pick. Petroleum jelly can be used to coat the inner walls of terrariums to prevent animals from crawling out to escape. A stripe of petroleum jelly can be used to prevent the spread of a liquid (retain or confine a liquid to a specific area). For example, it can be applied close to the hairline when using a home hair dye kit to prevent the hair dye from irritating or staining the skin. It is also used to prevent diaper rash. Petroleum jelly is sometimes used to protect the terminals on batteries. However, automobile batteries require a silicone-based battery grease because it is less likely to melt and thus offers better protection.
Surface cleansing Petroleum jelly is used to gently clean a variety of surfaces, ranging from makeup removal from faces to tar stain removal from leather. Pet care Petroleum jelly is used to moisturize the paws of dogs. It is a common ingredient in hairball remedies for domestic cats. Sports Some goalkeepers in association football put petroleum jelly on their gloves to make them stickier. Health Petroleum jelly contains mineral oil aromatic hydrocarbons (MOAH). Many MOAH, mainly polycyclic aromatic hydrocarbons (PAH), are considered carcinogenic. The content of both MOAH and PAH in petroleum jelly products varies. The EU limits PAH content in cosmetics to 0.005%. The risks of PAH exposure through cosmetics have not been comprehensively studied, but food products with low levels (<3%) are not considered carcinogenic (by the EU). A 2012 scientific opinion by the European Food Safety Authority stated that mineral oil aromatic hydrocarbons (MOAH) and polyaromatics were potentially carcinogenic and may present a health risk. In 2015, German consumer watchdog Stiftung Warentest analyzed cosmetics containing mineral oils and found significant concentrations of MOAH and polyaromatics in them. Vaseline products contained the most MOAH of all tested cosmetics (up to 9%). Based on the 2015 results, Stiftung Warentest warned consumers not to use Vaseline or any product that is based on mineral oils for lip care. A study published in 2017 found MOAH levels of up to 1% in petroleum jelly and likewise less than 1% in petroleum jelly-based beauty products.
Physical sciences
Hydrocarbons
Chemistry
271890
https://en.wikipedia.org/wiki/Ball%20python
Ball python
The ball python (Python regius), also called the royal python, is a python species native to West and Central Africa, where it lives in grasslands, shrublands and open forests. This nonvenomous constrictor is the smallest of the African pythons, growing to a maximum length of . The name "ball python" refers to its tendency to curl into a ball when stressed or frightened. Taxonomy Boa regia was the scientific name proposed by George Shaw in 1802 for a pale variegated python from an indistinct place in Africa. The generic name Python was proposed by François Marie Daudin in 1803 for non-venomous flecked snakes. Between 1830 and 1849, several generic names were proposed for the same zoological specimen described by Shaw, including Enygrus by Johann Georg Wagler, Cenchris and Hertulia by John Edward Gray. Gray also described four specimens that were collected in Gambia and were preserved in spirits and fluid. Description The ball python is black, or albino and dark brown with light brown blotches on the back and sides. Its white or cream belly is scattered with black markings. It is a stocky snake with a relatively small head and smooth scales. It reaches a maximum adult length of . Males typically measure eight to ten subcaudal scales, and females typically measure two to four subcaudal scales. Females reach an average snout-to-vent length of , a long jaw, an long tail and a maximum weight of . Males are smaller with an average snout-to-vent length of , a long jaw, an long tail and a maximum weight of . Both sexes have pelvic spurs on both sides of the vent. During copulation, males use these spurs for gripping females. Males tend to have larger spurs, and sex is best determined by manual eversion of the male hemipenes or inserting a probe into the cloaca to check the presence of an inverted hemipenis. Distribution and habitat The ball python is native to west Sub Saharan Africa from Senegal through Cameroon to Sudan and Uganda. It prefers grasslands, savannas, and sparsely wooded areas. Behavior and ecology Ball pythons are typically nocturnal or crepuscular, meaning that they are active during dusk, dawn, and/or nighttime. This species is known for its defense strategy that involves coiling into a tight ball when threatened, with its head and neck tucked away in the middle. This defense behavior is typically employed in lieu of biting, which makes this species easy for humans to handle and has contributed to their popularity as a pet. In the wild, ball pythons favor mammal burrows and other underground hiding places, where they also aestivate. Males tend to display more semi-arboreal behaviors, whilst females tend towards terrestrial behaviors. Diet The diet of the ball python in the wild consists mostly of small mammals and birds. Young ball pythons of less than prey foremost on small birds. Ball pythons longer than prey foremost on small mammals. Males prey more frequently on birds, and females more frequently on mammals. Rodents make up a large percentage of the diet; Gambian pouched rats, black rats, rufous-nosed rats, shaggy rats, and striped grass mice are among the species consumed. Reproduction Females are oviparous and lay three to 11 rather large, leathery eggs. The eggs hatch after 55 to 60 days. Young male pythons reach sexual maturity at 11–18 months, and females at 20–36 months. Age is only one factor in determining sexual maturity and the ability to breed; weight is the second factor. 
Males breed at or more, but in captivity are often not bred until they are , although in captivity, some males have been known to begin breeding at . Females breed in the wild at weights as low as though or more in weight is most common; in captivity, breeders generally wait until they are no less than . Parental care of the eggs ends once they hatch, and the female leaves the offspring to fend for themselves. Parthenogenetic reproduction was demonstrated in a pet ball python. A genetic comparison of a mother and her early-stage embryos demonstrated the parthenogenetic origin of the latter. Threats The ball python is listed as Near Threatened on the IUCN Red List; it experiences a high level of exploitation and the population is believed to be in decline in most of West Africa. The ball python is primarily threatened by poaching for the international exotic pet trade. It is also hunted for its skin, meat and use in traditional medicine. Other threats include habitat loss as a result of intensified agriculture and pesticide use. Rural hunters in Togo collect gravid females and egg clutches, which they sell to snake ranches. In 2019 alone, 58 interviewed hunters had collected 3,000 live ball pythons and 5,000 eggs. In captivity Ball pythons are the most popular pet snake and the second most popular pet reptile after the bearded dragon. According to the IUCN Red List, while captive bred animals are widely available in the pet trade, capture of wild specimens for sale continues to cause significant damage to wild populations. Wild-caught specimens have greater difficulty adapting to a captive environment, which can result in refusal to feed, and they generally carry internal or external parasites. This species can do quite well in captivity, regularly living for 15–30 years with good care. The oldest recorded ball python in captivity is 62 years, 59 of those at the Saint Louis Zoo. Breeding Captive ball pythons are often bred for specific patterns that do not occur in the wild, called "morphs." Breeders are continuously creating new designer morphs, and over 7,500 different morphs currently exist. Most morphs are considered solely cosmetic with no harm or benefit to the individual animal. However, the "spider" morph gene has been linked to neurological disease, typically involving symptoms such as head tremors and lack of coordination that are collectively referred to as "wobble syndrome." Due to the ethical concerns associated with intentionally breeding a color pattern linked to genetic disease, the International Herpetological Society banned the sale of spider morphs at their events beginning in 2018. In culture The ball python is particularly revered by the Igbo people in southeastern Nigeria, who consider it symbolic of the earth, being an animal that travels so close to the ground. Even Christian Igbos treat ball pythons with great care whenever they come across one in a village or on someone's property; they either let them roam or pick them up gently and return them to a forest or field away from houses. If one is accidentally killed, many communities on Igbo land still build a coffin for the snake's remains and give it a short funeral. In northwestern Ghana, there is a taboo towards pythons as people consider them a savior and cannot hurt or eat them. According to folklore a python once helped them flee from their enemies by transforming into a log to allow them to cross a river.
Biology and health sciences
Snakes
Animals
272134
https://en.wikipedia.org/wiki/Survey%20methodology
Survey methodology
Survey methodology is "the study of survey methods". As a field of applied statistics concentrating on human-research surveys, survey methodology studies the sampling of individual units from a population and associated techniques of survey data collection, such as questionnaire construction and methods for improving the number and accuracy of responses to surveys. Survey methodology targets instruments or procedures that ask one or more questions that may or may not be answered. Researchers carry out statistical surveys with a view towards making statistical inferences about the population being studied; such inferences depend strongly on the survey questions used. Polls about public opinion, public-health surveys, market-research surveys, government surveys and censuses all exemplify quantitative research that uses survey methodology to answer questions about a population. Although censuses do not include a "sample", they do include other aspects of survey methodology, like questionnaires, interviewers, and non-response follow-up techniques. Surveys provide important information for all kinds of public-information and research fields, such as marketing research, psychology, health-care provision and sociology. Overview A single survey is made of at least a sample (or full population in the case of a census), a method of data collection (e.g., a questionnaire) and individual questions or items that become data that can be analyzed statistically. A single survey may focus on different types of topics such as preferences (e.g., for a presidential candidate), opinions (e.g., should abortion be legal?), behavior (smoking and alcohol use), or factual information (e.g., income), depending on its purpose. Since survey research is almost always based on a sample of the population, the success of the research is dependent on the representativeness of the sample with respect to a target population of interest to the researcher. That target population can range from the general population of a given country to specific groups of people within that country, to a membership list of a professional organization, or list of students enrolled in a school system (see also sampling (statistics) and survey sampling). The persons replying to a survey are called respondents, and depending on the questions asked their answers may represent themselves as individuals, their households, employers, or other organization they represent. Survey methodology as a scientific field seeks to identify principles about the sample design, data collection instruments, statistical adjustment of data, and data processing, and final data analysis that can create systematic and random survey errors. Survey errors are sometimes analyzed in connection with survey cost. Cost constraints are sometimes framed as improving quality within cost constraints, or alternatively, reducing costs for a fixed level of quality. Survey methodology is both a scientific field and a profession, meaning that some professionals in the field focus on survey errors empirically and others design surveys to reduce them. For survey designers, the task involves making a large set of decisions about thousands of individual features of a survey in order to improve it. The most important methodological challenges of a survey methodologist include making decisions on how to: Identify and select potential sample members. Contact sampled individuals and collect data from those who are hard to reach (or reluctant to respond) Evaluate and test questions. 
Select the mode for posing questions and collecting responses. Train and supervise interviewers (if they are involved). Check data files for accuracy and internal consistency. Adjust survey estimates to correct for identified errors. Complement survey data with new data sources (if appropriate) Selecting samples The sample is chosen from the sampling frame, which consists of a list of all members of the population of interest. The goal of a survey is not to describe the sample, but the larger population. This generalizing ability is dependent on the representativeness of the sample, as stated above. Each member of the population is termed an element. There are frequent difficulties one encounters while choosing a representative sample. One common error that results is selection bias. Selection bias results when the procedures used to select a sample result in over representation or under representation of some significant aspect of the population. For instance, if the population of interest consists of 75% females, and 25% males, and the sample consists of 40% females and 60% males, females are under represented while males are overrepresented. In order to minimize selection biases, stratified random sampling is often used. This is when the population is divided into sub-populations called strata, and random samples are drawn from each of the strata, or elements are drawn for the sample on a proportional basis. Modes of data collection There are several ways of administering a survey. The choice between administration modes is influenced by several factors, including costs, coverage of the target population, flexibility of asking questions, respondents' willingness to participate and response accuracy. Different methods create mode effects that change how respondents answer, and different methods have different advantages. The most common modes of administration can be summarized as: Telephone Mail (post) Online surveys Mobile surveys Personal in-home surveys Personal mall or street intercept survey Mixed modes Research designs There are several different designs, or overall structures, that can be used in survey research. The three general types are cross-sectional, successive independent samples, and longitudinal studies. Cross-sectional studies In cross-sectional studies, a sample (or samples) is drawn from the relevant population and studied once. A cross-sectional study describes characteristics of that population at one time, but cannot give any insight as to the causes of population characteristics because it is a predictive, correlational design. Successive independent samples studies A successive independent samples design draws multiple random samples from a population at one or more times. This design can study changes within a population, but not changes within individuals because the same individuals are not surveyed more than once. Such studies cannot, therefore, identify the causes of change over time necessarily. For successive independent samples designs to be effective, the samples must be drawn from the same population, and must be equally representative of it. If the samples are not comparable, the changes between samples may be due to demographic characteristics rather than time. In addition, the questions must be asked in the same way so that responses can be compared directly. Longitudinal studies Longitudinal studies take measure of the same random sample at multiple time points. 
Unlike with a successive independent samples design, this design measures the differences in individual participants' responses over time. This means that a researcher can potentially assess the reasons for response changes by assessing the differences in respondents' experiences. Longitudinal studies are the easiest way to assess the effect of a naturally occurring event, such as divorce that cannot be tested experimentally. However, longitudinal studies are both expensive and difficult to do. It is harder to find a sample that will commit to a months- or years-long study than a 15-minute interview, and participants frequently leave the study before the final assessment. In addition, such studies sometimes require data collection to be confidential or anonymous, which creates additional difficulty in linking participants' responses over time. One potential solution is the use of a self-generated identification code (SGIC). These codes usually are created from elements like 'month of birth' and 'first letter of the mother's middle name.' Some recent anonymous SGIC approaches have also attempted to minimize use of personalized data even further, instead using questions like 'name of your first pet. Depending on the approach used, the ability to match some portion of the sample can be lost. In addition, the overall attrition of participants is not random, so samples can become less representative with successive assessments. To account for this, a researcher can compare the respondents who left the survey to those that did not, to see if they are statistically different populations. Respondents may also try to be self-consistent in spite of changes to survey answers. Questionnaires Questionnaires are the most commonly used tool in survey research. However, the results of a particular survey are worthless if the questionnaire is written inadequately. Questionnaires should produce valid and reliable demographic variable measures and should yield valid and reliable individual disparities that self-report scales generate. Questionnaires as tools A variable category that is often measured in survey research are demographic variables, which are used to depict the characteristics of the people surveyed in the sample. Demographic variables include such measures as ethnicity, socioeconomic status, race, and age. Surveys often assess the preferences and attitudes of individuals, and many employ self-report scales to measure people's opinions and judgements about different items presented on a scale. Self-report scales are also used to examine the disparities among people on scale items. These self-report scales, which are usually presented in questionnaire form, are one of the most used instruments in psychology, and thus it is important that the measures be constructed carefully, while also being reliable and valid. Reliability and validity of self-report measures Reliable measures of self-report are defined by their consistency. Thus, a reliable self-report measure produces consistent results every time it is executed. A test's reliability can be measured a few ways. First, one can calculate a test-retest reliability. A test-retest reliability entails conducting the same questionnaire to a large sample at two different times. For the questionnaire to be considered reliable, people in the sample do not have to score identically on each test, but rather their position in the score distribution should be similar for both the test and the retest. 
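As a rough illustration of the test-retest idea described above, the following minimal Python sketch computes a rank-order (Spearman) correlation between two administrations of the same scale, reflecting the point that respondents' positions in the score distribution, rather than their exact scores, should be similar. The score lists are hypothetical; real analyses would typically use a statistics package.

```python
# Minimal sketch: test-retest reliability as rank-order consistency.
# The two score lists are hypothetical; in practice they would come from
# the same respondents completing the same questionnaire at two times.

def ranks(scores):
    """Return the rank of each score (1 = lowest), averaging ties."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    r = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

test   = [12, 19, 25, 31, 38, 44]   # scores at time 1 (hypothetical)
retest = [14, 18, 27, 30, 41, 42]   # scores at time 2 (hypothetical)
print(round(spearman(test, retest), 3))  # close to 1.0 -> similar rank order
```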
Self-report measures will generally be more reliable when they have many items measuring a construct. Furthermore, measurements will be more reliable when the factor being measured has greater variability among the individuals in the sample that are being tested. Finally, there will be greater reliability when instructions for the completion of the questionnaire are clear and when there are limited distractions in the testing environment. Contrastingly, a questionnaire is valid if what it measures is what it had originally planned to measure. Construct validity of a measure is the degree to which it measures the theoretical construct that it was originally supposed to measure. Composing a questionnaire Six steps can be employed to construct a questionnaire that will produce reliable and valid results. First, one must decide what kind of information should be collected. Second, one must decide how to conduct the questionnaire. Thirdly, one must construct a first draft of the questionnaire. Fourth, the questionnaire should be revised. Next, the questionnaire should be pretested. Finally, the questionnaire should be edited and the procedures for its use should be specified. Guidelines for the effective wording of questions The way that a question is phrased can have a large impact on how a research participant will answer the question. Thus, survey researchers must be conscious of their wording when writing survey questions. It is important for researchers to keep in mind that different individuals, cultures, and subcultures can interpret certain words and phrases differently from one another. There are two different types of questions that survey researchers use when writing a questionnaire: free response questions and closed questions. Free response questions are open-ended, whereas closed questions are usually multiple choice. Free response questions are beneficial because they allow the responder greater flexibility, but they are also very difficult to record and score, requiring extensive coding. Contrastingly, closed questions can be scored and coded more easily, but they diminish expressivity and spontaneity of the responder. In general, the vocabulary of the questions should be very simple and direct, and most should be less than twenty words. Each question should be edited for "readability" and should avoid leading or loaded questions. Finally, if multiple items are being used to measure one construct, the wording of some of the items should be worded in the opposite direction to evade response bias. A respondent's answer to an open-ended question can be coded into a response scale afterwards, or analysed using more qualitative methods. Order of questions Survey researchers should carefully construct the order of questions in a questionnaire. For questionnaires that are self-administered, the most interesting questions should be at the beginning of the questionnaire to catch the respondent's attention, while demographic questions should be near the end. Contrastingly, if a survey is being administered over the telephone or in person, demographic questions should be administered at the beginning of the interview to boost the respondent's confidence. Another reason to be mindful of question order may cause a survey response effect in which one question may affect how people respond to subsequent questions as a result of priming. Translating a questionnaire Translation is crucial to collecting comparable survey data. 
Questionnaires are translated from a source language into one or more target languages, such as translating from English into Spanish and German. A team approach is recommended in the translation process to include translators, subject-matter experts and persons helpful to the process. Survey translation best practice includes parallel translation, team discussions, and pretesting with real-life people. It is not a mechanical word placement process. The model TRAPD - Translation, Review, Adjudication, Pretest, and Documentation - originally developed for the European Social Surveys, is now "widely used in the global survey research community, although not always labeled as such or implemented in its complete form". For example, sociolinguistics provides a theoretical framework for questionnaire translation and complements TRAPD. This approach states that for the questionnaire translation to achieve the equivalent communicative effect as the source language, the translation must be linguistically appropriate while incorporating the social practices and cultural norms of the target language. Nonresponse reduction The following ways have been recommended for reducing nonresponse in telephone and face-to-face surveys: Advance letter. A short letter is sent in advance to inform the sampled respondents about the upcoming survey. The style of the letter should be personalized but not overdone. First, it announces that a phone call will be made, or an interviewer wants to make an appointment to do the survey face-to-face. Second, the research topic will be described. Last, it allows both an expression of the surveyor's appreciation of cooperation and an opening to ask questions on the survey. Training. The interviewers are thoroughly trained in how to ask respondents questions, how to work with computers and making schedules for callbacks to respondents who were not reached. Short introduction. The interviewer should always start with a short introduction about him or herself. She/he should give her name, the institute she is working for, the length of the interview and goal of the interview. Also it can be useful to make clear that you are not selling anything: this has been shown to lead to a slightly higher responding rate. Respondent-friendly survey questionnaire. The questions asked must be clear, non-offensive and easy to respond to for the subjects under study. Brevity is also often cited as increasing response rate. A 1996 literature review found mixed evidence to support this claim for both written and verbal surveys, concluding that other factors may often be more important. A 2010 study looking at 100,000 online surveys found response rate dropped by about 3% at 10 questions and about 6% at 20 questions, with drop-off slowing (for example, only 10% reduction at 40 questions). Other studies showed that quality of response degraded toward the end of long surveys. Some researchers have also discussed the recipient's role or profession as a potential factor affecting how nonresponse is managed. For example, faxes are not commonly used to distribute surveys, but in a recent study were sometimes preferred by pharmacists, since they frequently receive faxed prescriptions at work but may not always have access to a generally-addressed piece of mail. Interviewer effects Survey methodologists have devoted much effort to determining the extent to which interviewee responses are affected by physical characteristics of the interviewer. 
The main interviewer traits that have been demonstrated to influence survey responses are race, gender, and relative body weight (BMI). These interviewer effects are particularly operant when questions are related to the interviewer trait. Hence, the race of the interviewer has been shown to affect responses to measures regarding racial attitudes, the interviewer's sex to affect responses to questions involving gender issues, and the interviewer's BMI to affect answers to eating- and dieting-related questions. While interviewer effects have been investigated mainly for face-to-face surveys, they have also been shown to exist for interview modes with no visual contact, such as telephone surveys and video-enhanced web surveys. The explanation typically provided for interviewer effects is social desirability bias: survey participants may attempt to project a positive self-image in an effort to conform to the norms they attribute to the interviewer asking questions. Interviewer effects are one example of survey response effects. The role of big data Since 2018, survey methodologists have started to examine how big data can complement survey methodology to allow researchers and practitioners to improve the production of survey statistics and their quality. Big data has a low cost per data point, applies analysis techniques via machine learning and data mining, and includes diverse and new data sources, e.g., registers, social media, apps, and other forms of digital data. There have been three Big Data Meets Survey Science (BigSurv) conferences (in 2018, 2020 and 2023), with a fourth forthcoming in 2025, as well as special issues in the Social Science Computer Review, the Journal of the Royal Statistical Society and EPJ Data Science, and a book called Big Data Meets Social Sciences edited by Craig A. Hill and five other Fellows of the American Statistical Association.
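As a closing illustration of the proportional stratified sampling described in the section on selecting samples above, the following minimal Python sketch draws a sample whose composition on the stratifying variable matches the population. The sampling frame, stratum variable and sizes are hypothetical, chosen to mirror the 75% female / 25% male example given earlier.

```python
import random

# Minimal sketch of proportional stratified random sampling, assuming a
# sampling frame where each element records a stratum (here, gender).
# The frame below is hypothetical: 75% female, 25% male.

random.seed(1)
frame = [{"id": i, "gender": "female"} for i in range(750)] + \
        [{"id": i, "gender": "male"} for i in range(750, 1000)]

def stratified_sample(frame, stratum_key, n):
    strata = {}
    for element in frame:
        strata.setdefault(element[stratum_key], []).append(element)
    sample = []
    for members in strata.values():
        # Allocate sample size in proportion to the stratum's share of the frame.
        k = round(n * len(members) / len(frame))
        sample.extend(random.sample(members, k))
    return sample

sample = stratified_sample(frame, "gender", 100)
print(sum(1 for e in sample if e["gender"] == "female"))  # 75
print(sum(1 for e in sample if e["gender"] == "male"))    # 25
```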
Mathematics
Statistics and probability
null
272231
https://en.wikipedia.org/wiki/ISDB
ISDB
Integrated Services Digital Broadcasting (ISDB; Japanese: , Tōgō dejitaru hōsō sābisu) is a Japanese broadcasting standard for digital television (DTV) and digital radio. ISDB supersedes both the NTSC-J analog television system and the previously used MUSE Hi-vision analog HDTV system in Japan. An improved version of ISDB-T (ISDB-T International) will soon replace the NTSC, PAL-M, and PAL-N broadcast standards in South America and the Philippines. Digital Terrestrial Television Broadcasting (DTTB) services using ISDB-T started in Japan in December 2003, and since then, many countries have adopted ISDB over other digital broadcasting standards. A newer and "advanced" version of the ISDB standard (that will eventually allow up to 8K terrestrial broadcasts and 1080p mobile broadcasts via the VVC codec, including HDR and HFR) is currently under development. Countries and territories using ISDB-T Asia (officially adopted ISDB-T, started broadcasting in digital) (officially adopted ISDB-T) (officially adopted ISDB-T HD) Americas (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started broadcasting in digital) (officially adopted ISDB-T International, started pre-implementation stage) (officially adopted ISDB-T International, started pre-implementation stage, briefly experimented with ATSC) (officially adopted ISDB-T International, briefly experimented with ATSC, started broadcasting in digital) (officially adopted ISDB-T international, started pre-implementation stage) (currently assessing digital platform) (officially adopted ISDB-T international, started pre-implementation stage) Africa (officially adopted ISDB-T International (SBTVD), started pre-implementation stage) (In 2013, decided on European digital terrestrial TV. However, Angola reviewed the adoption to ISDB-T International system in March 2019.) Countries and territories are available using ISDB-T Americas Asia Africa Europe Introduction ISDB is maintained by the Japanese organization ARIB. The standards can be obtained for free at the Japanese organization DiBEG website and at ARIB. The core standards of ISDB are ISDB-S (satellite television), ISDB-T (terrestrial), ISDB-C (cable) and 2.6 GHz band mobile broadcasting which are all based on MPEG-2, MPEG-4, or HEVC standard for multiplexing with transport stream structure and video and audio coding (MPEG-2, H.264, or HEVC) and are capable of UHD, high-definition television (HDTV) and standard-definition television. ISDB-T and ISDB-Tsb are for mobile reception in TV bands. 1seg is the name of an ISDB-T component that allows viewers to watch TV channels via cell phones, laptop computers, and vehicles. The concept was named for its similarity to ISDN as both allow multiple channels of data to be transmitted together (a process called multiplexing). 
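The multiplexing idea mentioned above can be sketched in a few lines of Python: packets from several services are tagged with an identifier and interleaved into one stream, and a receiver keeps only the packets for the service it wants. The packet contents and identifiers below are purely illustrative and do not follow the actual ISDB transport stream format.

```python
from itertools import zip_longest

# Illustrative multiplexing sketch (not the real ISDB packet format):
# several services share one broadcast stream, each packet carrying an
# identifier so a receiver can pick out a single service.

services = {
    0x100: ["HDTV-0", "HDTV-1", "HDTV-2"],   # hypothetical video packets
    0x110: ["AUDIO-0", "AUDIO-1"],           # hypothetical audio packets
    0x120: ["EPG-0"],                        # hypothetical data packets
}

def multiplex(services):
    """Interleave packets from all services into one tagged stream."""
    stream = []
    for group in zip_longest(*services.values()):
        for sid, payload in zip(services.keys(), group):
            if payload is not None:
                stream.append((sid, payload))
    return stream

def demultiplex(stream, wanted_sid):
    """Receiver side: keep only the packets for one service."""
    return [payload for sid, payload in stream if sid == wanted_sid]

mux = multiplex(services)
print(demultiplex(mux, 0x100))  # ['HDTV-0', 'HDTV-1', 'HDTV-2']
```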
This broadcast standard is also much like another digital radio system, Eureka 147, which calls each group of stations on a transmitter an ensemble; this is very much like the multi-channel digital TV standard DVB-T. ISDB-T operates on unused TV channels, an approach that was taken by other countries for TV but never before for radio. Transmission The various flavors of ISDB differ mainly in the modulations used, due to the requirements of different frequency bands. The 12 GHz band ISDB-S uses PSK modulation, 2.6 GHz band digital sound broadcasting uses CDM, and ISDB-T (in the VHF and/or UHF band) uses COFDM with PSK/QAM. Interaction Besides audio and video transmission, ISDB also defines data connections (data broadcasting) with the internet as a return channel over several media (10/100 Ethernet, telephone line modem, mobile phone, wireless LAN (IEEE 802.11), etc.) and with different protocols. This component is used, for example, for interactive interfaces like data broadcasting (ARIB STD-B24) and electronic program guides (EPG). Interfaces and Encryption The ISDB specification describes many (network) interfaces, most importantly the Common Interface for the Conditional Access System (CAS). While ISDB has examples of implementing various kinds of CAS systems, in Japan a CAS system called "B-CAS" is used. ARIB STD-B25 defines the Common Scrambling Algorithm (CSA) system, called MULTI2, required for (de-)scrambling television. The ISDB CAS system in Japan is operated by a company named B-CAS; the CAS card is called the B-CAS card. The Japanese ISDB signal is always encrypted by the B-CAS system, even for free television programs, which is why it is commonly described as a "pay-per-view system without charge". An interface for mobile reception is under consideration. ISDB supports RMP (rights management and protection). Since all digital television (DTV) systems carry digital data content, a DVD or high-definition (HD) recorder could easily copy content losslessly. Major US film studios requested copy protection; this was the main reason for RMP being mandated. The content has three modes: "copy once", "copy free" and "copy never". In "copy once" mode, a program can be stored on a hard disk recorder but cannot be further copied; it can only be moved to another copy-protected medium, and this move operation marks the content "copy one generation", which permanently prevents further copying. "Copy never" programs may only be timeshifted and cannot be permanently stored. In 2006, the Japanese government was evaluating the use of the Digital Transmission Content Protection (DTCP) "Encryption plus Non-Assertion" mechanism to allow multiple copies of digital content to be made between compliant devices. Receiver There are two types of ISDB receiver: the television and the set-top box (STB). The aspect ratio of an ISDB-receiving television set is 16:9; televisions fulfilling these specifications are called Hi-Vision TVs. There are four TV types: cathode-ray tube (CRT), plasma display panel (PDP), organic light-emitting diode (OLED) and liquid crystal display (LCD), with LCD being the most popular Hi-Vision TV type on the Japanese market. The LCD share, as measured by JEITA in November 2004, was about 60%. While PDP sets occupy the high-end market with units that are over 50 inches (1270 mm), PDP and CRT set shares are about 20% each. CRT sets are considered low end for Hi-Vision. An STB is sometimes referred to as a digital tuner.
Typical middle to high-end ISDB receivers marketed in Japan have several interfaces: F connectors for RF input. HDMI or D4 connector for an HDTV monitor in a home cinema. Optical digital audio interface for an audio amplifier and speakers for 5.1 surround audio in a home cinema. IEEE 1394 (aka FireWire) interface for digital data recorders (like DVD recorders) in a home cinema. RCA video jack provides SDTV signal that is sampled down from the HDTV signal for analog CRT television sets or VCRs. RCA audio jacks provide stereo audio for analog CRT television sets or VCRs. S video is for VCRs or analog CRT television sets. 10/100 and modular jack telephone line modem interfaces are for an internet connection. B-CAS card interface to de-scramble. IR interface jack for controlling a VHS or DVD player. Services A typical Japanese broadcast service consists as follows: One HDTV or up to three SDTV services within one channel. Provides interactive television through datacasting. Interactive services such as games or shopping, via telephone line or broadband internet. Equipped with an electronic program guide. Ability to send firmware patches for the TV/tuner over the air. During emergencies, the service utilizes Emergency Warning Broadcast system to quickly inform the public of various threats for the areas at risk. There are examples providing more than 10 SDTV services with H.264 coding in some countries. ISDB-S History Japan started digital broadcasting using the DVB-S standard by PerfecTV in October/1996, and DirecTV in December/1997, with communication satellites. Still, DVB-S did not satisfy the requirements of Japanese broadcasters, such as NHK, key commercial broadcasting stations like Nippon Television, TBS, Fuji Television, TV Asahi, TV Tokyo, and WOWOW (Movie-only Pay-TV broadcasting). Consequently, ARIB developed a new broadcast standard called ISDB-S. The requirements were HDTV capability, interactive services, network access and effective frequency utilization, and other technical requirements. The DVB-S standard allows the transmission of a bitstream of roughly 34 Mbit/s with a satellite transponder, which means the transponder can send one HDTV channel. Unfortunately, the NHK broadcasting satellite had only four vacant transponders, which led ARIB and NHK to work on ISDB-S: the new standard could transmit at 51 Mbit/s with a single transponder, which means that ISDB-S is 1.5 times more efficient than DVB-S and that one transponder can transmit two HDTV channels, along with other independent audio and data. Digital satellite broadcasting (BS digital) was started by NHK and followed commercial broadcasting stations on 1 December 2000. Today, SKY PerfecTV! (the successor of Skyport TV and Sky D), CS burn, Platone, EP, DirecTV, J Sky B, and PerfecTV!, adopted the ISDB-S system for use on the 110-degree (east longitude) wide-band communication satellite. Technical specification This table shows the summary of ISDB-S (satellite digital broadcasting). Channel Frequency and channel specification of Japanese Satellites using ISDB-S ISDB-S3 ISDB-S3 is a satellite digital broadcasting specification supporting 4K, 8K, HDR, HFR, and 22.2 audio. ISDB-C ISDB-C is a cable digital broadcasting specification. The technical specification J.83/C is developed by JCTEA. ISDB-C is identical to DVB-C but has a different channel bandwidth of 6 MHz (instead of 8 MHz) and roll-off factor. 
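Returning to the transponder figures quoted above for ISDB-S, the capacity gain over DVB-S can be checked with simple arithmetic. The per-channel HDTV bitrate used below is only a rough, hypothetical ballpark consistent with the text, not a figure from the standard.

```python
# Rough check of the ISDB-S capacity figures quoted above.
dvb_s_per_transponder  = 34.0   # Mbit/s, approximate figure from the text
isdb_s_per_transponder = 51.0   # Mbit/s, figure from the text

print(isdb_s_per_transponder / dvb_s_per_transponder)   # 1.5 -> "1.5 times more efficient"

# Assuming one MPEG-2 HDTV service needs on the order of 20-24 Mbit/s
# (a hypothetical ballpark), one DVB-S transponder fits a single HDTV
# channel while ISDB-S fits two, with capacity left over for audio and data.
hdtv_mbps = 24.0
print(int(dvb_s_per_transponder  // hdtv_mbps))  # 1
print(int(isdb_s_per_transponder // hdtv_mbps))  # 2
```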
ISDB-T History HDTV was invented at NHK Science & Technology Research Laboratories (Japan Broadcasting Corporation's Science & Technical Research Laboratories). The research for HDTV started as early as the 1960s, though a standard was proposed to the ITU-R (CCIR) only in 1973. By the 1980s, a high definition television camera, cathode-ray tube, videotape recorder, and editing equipment, among others, had been developed. In 1982 NHK developed MUSE (Multiple sub-Nyquist sampling encoding), the first HDTV video compression and transmission system. MUSE used digital video compression, but for transmission frequency modulation was used after a digital-to-analog converter converted the digital signal. In 1987, NHK demonstrated MUSE in Washington D.C. as well as NAB. The demonstration made a great impression in the U.S., leading to the development of the ATSC terrestrial DTV system. Europe also developed a DTV system called DVB. Japan began R&D of a completely digital system in the 1980s that led to ISDB. Japan began terrestrial digital broadcasting, using ISDB-T standard by NHK and commercial broadcasting stations, on 1 December 2003. Features ISDB-T is characterized by the following features: ISDB-T (Integrated Services Digital Broadcasting-Terrestrial) in Japan use UHF 470 MHz-710 MHz, bandwidth of 240 MHz, allocate 40 channels namely channels 13 to 52 (previously used also 710 MHz-770 MHz, 53 to 62, but this range was re-assigned to cell phones), each channel is 6 MHz width (actually 5.572 MHz effective bandwidth and 430 kHz guard band between channels). These channels are called "physical channel(物理チャンネル)". For other countries, US channel table or European channel table are used. For channel tables with 6 MHz width, ISDB-T single channel bandwidths 5.572 MHz has number of carriers 5,617 with interval of 0.99206 kHz. For 7 MHz channel, channel bandwidth is 6.50 MHz; for 8 MHz 7.42 MHz. ISDB-T allows to accommodate any combination of HDTV (roughly 8 Mbit/s in H.264) and SDTV (roughly 2 Mbit/s in H.264) within the given bitrate determined by the transmission parameters such as bandwidth, code-rate, guard interval, etc. Typically, among the 13 segments, the center segment is used for 1seg with QPSK modulation and the remaining 12 segments for the HDTV or SDTV payloads for 64QAM modulation. The bitstream of the 12 segments are combined into one transport stream, within which any combination of programs can be carried based on the MPEG-2 transport stream definition. ISDB-T transmits an HDTV channel and a mobile TV channel 1seg within one channel. 1seg is a mobile terrestrial digital audio/video broadcasting service in Japan. Although 1seg is designed for mobile usage, reception is sometimes problematic in moving vehicles. Because of reception on high speed vehicle, UHF transmission is shaded by buildings and hills frequently, but reported well receiving in Shinkansen as far as run in flat or rural area. ISDB-T provides interactive services with data broadcasting. Such as Electronic Program Guides. ISDB-T supports internet access as a return channel that works to support the data broadcasting. Internet access is also provided on mobile phones. ISDB-T provides Single-Frequency Network (SFN) and on-channel repeater technology. SFN makes efficient utilization of the frequency resource (spectrum). 
For example, the Kanto area (greater Tokyo area including most part of Tokyo prefecture and some part of Chiba, Ibaragi, Tochigi, Saitama and Kanagawa prefecture) are covered with SFN with roughly 10 million population coverage. ISDB-T can be received indoors with a simple indoor antenna. ISDB-T provides robustness to multipath interference ("ghosting"), co-channel analog television interference, and electromagnetic interferences that come from motor vehicles and power lines in urban environments. ISDB-T is claimed to allow HDTV to be received on moving vehicles at over 100 km/h; DVB-T can only receive SDTV on moving vehicles, and it is claimed that ATSC can not be received on moving vehicles at all (however, in early 2007 there were reports of successful reception of ATSC on laptops using USB tuners in moving vehicles). Adoption ISDB-T was adopted for commercial transmissions in Japan in December 2003. It currently comprises a market of about 100 million television sets. ISDB-T had 10 million subscribers by the end of April 2005. Along with the wide use of ISDB-T, the price of receivers is getting low. The price of ISDB-T STB in the lower end of the market is ¥19800 as of 19 April 2006. By November 2007 only a few older, low-end STB models could be found in the Japanese market (average price U$180), showing a tendency towards replacement by mid to high-end equipment like PVRs and TV sets with inbuilt tuners. In November 2009, a retail chain AEON introduced STB in 40 USD, followed by variety of low-cost tuners. The Dibeg web page confirms this tendency by showing low significance of the digital tuner STB market in Japan. Brazil, which used an analogue TV system (PAL-M) that slightly differed from any other countries, has chosen ISDB-T as a base for its DTV format, calling it ISDB-Tb or internally SBTVD (Sistema Brasileiro de Televisão Digital-Terrestre). The Japanese DiBEG group incorporated the advancements made by Brazil -MPEG4 video codec instead of ISDB-T's MPEG2 and a powerful interaction middleware called Ginga- and has renamed the standard to "ISDB-T International". Other than Argentina, Brazil, Peru, Chile and Ecuador which have selected ISDB-Tb, there are other South American countries, mainly from Mercosur, such as Venezuela, that chose ISDB-Tb, which providing economies of scale and common market benefits from the regional South American manufacturing instead of importing ready-made STBs as is the case with the other standards. Also, it has been confirmed with extensive tests realized by Brazilian Association of Radio and Television Broadcasters (ABERT), Brazilian Television Engineering Society (SET) and Universidade Presbiteriana Mackenzie the insufficient quality for indoor reception presented by ATSC and, between DVB-T and ISDB-T, the latter presented superior performance in indoor reception and flexibility to access digital services and TV programs through non-mobile, mobile or portable receivers with impressive quality. The ABERT–SET group in Brazil did system comparison tests of DTV under the supervision of the CPqD foundation. The comparison tests were done under the direction of a work group of SET and ABERT. The ABERT/SET group selected ISDB-T as the best choice in digital broadcasting modulation systems among ATSC, DVB-T and ISDB-T. Another study found that ISDB-T and DVB-T performed similarly, and that both were outperformed by DVB-T2. ISDB-T was singled out as the most flexible of all for meeting the needs of mobility and portability. 
It is most efficient for mobile and portable reception. On June 29, 2006, Brazil announced ISDB-T-based SBTVD as the chosen standard for digital TV transmissions, to be fully implemented by 2016. By November 2007 (one month prior DTTV launch), a few suppliers started to announce zapper STBs of the new Nippon-Brazilian SBTVD-T standard, at that time without interactivity. As in 2019, the implementation rollout in Brazil proceeded successfully, with terrestrial analog services (PAL-M) phased out in most of the country (for some less populated regions, analog signal shutdown was postponed to 2023). Adoption by country This lists the other countries who adopted the ISDB-T standard, chronologically arranged. On June 30, 2006, Brazil announced its decision to adopt ISDB-T as the digital terrestrial television standard, by means of presidential decree 5820/2006. On April 23, 2009, Peru announced its decision to adopt ISDB-T as the digital terrestrial television standard. This decision was taken on the basis of the recommendations by the Multi-sectional Commission to assess the most appropriate standard for the country. On August 28, 2009, Argentina officially adopted the ISDB-T system calling it internally SATVD-T (Sistema Argentino de Televisión – Terrestre). On September 14, 2009, Chile announced it was adopting the ISDB-T standard because it adapts better to the geographical makeup of the country, while allowing signal reception in cell phones, high-definition content delivery and a wider variety of channels. On October 6, 2009, Venezuela officially adopted the ISDB-T standard. On March 26, 2010, Ecuador announced its decision to adopt ISDB-T standard. This decision was taken on the basis of the recommendations by the Superintendent of Telecommunications. On April 29, 2010, Costa Rica officially announced the adoption of ISDB-Tb standard based upon a commission in charge of analyzing which protocol to accept. On June 1, 2010, Paraguay officially adopted ISDB-T International, via a presidential decree #4483. On June 11, 2010, the Philippines (NTC) officially adopted the ISDB-T standard. On July 6, 2010, Bolivia announced its decision to adopt ISDB-T standard as well. On December 27, 2010, the Uruguayan Government adopts the ISDB-T standard., voiding a previous 2007 decree which adopted the European DVB system. On November 15, 2011, the Maldivian Government adopts the ISDB-T standard. As the first country in the region that use European channel table and 1 channel bandwidth is 8 MHz. On February 26, 2013, the Botswana government adopts the ISDB-T standard; as the first country within the SADC region and even the first country within the continent of Africa as a whole. On September 12, 2013, Honduras adopted the ISDB-T standard. On May 20, 2014, Government of Sri Lanka officially announced its decision to adopt ISDB-T standard, and on September 7, 2014 Japanese Prime Minister Shinzo Abe signed an agreement with Sri Lankan President Mahinda Rajapakse for constructing infrastructure such as ISDB-T networks with a view to smooth conversion to ISDB-T, and cooperating in the field of content and developing human resources. On January 23, 2017, El Salvador adopted the ISDB-T standard. On March 20, 2019, Angola adopted the ISDB-T standard. Technical specification Segment structure ARIB has developed a segment structure called BST-OFDM (see figure). ISDB-T divides the frequency band of one channel into thirteen segments. 
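A short sketch can tie together the channel figures given earlier (a 6 MHz channel, about 5.572 MHz of effective bandwidth, roughly 430 kHz of guard band, and 5,617 carriers spaced about 0.99206 kHz apart): the channel is divided into 14 equal slices, 13 of which are used as segments. The per-segment carrier count below assumes the widest transmission mode (Mode 3); that assumption comes from general descriptions of ISDB-T rather than from this article.

```python
# Checking the ISDB-T channel figures quoted earlier against the
# 13-segment structure (one 6 MHz channel split into 14 equal slices,
# 13 of them used as OFDM segments, the remainder left as guard band).

channel_khz = 6000.0
segment_khz = channel_khz / 14           # ~428.6 kHz per segment
effective_khz = 13 * segment_khz         # ~5571 kHz, i.e. ~5.57 MHz effective
guard_khz = channel_khz - effective_khz  # ~429 kHz guard band

print(round(segment_khz, 1), round(effective_khz, 1), round(guard_khz, 1))

# In the widest mode (Mode 3, an assumption not stated in this article),
# each segment carries 432 carriers, plus one extra pilot carrier overall.
carriers = 13 * 432 + 1                  # 5617, as quoted earlier
spacing_khz = segment_khz / 432          # ~0.992 kHz carrier spacing
print(carriers, round(spacing_khz, 5))   # 5617 0.99206
```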
The broadcaster can select which combination of segments to use; this choice of segment structure allows for service flexibility. For example, ISDB-T can transmit both LDTV and HDTV using one TV channel, or change to three SDTV services, a switch that can be performed at any time. ISDB-T can also change the modulation scheme at the same time. In the 13-segment spectrum structure of ISDB-T, segment 0 is generally used for 1seg, while segments 1–12 carry one HDTV service or three SDTV services. Summary of ISDB-T The H.264 Baseline profile is used in one-segment (1seg) broadcasting for portable devices and mobile phones. The H.264 High profile is used in ISDB-Tb for high-definition broadcasts. Channel Specification of Japanese terrestrial digital broadcasting using ISDB-T. ISDB-Tsb ISDB-Tsb is the terrestrial digital sound broadcasting specification. The technical specification is the same as ISDB-T. ISDB-Tsb supports the coded transmission of OFDM signals. ISDB-Tmm ISDB-Tmm (Terrestrial mobile multi-media) allowed each station to use a suitable number of segments, with MPEG-4 AVC/H.264 video coding. With multiple channels, ISDB-Tmm served dedicated sport, movie, music and other channels with CD-quality sound, allowing for better broadcast quality than 1seg. This service used the VHF band (207.5–222 MHz), which became available after Japan's switchover to digital television in July 2011. Japan's Ministry of Internal Affairs and Communications licensed the ISDB-Tmm method to the NTT Docomo subsidiary mmbi, Inc. on September 9, 2010. The MediaFLO method offered with KDDI was not licensed. On July 14, 2011, the ISDB-Tmm broadcasting service by mmbi, Inc. was named モバキャス (pronounced mobakyasu, literally a short form of "mobile casting"), and from October 4, 2011 it was branded as NOTTV. The Minister of Internal Affairs and Communications approved the start of operations of NOTTV on October 13, 2011. The service was planned with a monthly subscription fee of 420 yen for the southern Kanto Plain, Aichi, Osaka, Kyoto and some other prefectures from April 1, 2012. The deployment plan was to cover approximately 73% of households by the end of 2012 and approximately 91% by the end of 2014, with 125 stations or repeaters to be installed in 2016 to cover cities nationwide. Android smartphones and tablets with ISDB-Tmm receiving capability were also sold mainly by NTT DoCoMo, although a separate tuner (TV BoX manufactured by Huawei, or StationTV manufactured by Pixela) could be purchased for iPhones and iPads, as well as for Android smartphones and tablets sold by au by KDDI and SoftBank Mobile, to receive ISDB-Tmm broadcasts. Due to the continued unprofitability of NOTTV, mmbi, Inc. shut down the service on June 30, 2016. 2.6 GHz Mobile satellite digital audio/video broadcasting MobaHo! is the name of the service that used the mobile satellite digital audio broadcasting specification. MobaHo! started its service on 20 October 2004 and ended on 31 March 2009. Standards ARIB and JCTEA developed the following standards. Some parts of the standards are available on the pages of ITU-R and ITU-T. Table of terrestrial HDTV transmission systems
Technology
Broadcasting
null
272313
https://en.wikipedia.org/wiki/ATSC%20standards
ATSC standards
Advanced Television Systems Committee (ATSC) standards are an international set of standards for broadcast and digital television transmission over terrestrial, cable and satellite networks. It is largely a replacement for the analog NTSC standard — like that standard — is used mostly in the United States, Mexico, Canada, South Korea, Trinidad and Tobago. Several former NTSC users like Japan, have not used ATSC during their digital television transition, because they adopted other systems like ISDB developed by Japan and DVB developed in Europe, for example. The ATSC standards were developed in the early 1990s by the Grand Alliance, a consortium of electronics and telecommunications companies that assembled to develop a specification for what is now known as HDTV. The standard is now administered by the Advanced Television Systems Committee. It includes a number of patented elements, and licensing is required for devices that use these parts of the standard. Key among these is the 8VSB modulation system used for over-the-air broadcasts. ATSC 1.0 technology was primarily developed with patent contributions from LG Electronics, which held most of the patents for the ATSC standard. ATSC includes two primary high definition video formats, 1080i and 720p. It also includes standard-definition formats, although initially only HDTV services were launched in the digital format. ATSC can carry multiple channels of information on a single stream, and it is common for there to be a single high-definition signal and several standard-definition signals carried on a single 6 MHz (former NTSC) channel allocation. Background The high-definition television standards defined by the ATSC produce widescreen 16:9 images up to 1920×1080 pixels in sizemore than six times the display resolution of the earlier standard. However, many different image sizes are also supported. The reduced bandwidth requirements of lower-resolution images allow up to six standard-definition "subchannels" to be broadcast on a single 6 MHz TV channel. ATSC standards are marked A/x (x is the standard number) and can be downloaded for free from the ATSC's website at ATSC.org. ATSC Standard A/53, which implemented the system developed by the Grand Alliance, was published in 1995; the standard was adopted by the Federal Communications Commission in the United States in 1996. It was revised in 2009. ATSC Standard A/72 was approved in 2008 and introduces H.264/AVC video coding to the ATSC system. ATSC supports 5.1-channel surround sound using Dolby Digital's AC-3 format. Numerous auxiliary datacasting services can also be provided. Many aspects of ATSC were patented, including elements of the MPEG video coding, the AC-3 audio coding, and the 8VSB modulation. The cost of patent licensing, estimated at up to per digital TV receiver, had prompted complaints by manufacturers. As with other systems, ATSC depends on numerous interwoven standards, e.g., the EIA-708 standard for digital closed captioning, leading to variations in implementation. Digital switchover ATSC replaced much of the analog NTSC television system in the United States on June 12, 2009, on August 31, 2011 in Canada, on December 31, 2012 in South Korea, and on December 31, 2015 in Mexico. Broadcasters who used ATSC and wanted to retain an analog signal were temporarily forced to broadcast on two separate channels, as the ATSC system requires the use of an entire separate channel. Channel numbers in ATSC do not correspond to RF frequency ranges, as they did with analog television. 
Instead, virtual channels, sent as part of the metadata along with the program(s), allow channel numbers to be remapped from their physical RF channel to any other number 1 to 99, so that ATSC stations can either be associated with the related NTSC channel numbers, or all stations on a network can use the same number. There is also a standard for distributed transmission systems (DTx), a form of single-frequency network which allows for the synchronised operation of multiple on-channel booster stations. Audio Dolby Digital AC-3 is used as the audio codec, though it was standardized as A/52 by the ATSC. It allows the transport of up to five channels of sound with a sixth channel for low-frequency effects (the so-called "5.1" configuration). In contrast, Japanese ISDB HDTV broadcasts use MPEG's Advanced Audio Coding (AAC) as the audio codec, which also allows 5.1 audio output. DVB (see below) allows both. MPEG-2 audio was a contender for the ATSC standard during the DTV "Grand Alliance" shootout, but lost out to Dolby AC-3. The Grand Alliance issued a statement finding the MPEG-2 system to be "essentially equivalent" to Dolby, but only after the Dolby selection had been made. Later, a story emerged that MIT had entered into an agreement with Dolby whereupon the university would be awarded a large sum of money if the MPEG-2 system was rejected. Dolby also offered an incentive for Zenith to switch their vote (which they did); however, it is unknown whether they accepted the offer. Video The ATSC system supports a number of different display resolutions, aspect ratios, and frame rates. The formats are listed here by resolution, form of scanning (progressive or interlaced), and number of frames (or fields) per second (see also the TV resolution overview at the end of this article). For transport, ATSC uses the MPEG systems specification, known as an MPEG transport stream, to encapsulate data, subject to certain constraints. ATSC uses 188-byte MPEG transport stream packets to carry data. Before decoding of audio and video takes place, the receiver must demodulate and apply error correction to the signal. Then, the transport stream may be demultiplexed into its constituent streams. MPEG-2 There are four basic display sizes for ATSC, generally known by referring to the number of lines of the picture height. NTSC and PAL image sizes are smallest, with a width of 720 (or 704) and a height of 480 or 576 lines. The third size is HDTV images that have 720 scan lines in height and are 1280 pixels wide. The largest size has 1080 lines high and 1920 pixels wide. 1080-line video is actually encoded with 1920×1088 pixel frames, but the last eight lines are discarded prior to display. This is due to a restriction of the MPEG-2 video format, which requires the height of the picture in luma samples (i.e. pixels) to be divisible by 16. The lower resolutions can operate either in progressive scan or interlaced mode, but not the largest picture sizes. The 1080-line system does not support progressive images at the highest frame rates of 50, 59.94 or 60 frames per second, because such technology was seen as too advanced at the time. The standard also requires 720-line video be progressive scan, since that provides better picture quality than interlaced scan at a given frame rate, and there was no legacy use of interlaced scan for that format. 
The result is that the combination of maximum frame rate and picture size results in approximately the same number of samples per second for both the 1080-line interlaced format and the 720-line format, as 1920*1080*30 is roughly equal to 1280*720*60. A similar equality relationship applies for 576 lines at 25 frame per second versus 480 lines at 30 frames per second. A terrestrial (over-the-air) transmission carries 19.39 megabits of data per second (a fluctuating bandwidth of about 18.3 Mbit/s left after overhead such as error correction, program guide, closed captioning, etc.), compared to a maximum possible MPEG-2 bitrate of 10.08 Mbit/s (7 Mbit/s typical) allowed in the DVD standard and 48 Mbit/s (36 Mbit/s typical) allowed in the Blu-ray disc standard. Although the ATSC A/53 standard limits MPEG-2 transmission to the formats listed below (with integer frame rates paired with 1000/1001-rate versions), the U.S. Federal Communications Commission declined to mandate that television stations obey this part of the ATSC's standard. In theory, television stations in the U.S. are free to choose any resolution, aspect ratio, and frame/field rate, within the limits of Main Profile @ High Level. Many stations do go outside the bounds of the ATSC specification by using other resolutions – for example, 352 x 480 or 720 x 480. "EDTV" displays can reproduce progressive scan content and frequently have a 16:9 wide screen format. Such resolutions are 704×480 or 720×480 in NTSC and 720×576 in PAL, allowing 60 progressive frames per second in NTSC or 50 in PAL. ATSC also supports PAL frame rates and resolutions which are defined in ATSC A/63 standard. The ATSC A/53 specification imposes certain constraints on MPEG-2 video stream: The maximum bit rate value in the sequence header of the MPEG-2 video stream is 19.4 Mbit/s for broadcast television, and 38.8 Mbit/s for the "high data rate" mode (e.g., cable television). The actual MPEG-2 video bit rate will be lower, since the MPEG-2 video stream must fit inside a transport stream. The amount of MPEG-2 stream buffer required at the decoder (the vbv_buffer_size_value) must be less than or equal to 999,424 bytes. In most cases, the transmitter can't start sending a coded image until within a half-second of when it's to be decoded (vbv_delay less than or equal to 45000 90-kHz clock increments). The stream must include colorimetry information (gamma curve, the precise RGB colors used, and the relationship between RGB and the coded YCbCr). The video must be 4:2:0 (chrominance resolution must be 1/2 of luma horizontal resolution and 1/2 of luma vertical resolution). The ATSC specification and MPEG-2 allow the use of progressive frames coded within an interlaced video sequence. For example, NBC stations transmit a 1080i60 video sequence, meaning the formal output of the MPEG-2 decoding process is sixty 540-line fields per second. However, for prime-time television shows, those 60 fields can be coded using 24 progressive frames as a base – actually, an 1080p24 video stream (a sequence of 24 progressive frames per second) is transmitted, and MPEG-2 metadata instructs the decoder to interlace these fields and perform 3:2 pulldown before display, as in soft telecine. 
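The soft-telecine step described above can be illustrated with a short sketch of the classic 3:2 pulldown cadence, in which four film frames are spread over ten interlaced fields, so that 24 frames per second map onto 60 fields per second. The frame labels and the top/bottom field assignment below are purely illustrative simplifications.

```python
# Illustrative 3:2 pulldown: each group of four progressive film frames
# (A, B, C, D) is spread over ten interlaced fields, alternating between
# 2 and 3 fields per frame, so 24 frames/s becomes 60 fields/s.

def pulldown_32(frames):
    fields = []
    for i, frame in enumerate(frames):
        repeats = 2 if i % 2 == 0 else 3
        for _ in range(repeats):
            parity = "top" if (len(fields) % 2 == 0) else "bottom"
            fields.append(f"{frame}:{parity}")
    return fields

one_second_of_film = [f"F{i}" for i in range(24)]   # 24 progressive frames
fields = pulldown_32(one_second_of_film)
print(len(fields))                        # 60 fields, i.e. 1080i60 from 1080p24
print(pulldown_32(["A", "B", "C", "D"]))  # the 10-field cadence for 4 frames
```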
The ATSC specification also allows 1080p30 and 1080p24 MPEG-2 sequences, however they are not used in practice, because broadcasters want to be able to switch between 60 Hz interlaced (news), 30 Hz progressive or PsF (soap operas), and 24 Hz progressive (prime-time) content without ending the 1080i60 MPEG-2 sequence. The 1080-line formats are encoded with 1920 × 1088 pixel luma matrices and 960 × 540 chroma matrices, but the last 8 lines are discarded by the MPEG-2 decoding and display process. H.264/MPEG-4 AVC In July 2008, ATSC was updated to support the ITU-T H.264 video codec. The new standard is split in two parts: A/72 part 1: Video System Characteristics of AVC in the ATSC Digital Television System A/72 part 2 : AVC Video Transport Subsystem Characteristics The new standards support 1080p at 50, 59.94 and 60 frames per second; such frame rates require H.264/AVC High Profile Level 4.2, while standard HDTV frame rates only require Levels 3.2 and 4, and SDTV frame rates require Levels 3 and 3.1. Transport stream (TS) The file extension ".TS" stands for "transport stream", which is a media container format. It may contain a number of streams of audio or video content multiplexed within the transport stream. Transport streams are designed with synchronization and recovery in mind for potentially lossy distribution (such as over-the-air ATSC broadcast) in order to continue a media stream with minimal interruption in the face of data loss in transmission. When an over-the-air ATSC signal is captured to a file via hardware/software the resulting file is often in a .TS file format. Modulation and transmission ATSC signals are designed to use the same 6 MHz bandwidth as analog NTSC television channels (the interference requirements of A/53 DTV standards with adjacent NTSC or other DTV channels are very strict). Once the digital video and audio signals have been compressed and multiplexed, the transport stream can be modulated in different ways depending on the method of transmission. Terrestrial (local) broadcasters use 8VSB modulation that can transfer at a maximum rate of 19.39 Mbit/s, sufficient to carry several video and audio programs and metadata. Cable television stations can generally operate at a higher signal-to-noise ratio and can use either the 16VSB as defined in ATSC or the 256-QAM defined in SCTE, to achieve a throughput of 38.78 Mbit/s, using the same 6 MHz channel. The proposals for modulation schemes for digital television were developed when cable operators carried standard-resolution video as uncompressed analog signals. In recent years, cable operators have become accustomed to compressing standard-resolution video for digital cable systems, making it harder to find duplicate 6 MHz channels for local broadcasters on uncompressed "basic" cable. Currently, the Federal Communications Commission requires cable operators in the United States to carry the analog or digital transmission of a terrestrial broadcaster (but not both), when so requested by the broadcaster (the "must-carry rule"). The Canadian Radio-television and Telecommunications Commission in Canada does not have similar rules in force with respect to carrying ATSC signals. However, cable operators have still been slow to add ATSC channels to their lineups for legal, regulatory, and plant & equipment related reasons. One key technical and regulatory issue is the modulation scheme used on the cable: cable operators in the U.S. 
(and to a lesser extent Canada) can determine their own method of modulation for their plants. Multiple standards bodies exist in the industry: the SCTE defined 256-QAM as a modulation scheme for cable in a cable industry standard, ANSI/SCTE 07 2006: Digital Transmission Standard For Cable Television . Consequently, most U.S. and Canadian cable operators seeking additional capacity on the cable system have moved to 256-QAM from the 64-QAM modulation used in their plant, in preference to the 16VSB standard originally proposed by ATSC. Over time 256-QAM is expected to be included in the ATSC standard. There is also a standard for transmitting ATSC via satellite; however, this is only used by TV networks. Very few teleports outside the U.S. support the ATSC satellite transmission standard, but teleport support for the standard is improving. The ATSC satellite transmission system is not used for direct-broadcast satellite systems; in the U.S. and Canada these have long used either DVB-S (in standard or modified form) or a proprietary system such as DSS or DigiCipher 2. Other systems ATSC coexists with the DVB-T standard, and with ISDB-T. A similar standard called ADTB-T was developed for use as part of China's new DMB-T/H dual standard. While China has officially chosen a dual standard, there is no requirement that a receiver work with both standards and there is no support for the ADTB modulation from broadcasters or equipment and receiver manufacturers. For compatibility with material from various regions and sources, ATSC supports the 480i video format used in the NTSC analog system (480 lines, approximately 60 fields or 30 frames per second), 576i formats used in most PAL regions (576 lines, 50 fields or 25 frames per second), and 24 frames-per-second formats used in film. While the ATSC system has been criticized as being complicated and expensive to implement and use, both broadcasting and receiving equipment are now comparable in cost with that of DVB. The ATSC signal is more susceptible to changes in radio propagation conditions than DVB-T and ISDB-T. It also lacks true hierarchical modulation, which would allow the SDTV part of an HDTV signal (or the audio portion of a television program) to be received uninterrupted even in fringe areas where signal strength is low. For this reason, an additional modulation mode, enhanced-VSB (E-VSB) has been introduced, allowing for a similar benefit. In spite of ATSC's fixed transmission mode, it is still a robust signal under various conditions. 8VSB was chosen over COFDM in part because many areas are rural and have a much lower population density, thereby requiring larger transmitters and resulting in large fringe areas. In these areas, 8VSB was shown to perform better than other systems. COFDM is used in both DVB-T and ISDB-T, and for 1seg, as well as DVB-H and HD Radio in the United States. In metropolitan areas, where population density is highest, COFDM is said to be better at handling multipath propagation. While ATSC is also incapable of true single-frequency network (SFN) operation, the distributed transmission mode, using multiple synchronized on-channel transmitters, has been shown to improve reception under similar conditions. Thus, it may not require more spectrum allocation than DVB-T using SFNs. A comparison study found that ISDB-T and DVB-T performed similarly, and that both were outperformed by DVB-T2. 
Mobile TV
Mobile reception of digital stations using ATSC was, until 2008, difficult to impossible, especially when moving at vehicular speeds. To overcome this, several systems were proposed that reported improved mobile reception: Samsung/Rohde & Schwarz's A-VSB, Harris/LG's MPH, and a proposal from Thomson/Micronas; all of these systems were submitted as candidates for a new ATSC standard, ATSC-M/H. After one year of standardization work, a solution merging Samsung's A-VSB and LG's MPH technologies was adopted, with deployment expected in 2009. This is in addition to other standards like the now-defunct MediaFLO, and worldwide open standards such as DVB-H and T-DMB. Like DVB-H and ISDB 1seg, the proposed ATSC mobile standards are backward compatible with existing tuners, despite being added to the standard well after the original standard was in wide use. Mobile reception of some stations will still be more difficult, because 18 UHF channels in the U.S. have been removed from TV service, forcing some broadcasters to stay on VHF. This band requires larger antennas for reception, and is more prone to electromagnetic interference from engines and rapidly changing multipath conditions.
Future
ATSC 2.0
ATSC 2.0 was a planned major new revision of the standard which would have been backward compatible with ATSC 1.0. The standard was to have allowed interactive and hybrid television technologies by connecting the TV with Internet services and allowing interactive elements in the broadcast stream. Other features were to have included advanced video compression, audience measurement, targeted advertising, enhanced programming guides, video-on-demand services, and the ability to store information on new receivers, including non-real-time (NRT) content. However, ATSC 2.0 was never actually launched, as it was essentially outdated before it could be deployed. All of the changes that were part of the ATSC 2.0 revision were adopted into ATSC 3.0.
ATSC 3.0
ATSC 3.0 will provide even more services to the viewer and increased bandwidth efficiency and compression performance, which requires breaking backward compatibility with the current version. On November 17, 2017, the FCC voted 3–2 in favor of authorizing voluntary deployments of ATSC 3.0, and issued a Report and Order to that effect. ATSC 3.0 broadcasts and receivers are expected to emerge within the next decade. LG Electronics tested the standard with 4K on February 23, 2016. With the test considered a success, South Korea announced that ATSC 3.0 broadcasts would start in February 2017. On March 28, 2016, the Bootstrap component of ATSC 3.0 (System Discovery and Signalling) was upgraded from candidate standard to finalized standard. On June 29, 2016, NBC affiliate WRAL-TV in Raleigh, North Carolina, a station known for its pioneering roles in testing the original DTV standards, launched an experimental ATSC 3.0 channel carrying the station's programming in 1080p, as well as a 4K demo loop.
Structure/ATSC 3.0 system layers
Bootstrap: System Discovery and Signalling
Physical Layer: Transmission (OFDM)
Protocols: IP, MMT
Presentation: Audio and video standards (to be determined), Ultra HD with high-definition and standard-definition multicast, immersive audio
Applications: Screen is a web page
ATSC 3.0 advantages
Better image quality. ATSC 3.0 allows UHD transmission, including high-dynamic-range television (HDR-TV), wide color gamut (WCG), and high frame rate (HFR).
Reception upgrades.
ATSC 3.0 allows the same aerial to receive more channels with better quality. Portable devices such as mobile phones, tablets, and car infotainment systems can receive TV signals.
Enhanced emergency alerts. Emergency signals can be geographically targeted and inform only the specific areas where they are required.
Audience measurement. Telecommunication companies can gather audience data more easily.
Targeted advertising with the assistance of local network Wi-Fi.
Content variety and diversification.
Countries and territories using ATSC
North America
March 9, 2016, after initially announcing DVB-T (on 10 July 2007)
2018
On December 14, 2011, the Bahamas' national public broadcaster ZNS-TV announced that it would adopt ATSC, in line with the United States and its territories.
opened in digital television on December 4, 2024.
Canada adopted ATSC, with full-power analog stations in specified "mandatory markets" (which included provincial capitals and cities with a population of 300,000 or higher) shutting down on August 31, 2011. The CBC only converted its originating stations to digital; it was given permission to operate its repeaters in mandatory markets (such as CBKST in Saskatoon) for an additional year, but later announced that it would shut down all of its analog repeaters on July 31, 2012, citing budget issues and the obsolescence of its distribution network.
The Dominican Republic announced its adoption on August 10, 2010, with the transition scheduled to be completed on September 24, 2015, but most companies were not able to meet the deadline and the government had to postpone it to 2021.
Will convert to ATSC 3.0 instead of 1.0. The conversion will begin in 2022 and is expected to be completed by 2023.
Mexico began converting to ATSC in 2013; a full transition was scheduled for December 31, 2015, but due to technical and economic issues for some transmitters, the full transition was extended to December 31, 2016.
opened on Saint Lucia on ATSC on March 5, 2024.
will convert to ATSC 3.0 instead of ATSC 1.0. The conversion process will begin in March 2023 and is expected to be completed by 2026.
Full-power television stations in the United States ended analog television service on June 12, 2009. Analog low-power stations and translators were all wound down by July 13, 2021.
South America
Suriname has transitioned from analogue NTSC broadcasts to digital ATSC broadcasts. Channel ATV started ATSC broadcasts in the Paramaribo area in June 2014, which was followed by ATSC broadcasts from stations in Brokopondo, Wageningen and Albina. The stations in Brokopondo, Wageningen and Albina broadcast both the channels of ATV (i.e., ATV and TV2) and STVS. By 2016, all channels in Suriname had made the switch to ATSC.
Asia/Pacific
South Korea completed its transition to ATSC on December 31, 2012, although it still operates some analog signals along its northern border for reception in North Korea. U.S. territories in the Pacific, including American Samoa, Guam, and the Northern Mariana Islands, have adopted ATSC, as with the mainland.
Patent holders
A number of organizations held patents for the development of ATSC 1.0 technology, as listed in the patent pool administered by MPEG LA. The latest patents expired on September 16, 2024. Patents for ATSC 3.0 are still active.
Technology
Broadcasting
null
272829
https://en.wikipedia.org/wiki/Numerical%20methods%20for%20ordinary%20differential%20equations
Numerical methods for ordinary differential equations
Numerical methods for ordinary differential equations are methods used to find numerical approximations to the solutions of ordinary differential equations (ODEs). Their use is also known as "numerical integration", although this term can also refer to the computation of integrals. Many differential equations cannot be solved exactly. For practical purposes, however – such as in engineering – a numeric approximation to the solution is often sufficient. The algorithms studied here can be used to compute such an approximation. An alternative method is to use techniques from calculus to obtain a series expansion of the solution. Ordinary differential equations occur in many scientific disciplines, including physics, chemistry, biology, and economics. In addition, some methods in numerical partial differential equations convert the partial differential equation into an ordinary differential equation, which must then be solved.
The problem
A first-order initial value problem (IVP) has the form y'(t) = f(t, y(t)), y(t_0) = y_0, where f is a function from [t_0, ∞) × R^d to R^d and the initial condition y_0 ∈ R^d is a given vector. First-order means that only the first derivative of y appears in the equation, and higher derivatives are absent. Without loss of generality to higher-order systems, we restrict ourselves to first-order differential equations, because a higher-order ODE can be converted into a larger system of first-order equations by introducing extra variables. For example, the second-order equation y'' = −y can be rewritten as two first-order equations: y' = z and z' = −y. In this section, we describe numerical methods for IVPs, and remark that boundary value problems (BVPs) require a different set of tools. In a BVP, one defines values, or components, of the solution y at more than one point. Because of this, different methods need to be used to solve BVPs. For example, the shooting method (and its variants) or global methods like finite differences, Galerkin methods, or collocation methods are appropriate for that class of problems. The Picard–Lindelöf theorem states that there is a unique solution, provided f is Lipschitz-continuous.
Methods
Numerical methods for solving first-order IVPs often fall into one of two large categories: linear multistep methods, or Runge–Kutta methods. A further division can be realized by dividing methods into those that are explicit and those that are implicit. For example, implicit linear multistep methods include Adams–Moulton methods and backward differentiation formulas (BDF), whereas implicit Runge–Kutta methods include diagonally implicit Runge–Kutta (DIRK), singly diagonally implicit Runge–Kutta (SDIRK), and Gauss–Radau (based on Gaussian quadrature) numerical methods. Explicit examples from the linear multistep family include the Adams–Bashforth methods, and any Runge–Kutta method with a strictly lower-triangular Butcher tableau is explicit. A loose rule of thumb dictates that stiff differential equations require the use of implicit schemes, whereas non-stiff problems can be solved more efficiently with explicit schemes. The so-called general linear methods (GLMs) are a generalization of the above two large classes of methods.
Euler method
From any point on a curve, you can find an approximation of a nearby point on the curve by moving a short distance along a line tangent to the curve. Starting with the differential equation y'(t) = f(t, y(t)), we replace the derivative y' by the finite difference approximation y'(t) ≈ (y(t + h) − y(t)) / h, which, when rearranged, yields y(t + h) ≈ y(t) + h·y'(t); using the differential equation gives y(t + h) ≈ y(t) + h·f(t, y(t)). This formula is usually applied in the following way.
We choose a step size h and construct the sequence of times t_n = t_0 + n·h for n = 0, 1, 2, .... We denote by y_n a numerical estimate of the exact solution y(t_n). Motivated by the approximation above, we compute these estimates by the recursive scheme y_{n+1} = y_n + h·f(t_n, y_n). This is the Euler method (or forward Euler method, in contrast with the backward Euler method, to be described below). The method is named after Leonhard Euler, who described it in 1768. The Euler method is an example of an explicit method: the new value y_{n+1} is defined in terms of quantities that are already known, like y_n.
Backward Euler method
If, instead, we use the approximation y'(t) ≈ (y(t) − y(t − h)) / h, we get the backward Euler method y_{n+1} = y_n + h·f(t_{n+1}, y_{n+1}). The backward Euler method is an implicit method, meaning that we have to solve an equation to find y_{n+1}. One often uses fixed-point iteration or (some modification of) the Newton–Raphson method to achieve this. It costs more time to solve this equation than it does to take a step of an explicit method; this cost must be taken into consideration when one selects the method to use. The advantage of implicit methods such as backward Euler is that they are usually more stable for solving a stiff equation, meaning that a larger step size h can be used.
First-order exponential integrator method
Exponential integrators describe a large class of integrators that have recently seen a lot of development. They date back to at least the 1960s. In place of the general form above, we assume the differential equation is of the form y' = −A·y + N(y), or that it has been locally linearized about a background state to produce a linear term −A·y and a nonlinear term N(y). Exponential integrators are constructed by multiplying the equation by the integrating factor e^{At} and exactly integrating the result over the time interval [t_n, t_{n+1}]. This integral equation is exact, but it doesn't define the integral. The first-order exponential integrator can be realized by holding the nonlinear term constant at N(y_n) over the full interval, which gives y_{n+1} = e^{−Ah}·y_n + A^{−1}(1 − e^{−Ah})·N(y_n).
Generalizations
The Euler method is often not accurate enough. In more precise terms, it only has order one (the concept of order is explained below). This caused mathematicians to look for higher-order methods. One possibility is to use not only the previously computed value y_n to determine y_{n+1}, but to make the solution depend on more past values. This yields a so-called multistep method. Perhaps the simplest is the leapfrog method, which is second order and (roughly speaking) relies on two time values. Almost all practical multistep methods fall within the family of linear multistep methods, which have the form a_k·y_{n+k} + a_{k−1}·y_{n+k−1} + ... + a_0·y_n = h·(b_k·f(t_{n+k}, y_{n+k}) + ... + b_0·f(t_n, y_n)). Another possibility is to use more points in the interval [t_n, t_{n+1}]. This leads to the family of Runge–Kutta methods, named after Carl Runge and Martin Kutta. One of their fourth-order methods is especially popular.
Advanced features
A good implementation of one of these methods for solving an ODE entails more than the time-stepping formula. It is often inefficient to use the same step size all the time, so variable step-size methods have been developed. Usually, the step size is chosen such that the (local) error per step is below some tolerance level. This means that the methods must also compute an error indicator, an estimate of the local error. An extension of this idea is to choose dynamically between different methods of different orders (this is called a variable order method). Methods based on Richardson extrapolation, such as the Bulirsch–Stoer algorithm, are often used to construct various methods of different orders. Other desirable features include:
dense output: cheap numerical approximations for the whole integration interval, and not only at the points t_0, t_1, t_2, ...
event location: finding the times where, say, a particular function vanishes; this typically requires the use of a root-finding algorithm
support for parallel computing
time reversibility, when used for integrating with respect to time
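Before turning to alternative methods, here is a minimal Python sketch (an illustrative example added here, not taken from the article) of the forward and backward Euler schemes described above, applied to the linear test problem y' = -15y with y(0) = 1, whose exact solution is e^(-15t). For this linear right-hand side the implicit backward Euler update can be solved in closed form; for a general f one would use fixed-point iteration or Newton's method, as noted earlier. With the fairly large step size chosen here, forward Euler oscillates with growing amplitude while backward Euler decays monotonically, illustrating the stability advantage of implicit methods on stiff problems.

```python
import math

lam = -15.0                      # y' = lam * y, a mildly stiff linear test problem
y0, t_end, h = 1.0, 1.0, 0.25    # deliberately large step size
steps = int(round(t_end / h))

def f(t, y):
    return lam * y

y_fwd = y0   # forward (explicit) Euler:  y_{n+1} = y_n + h * f(t_n, y_n)
y_bwd = y0   # backward (implicit) Euler: y_{n+1} = y_n + h * f(t_{n+1}, y_{n+1});
             # for this linear f it can be solved exactly: y_{n+1} = y_n / (1 - h*lam)

for n in range(steps):
    t = n * h
    y_fwd = y_fwd + h * f(t, y_fwd)
    y_bwd = y_bwd / (1.0 - h * lam)
    exact = math.exp(lam * (t + h))
    print(f"t = {t + h:4.2f}  forward: {y_fwd:10.3f}  backward: {y_bwd:8.5f}  exact: {exact:8.5f}")
```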
Alternative methods
Many methods do not fall within the framework discussed here. Some classes of alternative methods are:
multiderivative methods, which use not only the function f but also its derivatives; this class includes Hermite–Obreschkoff methods and Fehlberg methods, as well as methods like the Parker–Sochacki method or the Bychkov–Scherbakov method, which compute the coefficients of the Taylor series of the solution y recursively
methods for second-order ODEs. We said that all higher-order ODEs can be transformed to first-order ODEs of the form discussed above. While this is certainly true, it may not be the best way to proceed. In particular, Nyström methods work directly with second-order equations.
geometric integration methods, which are specially designed for special classes of ODEs (for example, symplectic integrators for the solution of Hamiltonian equations). They take care that the numerical solution respects the underlying structure or geometry of these classes.
Quantized state systems methods are a family of ODE integration methods based on the idea of state quantization. They are efficient when simulating sparse systems with frequent discontinuities.
Parallel-in-time methods
Some IVPs require integration at such high temporal resolution and/or over such long time intervals that classical serial time-stepping methods become computationally infeasible to run in real-time (e.g. IVPs in numerical weather prediction, plasma modelling, and molecular dynamics). Parallel-in-time (PinT) methods have been developed in response to these issues in order to reduce simulation runtimes through the use of parallel computing. Early PinT methods (the earliest being proposed in the 1960s) were initially overlooked by researchers because the parallel computing architectures that they required were not yet widely available. With more computing power available, interest was renewed in the early 2000s with the development of Parareal, a flexible, easy-to-use PinT algorithm that is suitable for solving a wide variety of IVPs. The advent of exascale computing has meant that PinT algorithms are attracting increasing research attention and are being developed in such a way that they can harness the world's most powerful supercomputers. The most popular methods as of 2023 include Parareal, PFASST, ParaDiag, and MGRIT.
Analysis
Numerical analysis is not only the design of numerical methods, but also their analysis. Three central concepts in this analysis are convergence (whether the method approximates the solution), order (how well it approximates the solution), and stability (whether errors are damped out).
Convergence
A numerical method is said to be convergent if the numerical solution approaches the exact solution as the step size h goes to 0. More precisely, we require that for every ODE with a Lipschitz function f and every t* > 0, lim_{h→0} max_{n: t_n ≤ t*} |y_n − y(t_n)| = 0. All the methods mentioned above are convergent.
Consistency and order
Suppose the numerical method is a one-step or multistep scheme of the form y_{n+k} = Ψ(t_{n+k}; y_n, y_{n+1}, ..., y_{n+k−1}; h). The local (truncation) error of the method is the error committed by one step of the method.
That is, it is the difference between the result given by the method, assuming that no error was made in earlier steps, and the exact solution: δ_{n+k} = Ψ(t_{n+k}; y(t_n), ..., y(t_{n+k−1}); h) − y(t_{n+k}). The method is said to be consistent if δ_{n+k}/h → 0 as h → 0, and it has order p if δ_{n+k} = O(h^{p+1}) as h → 0. Hence a method is consistent if it has an order greater than 0. The forward Euler method and the backward Euler method introduced above both have order 1, so they are consistent. Most methods being used in practice attain higher order. Consistency is a necessary condition for convergence, but not sufficient; for a method to be convergent, it must be both consistent and zero-stable. A related concept is the global (truncation) error, the error sustained in all the steps one needs to reach a fixed time t. Explicitly, the global error at time t_n is y_n − y(t_n), where n = (t_n − t_0)/h. The global error of a p-th order one-step method is O(h^p); in particular, such a method is convergent. This statement is not necessarily true for multistep methods.
Stability and stiffness
For some differential equations, application of standard methods—such as the Euler method, explicit Runge–Kutta methods, or multistep methods (for example, Adams–Bashforth methods)—exhibits instability in the solutions, though other methods may produce stable solutions. This "difficult behaviour" in the equation (which may not necessarily be complex itself) is described as stiffness, and is often caused by the presence of different time scales in the underlying problem. For example, a collision in a mechanical system such as an impact oscillator typically occurs at a much smaller time scale than the time for the motion of objects; this discrepancy makes for very "sharp turns" in the curves of the state parameters. Stiff problems are ubiquitous in chemical kinetics, control theory, solid mechanics, weather forecasting, biology, plasma physics, and electronics. One way to overcome stiffness is to extend the notion of differential equation to that of differential inclusion, which allows for and models non-smoothness.
History
Below is a timeline of some important developments in this field.
1768 - Leonhard Euler publishes his method.
1824 - Augustin Louis Cauchy proves convergence of the Euler method. In this proof, Cauchy uses the implicit Euler method.
1855 - First mention of the multistep methods of John Couch Adams in a letter written by Francis Bashforth.
1895 - Carl Runge publishes the first Runge–Kutta method.
1901 - Martin Kutta describes the popular fourth-order Runge–Kutta method.
1910 - Lewis Fry Richardson announces his extrapolation method, Richardson extrapolation.
1952 - Charles F. Curtiss and Joseph Oakland Hirschfelder coin the term stiff equations.
1963 - Germund Dahlquist introduces A-stability of integration methods.
Numerical solutions to second-order one-dimensional boundary value problems
Boundary value problems (BVPs) are usually solved numerically by solving an approximately equivalent matrix problem obtained by discretizing the original BVP. The most commonly used method for numerically solving BVPs in one dimension is called the Finite Difference Method. This method takes advantage of linear combinations of point values to construct finite difference coefficients that describe derivatives of the function. For example, the second-order central difference approximation to the first derivative is given by u'(x_i) ≈ (u(x_{i+1}) − u(x_{i−1})) / (2h), and the second-order central difference for the second derivative is given by u''(x_i) ≈ (u(x_{i+1}) − 2·u(x_i) + u(x_{i−1})) / h². In both of these formulae, h is the distance between neighbouring x values on the discretized domain.
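As an informal check of these central-difference formulas, the short Python sketch below (an illustration added here, not part of the original text; the helper names central_first and central_second are arbitrary) applies them to u(x) = sin(x), whose derivatives are known exactly. Halving h should reduce both errors by roughly a factor of four, consistent with second-order accuracy.

```python
import math

def central_first(u, x, h):
    """Second-order central difference approximation to u'(x)."""
    return (u(x + h) - u(x - h)) / (2 * h)

def central_second(u, x, h):
    """Second-order central difference approximation to u''(x)."""
    return (u(x + h) - 2 * u(x) + u(x - h)) / h**2

u, x = math.sin, 1.0
for h in (0.1, 0.05, 0.025):
    err1 = abs(central_first(u, x, h) - math.cos(x))    # exact u'(x) = cos(x)
    err2 = abs(central_second(u, x, h) + math.sin(x))   # exact u''(x) = -sin(x)
    print(f"h = {h:<6}  error in u': {err1:.2e}  error in u'': {err2:.2e}")
```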
One then constructs a linear system that can be solved by standard matrix methods. For example, suppose the equation to be solved is a second-order ODE on an interval, with the value of the solution prescribed at both end points. The next step is to discretize the problem on a grid x_0, x_1, ..., x_n and to replace the derivatives by the central-difference approximations above; this leads to one linear equation for each interior grid point i = 1, ..., n − 1. At first glance, this system of equations appears to be problematic because the equations seem to contain no terms that are not multiplied by variables, but in fact this is not the case. At i = 1 and i = n − 1 there is a term involving the boundary values u(x_0) and u(x_n), and since these two values are known, one can simply substitute them into these equations and as a result obtain a non-homogeneous system of linear equations that has non-trivial solutions.
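To make this concrete, here is a small Python/NumPy sketch (an illustrative example added here, not part of the original text; the function name solve_bvp_fd and the model problem are our own choices). It solves the boundary value problem u''(x) = -1 on [0, 1] with u(0) = u(1) = 0 using the central-difference formula above; the known boundary values are moved to the right-hand side exactly as described, and the result is compared with the exact solution u(x) = x(1 - x)/2.

```python
import numpy as np

def solve_bvp_fd(f, a, b, ua, ub, n):
    """Solve u''(x) = f(x) on [a, b] with u(a) = ua and u(b) = ub,
    using second-order central differences on n + 1 grid points."""
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n

    # Tridiagonal matrix representing (u[i-1] - 2*u[i] + u[i+1]) / h**2
    # at the interior points i = 1, ..., n - 1.
    main = -2.0 * np.ones(n - 1)
    off = np.ones(n - 2)
    A = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / h**2

    # Right-hand side; the known boundary values are substituted into the
    # first and last equations, making the system non-homogeneous.
    rhs = np.asarray(f(x[1:-1]), dtype=float).copy()
    rhs[0] -= ua / h**2
    rhs[-1] -= ub / h**2

    u = np.empty(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

if __name__ == "__main__":
    x, u = solve_bvp_fd(lambda x: -np.ones_like(x), 0.0, 1.0, 0.0, 0.0, n=50)
    exact = x * (1.0 - x) / 2.0
    print("max error:", np.abs(u - exact).max())   # essentially exact for this quadratic solution
```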
Mathematics
Differential equations
null
272999
https://en.wikipedia.org/wiki/Tusk
Tusk
Tusks are elongated, continuously growing front teeth that protrude well beyond the mouth of certain mammal species. They are most commonly canine teeth, as with narwhals, chevrotains, musk deer, water deer, muntjac, pigs, peccaries, hippopotamuses and walruses, or, in the case of elephants, elongated incisors. Tusks share common features such as extra-oral position, growth pattern, composition and structure, and lack of contribution to ingestion. Tusks are thought to be adaptations to extra-oral environments, such as dry, aquatic, or arctic conditions. In most tusked species, both the males and the females have tusks, although the males' are larger. Most mammals with tusks have a pair of them growing out from either side of the mouth. Tusks are generally curved and have a smooth, continuous surface. The male narwhal's straight single helical tusk, which usually grows out from the left of the mouth, is an exception to the typical features of tusks described above. Continuous growth of tusks is enabled by formative tissues in the apical openings of the roots of the teeth. Other than mammals, dicynodonts are the only known vertebrates to have true tusks.
Function
Tusks have a variety of uses depending on the animal. Social displays of dominance, particularly among males, are common, as is their use in defense against attackers. Elephants use their tusks as digging and boring tools. Walruses use their tusks to grip and haul out on ice. It has been suggested that the structure of tusks has evolved to be compatible with extra-oral environments.
Size
Elephant tusks are sexually dimorphic, being on average larger in males than in females, and entirely absent in female Asian elephants. Elephants with large tusks each at least in weight are known as "tuskers", sometimes also called "big tuskers" or "great tuskers". While tuskers are rare today, it is thought that they were more common in the past, prior to human impact on elephant populations. The two record holders for longest and heaviest recorded African bush elephant tusks are around long measured along the outside curve, and in weight respectively, while the longest and heaviest Asian elephant tusks are long and respectively. Even larger tusks are known from some extinct proboscideans, such as species of Stegodon, Palaeoloxodon, and mammoths, with the longest tusk ever recorded being that of a specimen of "Mammut" borsoni from Greece, which measures in length, with an estimated weight of , with some mammoth tusks exceeding in length and probably in weight. The largest walrus tusks can reach lengths of over . The longest narwhal tusks reach . The upward-curving maxillary tusks of babirusa can reach lengths of over .
Use by humans
Tusks are used by humans to produce ivory, which is used in artifacts and jewellery, and formerly in other items such as piano keys. Consequently, many tusk-bearing species have been hunted commercially and several are endangered. The ivory trade has been severely restricted by the United Nations Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Tusked animals in human care may undergo tusk trimming or removal for health and safety reasons. Furthermore, surgical veterinary procedures to remove tusks have been explored to mitigate human-wildlife conflicts.
Biology and health sciences
Gastrointestinal tract
Biology
273243
https://en.wikipedia.org/wiki/Sled%20dog
Sled dog
A sled dog is a dog trained and used to pull a land vehicle in harness, most commonly a sled over snow. Sled dogs have been used in the Arctic for at least 8,000 years and, along with watercraft, were the only transportation in Arctic areas until the introduction of semi-trailer trucks, snowmobiles and airplanes in the 20th century, hauling supplies in areas that were inaccessible by other methods. They were used with varying success in the explorations of both poles, as well as during the Alaskan gold rush. Sled dog teams delivered mail to rural communities in Alaska, Yukon, Northwest Territories and Nunavut. Sled dogs today are still used by some rural communities, especially in areas of Russia, Canada, and Alaska as well as much of Greenland. They are used for recreational purposes and racing events, such as the Iditarod Trail and the Yukon Quest.
History
Sled dogs are used in countries and regions such as Canada, Greenland, Siberia, Russia, Norway, Sweden, and Alaska.
Russia
A 2017 study showed that 9,000 years ago, the domestic dog was present at what is now Zhokhov Island, northeastern Siberia, which at that time was connected to the mainland. The dogs were selectively bred as either sled dogs or hunting dogs, implying that a sled dog standard and a hunting dog standard co-existed. The optimal maximum size for a sled dog is based on thermoregulation, and the ancient sled dogs were between . The same standard has been found in the remains of sled dogs from this region 2,000 years ago and in the modern Siberian Husky breed standard. Other dogs were more massive at and appear to be dogs that had been crossed with wolves and used for polar bear hunting. At death, the heads of the dogs had been carefully separated from their bodies by humans. Anthropologists speculated that this might have been for ceremonial reasons. The Kungur Chronicle and the Remezov Chronicle, created at the end of the 16th century and in 1703 respectively, tell of the people living along Siberian rivers, whose primary means of transport was riding on reindeer or dogs. In these documents, the rivers Olenyok, Yana, Indigirka and Kolyma were called "dog rivers", as they were rich in fish for the dogs to eat. Rivers with no fish, or not enough to feed the dogs, were called "deer rivers", as reindeer were then used for transportation. From the 1940s to the 1990s, Russian sled dog numbers were in decline. The population reached an all-time low of 3,000 in 1998 before revival efforts took off. Reasons for their decline include the introduction of mechanization in the Arctic, a reduced capacity to keep dogs (especially with reduced fish catches), the collectivization of farming and reindeer herding, and the decline of fur hunting.
Scandinavia
After World War II, skijor and pulka style dog sled racing gained rapidly in popularity in Norway and neighboring Scandinavian countries. These styles of racing required small, fast teams of 1–4 dogs that competed over short, hilly distances of . Because the Norwegian Sled Dog Racing Association required the use of purebred dogs, the German Shorthaired Pointer quickly emerged as the breed of choice. By the beginning of the 1970s, the "sled pointer" had emerged: a pointing dog bred exclusively for sledding rather than hunting. During the 1970s, "Nome-style" sled racing, which mimicked the big sled dog teams running long distances and overnighting in subzero temperatures seen in North American-style races, started to attract interest in Scandinavia.
In 1974, the first Nome-style sled race, the Skjelbreia Sweepstakes, was hosted near Oslo. For this style of racing, Norwegian mushers began to import Alaskan huskies, popularized by mushers like Stein Havard Fjelstad and Roger Leegaard, who traveled to Alaska to race in the Iditarod. However, as a performance crossbreed, the Alaskan husky could not be legally raced in Norway until 1985, when the Norwegian Sled Dog Racing Association removed the requirement that sled dogs be purebred. This new ruling also paved the way for Nordic-style mushers to breed their best performing dogs regardless of breed, with mushers crossing the Alaskan husky with the German Shorthaired Pointer to produce the Eurohound, and the Greyhound with the German Shorthaired Pointer to produce the Greyster. These Nordic-style crossbreeds gained in popularity across Europe and later North America, especially with the rise in popularity of dryland mushing, such as bikejoring and canicross. Sled dogs and husky safaris are not native to Sápmi (Lapland) and Finland and are considered a major nuisance by reindeer herders, as they directly impact their livelihoods. These, along with glass-domed "iglus", were appropriated from other cultures by the tourist industry in the 1980s and falsely portrayed as being part of the Sámi and Finnish cultures.
Greenland
The Greenlandic Inuit have a very long history of using sled dogs, and they are still widely used today. As of 2010, some 18,000 Greenland dogs were kept in western Greenland north of the Arctic Circle and in eastern Greenland (because of the effort of maintaining the purity of this culturally important breed, they are the only dogs allowed in these regions), and about half of these were in active use as sled dogs by hunters and fishers. As a result of reduced sea ice (limiting their area of use), increasing use of snowmobiles, increasing dog food prices and disease among some local dog populations, the number has been gradually falling for decades, and by 2016 there were 15,000 Greenland dogs. A number of projects have been initiated in an attempt to ensure that Greenland's dog sledding culture, knowledge and use are not lost. The Sirius Patrol, a special forces unit in the Danish military, enforces the sovereignty of the remote, unpopulated northeast (essentially equalling the Northeast Greenland National Park) and conducts long-range dog sled patrols, which also record all sighted wildlife. The patrols averaged per year during 1978–1998. By 2011, the Greenland wolf had re-populated eastern Greenland from the National Park in the northeast by following these dog-sled patrols over distances of up to .
North America
In 2019, a study found that the dogs brought initially into the North American Arctic from northeastern Siberia were later replaced by dogs accompanying the Inuit during their expansion beginning 2,000 years ago. These Inuit dogs were more genetically diverse and more morphologically divergent when compared with the earlier dogs. Today, Arctic sledge dogs are the last descendants in the Americas of this pre-European dog lineage. Historical references to the dogs and dog harnesses used by Native American cultures date back to before European contact. The use of dogs as draft animals was widespread in North America. There were two main kinds of sled dogs; one kind was kept by coastal cultures, and the other kind was kept by interior cultures such as the Athabascan Indians. These interior dogs formed the basis of the Alaskan husky.
Russian traders following the Yukon River inland in the mid-1800s acquired sled dogs from the interior villages along the river. The dogs of this area were reputed to be stronger and better at hauling heavy loads than the native Russian sled dogs. The Alaskan Gold Rush brought renewed interest in the use of sled dogs as transportation. Most gold camps were accessible only by dogsled in the winter. "Everything that moved during the frozen season moved by dog team; prospectors, trappers, doctors, mail, commerce, trade, freighting of supplies … if it needed to move in winter, it was moved by sled dogs." This, along with the dogs' use in the exploration of the poles, led to the late 1800s and early 1900s being nicknamed the "Era of the Sled Dog". Sled dogs were used to deliver the mail in Alaska during the late 1800s and early 1900s. Alaskan Malamutes were the favored breed, with teams averaging eight to ten dogs. Dogs were capable of delivering mail in conditions that would stop boats, trains, and horses. Each team hauled between of mail. The mail was stored in waterproofed bags to protect it from the snow. By 1901, dog trails had been established along the entirety of the Yukon River. Mail delivery by dog sled came to an end in 1963 when the last mail carrier to use a dog sled, Chester Noongwook of Savoonga, retired. He was honored by the US Postal Service in a ceremony on St. Lawrence Island in the Bering Sea. Airplanes took over Alaskan mail delivery in the 1920s and 1930s. In 1924, Carl Ben Eielson flew the first Alaskan airmail delivery. Dog sleds were used to patrol western Alaska during World War II. Highways and trucking in the 1940s and 1950s, and the snowmobile in the 1950s and 1960s, contributed to the decline of the working sled dog. Recreational mushing then emerged to maintain the tradition of dog mushing. The desire for larger, stronger, load-pulling dogs changed to one for faster dogs with high endurance used in racing, which caused the dogs to become lighter than they were historically. Americans and others living in Alaska then began to import sled dogs from the native tribes of Siberia (which would later evolve and become the Siberian Husky breed) to increase the speed of their own dogs, presenting "a direct contrast to the idea that Russian traders sought heavier draft-type sled dogs from the Interior regions of Alaska and the Yukon less than a century earlier to increase the hauling capacity of their lighter sled dogs." Outside of Alaska, dog-drawn carts were used to haul peddlers' wares in cities like New York.
Alaska and the Iditarod
In 1925, a massive diphtheria outbreak crippled Nome, Alaska. There was no serum in Nome to treat the people infected by the disease. There was serum in Nenana, but the town was more than away, and inaccessible except by dog sled. A dog sled relay was set up by the villages between Nenana and Nome, and 20 teams worked together to relay the serum to Nome. The serum reached Nome in six days. The Iditarod Trail was established on the path between these two towns. It was known as the Iditarod Trail because, at the time, Iditarod was the largest town on the trail. During the 1940s, the trail fell into disuse. However, in 1967, Dorothy Page, who was conducting Alaska's centennial celebration, ordered of the trail to be cleared for a dog sled race. In 1972, the US Army performed a survey of the trail, and in 1973 the Iditarod was established by Joe Redington, Sr. The race was won by Dick Wilmarth, who took three weeks to complete it.
The modern Iditarod is an endurance sled dog race. It usually lasts for ten to eleven days, weather permitting. It begins with a ceremonial start in Anchorage, Alaska, on the morning of the first Saturday in March, with mushers running to Eagle River along the Alaskan Highway, giving spectators a chance to see the dogs and the mushers. The teams are then loaded onto trucks and driven to Wasilla for the official race start in the afternoon. The race ends when the last musher either drops out of the race or crosses the finish line in Nome. The winner of the race receives a prize of US$50,000. It has been billed as the "World Series of mushing events" and "The Last Great Race on Earth".
Antarctica
The first Arctic explorers were men with sled dogs. Due to the success of using sled dogs in the Arctic, it was thought they would be helpful in Antarctic exploration as well, and many explorers made attempts to use them. Sled dogs were used until 1992, when they were banned from Antarctica by the Protocol on Environmental Protection to the Antarctic Treaty as part of a larger ban on foreign species intended to protect the Antarctic ecosystem. Carsten Borchgrevink used either Sámi sled dogs or Samoyeds with Finnish handlers in Antarctica during his Southern Cross Expedition (1898–1900), but it was much colder than expected at Cape Adare. The dogs were used to working on snow, not on ice, in much milder temperatures. The dogs were also inadequately fed, and eventually all of the dogs died. Erich von Drygalski used Kamchatka sled dogs in his 1901–1903 expedition, and fared much better because his dogs were used to the cold and he hired an experienced dog handler. His dogs were allowed to breed freely and many had to be shot because there was no room on the ship to take them home. Many that were not shot were left behind on the Kerguelen Islands. Otto Nordenskjöld intended to use Greenland dogs in his 1901–1904 expedition, but all but four of his huskies died on the journey south. He picked up mixed-breed dogs in the Falklands, but after his arrival in the Antarctic these were all hunted down and killed by his four surviving huskies, hunting as a pack, because of dog handler Ole Jonassen's failure to tether the dogs. These huskies were later able to pull over in three and a half hours. Robert Falcon Scott brought twenty Samoyeds with him during his 1902 journey. The dogs struggled under the conditions Scott placed them in, with four dogs pulling heavily loaded sleds through of snow with bleeding feet. Scott blamed their failure on rotten dried fish. In 1910, Scott returned with 33 Sakhalin huskies but noted that they performed poorly in deep snow and that their docked tails prevented them from curling up to keep warm. Douglas Mawson and Xavier Mertz were part of the Far Eastern Party, a three-man sledging team with Lieutenant B.E.S. Ninnis, to survey King George V Land, Antarctica. On 14 December 1912, Ninnis fell through a snow-covered crevasse along with most of the party's rations, and was never seen again. Their meagre provisions forced them to eat their remaining dogs on their return journey. The dogs' meat was tough, stringy and without a vestige of fat. Each animal yielded very little, and the major part was fed to the surviving dogs, which ate the meat, skin and bones until nothing remained. The men also ate the dogs' brains and livers. Unfortunately, eating the liver of sled dogs produces the condition hypervitaminosis A, because canines have a much higher tolerance for vitamin A than humans do.
Mertz suffered a quick deterioration. He developed stomach pains and became incapacitated and incoherent. On 7 January 1913, Mertz died. Mawson continued alone, eventually making it back to camp alive. Roald Amundsen's expedition was planned around 97 Esquimaux dogs (possibly Canadian Eskimo Dogs, Greenland Dogs or both). On his first try, two of his dogs froze to death in the temperatures. He tried a second time and was successful. Amundsen was covering a day, with stops every to build a cairn to mark the trail. He had 55 dogs with him, which he culled until he had 14 left when he returned from the pole. On the return trip, a man skied ahead of the dogs and hid meat in the cairns to encourage them to run.
Sled dog breeds
The original sled dogs were chosen for size, strength and stamina, but modern dogs are bred for speed and endurance. Most sled dogs weigh around , but they can weigh as little as , and can exceed . Sled dogs have a very efficient gait, and "mushers strive for a well balanced dog team that matches all dogs for both size (approximately the same) and gait (the walking, trotting or running speeds of the dogs as well as the 'transition speed' where a dog will switch from one gait to another) so that the entire dog team moves in a similar fashion which increases overall team efficiency." They can run up to . Because of this, sled dogs have very tough, webbed feet with closely spaced toes. Their webbed feet act as snow shoes. Sled dog breeds can typically be divided into further sub-types: sprint dogs, bred to pull sleds quickly; freight dogs, bred to pull massive weights; long-distance dogs, bred to travel hundreds or even thousands of miles; and aboriginal multipurpose sled dogs, such as Russian laikas, which pull sleds as well as herd reindeer and hunt game. A dog's fur depends on its use. Freight dogs should have dense, warm coats to hold heat in, and sprint dogs have short coats that let heat out. Most sled dogs have a double coat, with the outer coat keeping snow away from the body, and a waterproof inner coat for insulation. In warm weather, dogs may have problems regulating their body temperature and may overheat. Their tails serve to protect their noses and feet from freezing when the dogs are curled up to sleep. They also have a unique arrangement of blood vessels in their legs to help protect against frostbite. Appetite is a big part of choosing sled dogs; dogs that are picky off the trail may be pickier on the trail. They are fed high-fat diets, and on the trail may eat oily salmon or blubbery sea mammals. Sled dogs must not be overly aggressive with other dogs, and they need a lot of exercise.
Breeds
Alaskan husky
The most commonly used dog in dog sled racing, the Alaskan husky is a mongrel bred specifically for its performance as a sled dog. There are two genetically distinct varieties of the Alaskan husky: a sprinting group and a long-distance group. Alaskan Malamutes and Siberian Huskies contributed the most genetically to the long-distance group, while English Pointers and Salukis contributed the most to the sprinting group. Anatolian Shepherd Dogs contributed a strong work ethic to both varieties. There are many Alaskan huskies that are part Greyhound, which improves their speed.
Alaskan Malamute
Alaskan Malamutes are large, strong freight dogs. They weigh between and have round faces with soft features. Freight dogs are a class of dogs that includes both pedigree and non-pedigree dogs.
Alaskan Malamutes are thought to be one of the first domesticated breeds of dogs, originating in the Kotzebue Sound region of Alaska. These dogs are known for their broad chests, thick coats, and tough feet. Speed has little to no value for these dogs - instead, the emphasis is on pulling strength. They are used in expedition and long adventure trips, and for hauling heavy loads. Alaskan Malamutes were the dog of choice for hauling and messenger work in World War II. Canadian Eskimo Dog The Canadian Eskimo Dog or Canadian Inuit Dog, also known as the Exquimaux Husky, Esquimaux Dog, and Qimmiq (an Inuit language word for dog), has its origins in the aboriginal sled dogs used by the Thule people of Arctic Canada. The breed as it exists today was primarily developed through the work of the Canadian government. It is capable of pulling between per dog for distances between . The Canadian Eskimo Dog was also used as a hunting dog, helping Inuit hunters to catch seals, muskoxen, and polar bears. On 1 May 2000, the Canadian territory of Nunavut officially adopted the "Canadian Inuit Dog" as the animal symbol of the territory. They are considered genetically to be the same breed as the Greenland Dog, as research shows they have not yet diverged enough genetically to be considered separate breeds. Chinook The Chinook is a rare breed of sled dog developed in New Hampshire in the early 1900s by Arthur Walden, a gold rush adventurer and dog driver, and is a blend of English Mastiff, Greenland Dog, German Shepherd Dog, and Belgian Shepherd. It is the state dog of New Hampshire and was recognized by the American Kennel Club (AKC) in the Working Group in 2013. They are described as athletic and "hard bodied" with a "tireless gait". Their coat colour is always tawny, ranging from a pale honey color to reddish-gold. Chukotka Sled Dog The Chukotka Sled Dog (чукотская ездовая) is the aboriginal spitz breed of dog indigenous to the Chukchi people of Russia. Chukotka sled dog teams have been used since prehistoric times to pull sleds in harsh conditions, such as hunting sea mammals on oceanic pack ice. Chukotka sled dogs are most famous as the progenitor of the Siberian husky. Eurohound A Eurohound is a type of dog bred for sprint-style sled dog racing. The Eurohound is typically crossbred from the Alaskan husky group and any of a number of pointing breeds ("pointers"). Greenland Dog Greenland Dogs are heavy dogs with high endurance but little speed. They are frequently used by people offering dog sled adventures and long expeditions. As of 2016, there were about 15,000 Greenland Dogs living in Greenland, but decades ago the number was significantly higher and projects have been initiated to ensure the survival of the breed. In many regions north of the Arctic Circle in Greenland, they are a primary mode of transportation in the winter. Most hunters in Greenland favour dog sled teams over snowmobiles, as the dog sled teams are more reliable. They are considered genetically to be the same breed as the Canadian Eskimo Dog, as research shows they have not yet diverged enough genetically to be considered separate breeds. Greyster The Greyster is a type of sled dog bred for sled dog racing, especially dryland sports like canicross and bikejoring. The Greyster is crossbred from the Greyhound and the German Shorthair Pointer. 
Kamchatka Sled Dog
The Kamchatka Sled Dog (Russian: Камчатских ездовых собак, literally "Kamchatka riding dog") is a rare landrace of sled laika developed by the Itelmen and Koryak people of Kamchatka, Russia. There are currently efforts underway to revive the breed.
Labrador Husky
The Labrador Husky originated in the Labrador portion of the Canadian province of Newfoundland and Labrador. The breed probably arrived in the area with the Inuit who came to Canada around 1300 AD. Despite the name, Labrador huskies are not related to the Labrador retriever, but are in fact most closely related to the Canadian Eskimo Dog. There are estimated to be 50–60 Labrador huskies in the world.
Mackenzie River Husky
The term Mackenzie River husky describes several overlapping local populations of Arctic and sub-Arctic sled dog-type dogs, none of which constitutes a breed. Dogs from the Yukon were crossed with large European breeds such as St. Bernards and Newfoundlands to create a powerful freighting dog capable of surviving harsh Arctic conditions.
Samoyed
The Samoyed is a laika developed by the Samoyede people of Siberia, who used them to herd reindeer and hunt, in addition to hauling sleds. These dogs were so prized, and the people who owned them so dependent upon them for survival, that the dogs were allowed to sleep in the tents with their owners. Samoyeds weigh about for males and for females and stand from at the shoulder.
Sakhalin Husky
The Sakhalin Husky, also known as the Karafuto Ken (樺太犬), is a breed of sled dog developed on the island of Sakhalin. Sakhalin huskies are prized for their hardiness, great temperaments and easy trainability, and were the preferred dog used by the Soviet army for hauling gear in harsh conditions prior to World War II. Unfortunately, with the advent of mechanized travel, Soviet officials determined that the cost of maintaining Sakhalins was wasteful and had them exterminated; only a handful residing in Japan survived. There are approximately 20 Sakhalin Huskies remaining on Sakhalin Island.
Siberian Husky
Smaller than the similar-appearing Alaskan Malamute, the Siberian Husky pulls more, pound for pound, than a Malamute. Descendants of the sled dogs bred and used by the native Chukchi people of Siberia that were imported to Alaska in the early 1900s, they were used as working dogs and racing sled dogs in Nome, Alaska, throughout the 1910s, often dominating the All-Alaska Sweepstakes. They later became widely bred by recreational mushers and show-dog fanciers in the United States and Canada as the Siberian Husky, after the popularity garnered from the 1925 serum run to Nome. Siberians stand , weigh between ( for females, for males), and have been selectively bred for both appearance and pulling ability. They are still used regularly today as sled dogs by competitive, recreational, and tour-guide mushers.
Yakutian Laika
The Yakutian Laika (Russian: Якутская лайка) is an ancient working dog breed that originated along the Arctic seashore of the Sakha (Yakutia) Republic. In terms of functionality, Yakutian Laikas are sled laikas, able to herd, hunt, and haul freight. The Yakutian Laika is recognized by the Fédération Cynologique Internationale (FCI) and the AKC's Foundation Stock Service. The Yakutian Laika is a medium-sized, strong and compact dog, with powerful muscles and a thick double coat to handle bitter Arctic temperatures.
They were the preferred dog of Russian polar explorer Georgy Ushakov, who prized them for their hardiness and versatility, being able to hunt seals and polar bears as well as haul sleds for thousands of miles.
Other breeds
Numerous non-sled dog breeds have been used as sled dogs. Poodles, Irish Setters, German Shorthaired Pointers, Labrador Retrievers, Golden Retrievers, Newfoundlands, Chow Chows and St. Bernards have all been used to pull sleds in the past.
World Championships
The IFSS (International Federation of Sleddog Sports) held the first World Championships in Saint Moritz, Switzerland, in 1990, with classes only in Sled Sprint (10-Dog, 8-Dog, and 6-Dog) and Skidog Pulka for men and women. 113 competitors arrived in the starting chutes to mark the momentous occasion. At first, World Championships were held each year, but after the 1995 events, it was decided to hold them every two years, which facilitated the bidding process and allowed the host organization more time for preparation.
Famous sled dogs
Balto
Balto was the lead dog of the sled dog team that carried the diphtheria serum on the last leg of the relay to Nome during the 1925 diphtheria epidemic. He was driven by musher Gunnar Kaasen, who worked for Leonhard Seppala. Seppala had also bred Balto. In 1925, 10 months after Balto completed his run, a bronze statue was erected in his honour in Central Park near the Tisch Children's Zoo. The statue was sculpted by Frederick George Richard Roth. Children frequently climb the statue to pretend to ride on the dog. The plaque at the base of the statue reads "Endurance · Fidelity · Intelligence". Balto's body was stuffed following his death in 1933, and is on display at the Cleveland Museum of Natural History. In 1995, a Universal Pictures animated movie based loosely on him, Balto, was released.
Togo
Togo was the lead sled dog of Leonhard Seppala and his dog sled team in the 1925 serum run to Nome across central and northern Alaska. Seppala considered Togo to be the greatest sled dog and lead dog of his mushing career, and of that age in Alaska, stating in 1960: "I never had a better dog than Togo. His stamina, loyalty and intelligence could not be improved upon. Togo was the best dog that ever traveled the Alaska trail." Katy Steinmetz in Time magazine named Togo the most heroic animal of all time, writing that "the dog that often gets credit for eventually saving the town is Balto, but he just happened to run the last, 55-mile leg in the race. The sled dog who did the lion's share of the work was Togo. His journey, fraught with white-out storms, was the longest by 200 miles and included a traverse across perilous Norton Sound — where he saved his team and driver in a courageous swim through ice floes." Togo would go on to become one of the foundation dogs for lines of Siberian sled dogs, including, eventually, the registered Siberian Husky breed. In 2019, Walt Disney Pictures released Togo, a film starring Willem Dafoe as Leonhard Seppala.
Taro and Jiro
In 1958, an ill-fated Japanese research expedition to Antarctica made an emergency evacuation, leaving behind 15 sled dogs. The researchers believed that a relief team would arrive within a few days, so they left the dogs chained up outside with a small supply of food; however, the weather turned bad and the team never made it to the outpost. One year later, a new expedition arrived and discovered that two of the dogs, Taro and Jiro, had survived. The breed spiked in popularity upon the release of the 1983 film Nankyoku Monogatari.
A second film from 2006, Eight Below, provided a fictionalized version of the events, but did not reference the breed. Instead, the film features only eight dogs: two Alaskan Malamutes and six Siberian Huskies.
Other dogs
Anna was a small sled dog who ran on Pam Flowers' team during her expedition to become the first woman to cross the Arctic alone. She was noted for being the smallest dog to run on the team, and a picture book was created about her journey in the Arctic. There are numerous stories of blind sled dogs that continue to run, either on their own or with assistance from other dogs on the team.
Technology
Agriculture, labor and economy
null
273244
https://en.wikipedia.org/wiki/Rhizobia
Rhizobia
Rhizobia are diazotrophic bacteria that fix nitrogen after becoming established inside the root nodules of legumes (Fabaceae). To express genes for nitrogen fixation, rhizobia require a plant host; they cannot independently fix nitrogen. In general, they are gram-negative, motile, non-sporulating rods. Rhizobia are a "group of soil bacteria that infect the roots of legumes to form root nodules". Rhizobia are found in the soil and, after infection, produce nodules in the legume where they fix nitrogen gas (N2) from the atmosphere, turning it into a more readily useful form of nitrogen. From here, the nitrogen is exported from the nodules and used for growth in the legume. Once the legume dies, the nodule breaks down and releases the rhizobia back into the soil, where they can live individually or reinfect a new legume host.
History
The first known species of rhizobia, Rhizobium leguminosarum, was identified in 1889, and all further species were initially placed in the Rhizobium genus. Most research has been done on crop and forage legumes such as clover, alfalfa, beans, peas, and soybeans; more research is being done on North American legumes.
Taxonomy
Rhizobia are a paraphyletic group that fall into two classes of Pseudomonadota, the alphaproteobacteria and betaproteobacteria. Most belong to the order Hyphomicrobiales, but several rhizobia occur in distinct bacterial orders of the Pseudomonadota. These groups include a variety of non-symbiotic bacteria. For instance, the plant pathogen Agrobacterium is a closer relative of Rhizobium than the Bradyrhizobium that nodulate soybean.
Importance in agriculture
Although much of the nitrogen is removed when protein-rich grain or hay is harvested, significant amounts can remain in the soil for future crops. This is especially important when nitrogen fertilizer is not used, as in organic rotation schemes or in some less-industrialized countries. Nitrogen is the most commonly deficient nutrient in many soils around the world, and it is the most commonly supplied plant nutrient. The supply of nitrogen through fertilizers raises severe environmental concerns. Specific strains of rhizobia are required to make functional nodules on the roots that are able to fix N2. Having the appropriate rhizobia present is beneficial to the legume, as the N2 fixation can increase crop yield. Inoculation with rhizobia tends to increase yield. Rhizobia have been found to increase legume resistance to insect herbivores, particularly when several species of rhizobia are present. Legume inoculation has been an agricultural practice for many years and has continuously improved over time. 12–20 million hectares of soybeans are inoculated annually. An ideal inoculant has some of the following properties: maximum efficacy, ease of use, compatibility, high rhizobial concentration, long shelf-life, usefulness under varying field conditions, and survivability. Such inoculants may foster success in legume cultivation. As a result of the nodulation process, after the harvest of the crop there are higher levels of soil nitrate, which can then be used by the next crop.
Symbiotic relationship
Rhizobia are unique in that they are the only nitrogen-fixing bacteria living in a symbiotic relationship with legumes. Common crop and forage legumes are peas, beans, clover, and soy.
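For reference, the overall reaction catalyzed by the bacteroid enzyme nitrogenase (the fixation step described in more detail in the next section) is conventionally written with the stoichiometry below; this equation is standard biochemistry rather than something stated elsewhere in this article:

N2 + 8 H+ + 8 e− + 16 ATP → 2 NH3 + H2 + 16 ADP + 16 Pi

The large ATP requirement is one reason the plant must supply the bacteroids with a steady stream of organic acids as an energy source.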
Nature of the mutualism
The legume–rhizobium symbiosis is a classic example of mutualism: rhizobia supply ammonia or amino acids to the plant and, in return, receive organic acids (mainly malate and succinate, which are dicarboxylic acids) as a carbon and energy source. However, because several unrelated strains infect each individual plant, a classic tragedy of the commons scenario presents itself. Cheater strains may hoard plant resources such as polyhydroxybutyrate for the benefit of their own reproduction without fixing an appreciable amount of nitrogen. Given the costs involved in nodulation and the opportunity for rhizobia to cheat, it may be surprising that this symbiosis exists.
Infection and signal exchange
The formation of the symbiotic relationship involves a signal exchange between both partners that leads to mutual recognition and the development of symbiotic structures. The best-understood mechanism for the establishment of this symbiosis is intracellular infection. Rhizobia are free-living in the soil until they are able to sense flavonoids, derivatives of 2-phenyl-1,4-benzopyrone, which are secreted by the roots of their host plant, triggering the accumulation of a large population of cells and eventually attachment to root hairs. These flavonoids then promote the DNA-binding activity of NodD, which belongs to the LysR family of transcriptional regulators and triggers the secretion of Nod factors after the bacteria have entered the root hair. Nod factors trigger a series of complex developmental changes inside the root hair, beginning with root hair curling and followed by the formation of the infection thread, a cellulose-lined tube that the bacteria use to travel down through the root hair into the root cells. The bacteria then infect several other adjacent root cells. This is followed by continuous cell proliferation, resulting in the formation of the root nodule. A second mechanism, used especially by rhizobia that infect aquatic hosts, is called crack entry. In this case, no root hair deformation is observed. Instead, the bacteria penetrate between cells through cracks produced by lateral root emergence. Inside the nodule, the bacteria differentiate morphologically into bacteroids and fix atmospheric nitrogen into ammonium using the enzyme nitrogenase. Ammonium is then converted into amino acids like glutamine and asparagine before it is exported to the plant. In return, the plant supplies the bacteria with carbohydrates in the form of organic acids. The plant also provides the bacteroid with oxygen for cellular respiration, tightly bound by leghaemoglobins, plant proteins similar to human hemoglobins. This process keeps the nodule oxygen-poor in order to prevent the inhibition of nitrogenase activity. Recently, a Bradyrhizobium strain was discovered to form nodules in Aeschynomene without producing Nod factors, suggesting the existence of alternative communication signals other than Nod factors, possibly involving the secretion of the plant hormone cytokinin. It has also been observed that root nodules can be formed spontaneously in Medicago without the presence of rhizobia. This implies that the development of the nodule is controlled entirely by the plant and simply triggered by the secretion of Nod factors.
Evolutionary hypotheses
There are two main hypotheses for the mechanism that maintains the legume–rhizobium symbiosis (though both may occur in nature).
The sanctions hypothesis
The sanctions hypothesis theorizes that legumes cannot recognize the more parasitic or less nitrogen-fixing rhizobia and must counter the parasitism with post-infection sanctions. In response to underperforming rhizobia, legume hosts can impose sanctions of varying severity on their nodules. These sanctions include, but are not limited to, reduction of nodule growth, early nodule death, decreased carbon supply to nodules, or reduced oxygen supply to nodules that fix less nitrogen. Within a nodule, some of the bacteria differentiate into nitrogen-fixing bacteroids, which have been found to be unable to reproduce. Therefore, if the host sanctions hypothesis is correct, the sanctions must act on whole nodules rather than on individual bacteria, because sanctions targeting individual bacteria would prevent any reproducing rhizobia from proliferating over time. This ability to reinforce a mutual relationship with host sanctions pushes the relationship toward mutualism rather than parasitism and is likely a contributing factor to why the symbiosis exists. However, other studies have found no evidence of plant sanctions.
The partner choice hypothesis
The partner choice hypothesis proposes that the plant uses prenodulation signals from the rhizobia to decide whether to allow nodulation, and chooses only noncheating rhizobia. There is evidence for sanctions in soybean plants, which reduce rhizobium reproduction (perhaps by limiting oxygen supply) in nodules that fix less nitrogen. Likewise, wild lupine plants allocate fewer resources to nodules containing less-beneficial rhizobia, limiting rhizobial reproduction inside. This is consistent with the definition of sanctions, although called "partner choice" by the authors. Some studies support the partner choice hypothesis. While both mechanisms no doubt contribute significantly to maintaining rhizobial cooperation, they do not in themselves fully explain the persistence of the mutualism. The partner choice hypothesis is not mutually exclusive with the host sanctions hypothesis, as it is apparent that both are prevalent in the symbiotic relationship.
Evolutionary history
The symbiosis between nitrogen-fixing rhizobia and the legume family has emerged and evolved over the past 66 million years. Although, in the selfish-gene model, evolution tends to favor one species taking advantage of another through noncooperation, management of such a symbiosis allows cooperation to continue. When the relative fitness of both species is increased, natural selection will favor the symbiosis. To understand the evolutionary history of this symbiosis, it is helpful to compare the rhizobia–legume symbiosis to a more ancient symbiotic relationship, such as that between endomycorrhizal fungi and land plants, which dates back almost 460 million years. Endomycorrhizal symbiosis can provide many insights into rhizobial symbiosis because recent genetic studies have suggested that rhizobia co-opted the signaling pathways from the more ancient endomycorrhizal symbiosis. Bacteria secrete Nod factors and endomycorrhizae secrete Myc-LCOs. Upon recognition of the Nod factor/Myc-LCO, the plant proceeds to induce a variety of intracellular responses to prepare for the symbiosis. It is likely that rhizobia co-opted the features already in place for endomycorrhizal symbiosis, because there are many shared or similar genes involved in the two processes.
For example, the plant recognition gene SYMRK (symbiosis receptor-like kinase) is involved in the perception of both the rhizobial Nod factors and the endomycorrhizal Myc-LCOs. These shared processes would have greatly facilitated the evolution of rhizobial symbiosis, because not all of the symbiotic mechanisms would have needed to develop anew. Instead, the rhizobia simply needed to evolve mechanisms to take advantage of the symbiotic signaling processes already in place from endomycorrhizal symbiosis.
Ecology
Effects of rhizobia on legume host characteristics
When associating with rhizobia, legumes often experience growth benefits and increased resistance to stress. Rhizobia's ability to convert inorganic atmospheric nitrogen into ammonia provides leguminous plants with access to a resource that limits many other plants, increasing their fitness and the biodiversity of their ecosystems. These growth benefits include increased overall plant growth, greater above- and below-ground biomass, increased shoot biomass, increased leaf protein levels, and more attractive floral traits for pollinators. Rhizobia have also been shown to increase legume resistance to insect herbivores when rhizobia diversity is high, specifically by increasing the expression of defensive traits that reduce leaf herbivory and the number of sap-sucking aphids.
Effects of the mutualism on other species
Other species that engage symbiotically with legumes are affected by the legume–rhizobia mutualism. Legumes associating with rhizobia sometimes produce less ant-attracting extrafloral nectar, leading to fewer ants being present to provide defensive benefits. Legumes hosting rhizobia have been observed receiving more pollinator visits, despite not always increasing production of inflorescences. The presence of rhizobia increases the colonization rate of legume cells by arbuscular mycorrhizal fungi, increasing the quantity of soil nutrients available to the legume.
Context-dependency of the mutualism
The legume–rhizobia mutualism is context-dependent; the benefits provided by rhizobia are lessened or absent under unfavorable environmental conditions. Perturbations can alter the balance of symbiotic relationships between species, as reduced benefits can lead to antagonistic behavior, such as parasitism. These disruptions lead plant species to lessen their investment in the relationships, and perhaps even stop engaging in them altogether. For example, nutrient deposition has led to the emergence of less productive strains of rhizobia, and increased ambient temperatures have led legumes to reduce their investment in the resource mutualism. Nutrient deposition is of particular concern to legumes, as the increased availability of nitrogen allows nitrogen-limited plant species to quickly outcompete legumes for light. This increases the costs of photosynthesis, further destabilizing the legume–rhizobia mutualism, as the legume suffers fitness consequences and is unable to provide benefits to the rhizobia.
Other diazotrophs
Many other species of bacteria are able to fix nitrogen (diazotrophs), but few are able to associate intimately with plants and colonize specific structures like legume nodules. Bacteria that do associate with plants include the actinomycete Frankia, which forms symbiotic root nodules in actinorhizal plants, although these bacteria have a much broader host range, implying the association is less specific than in legumes.
Several cyanobacteria such as Nostoc are also associated with aquatic ferns, Cycas, and Gunnera, although they do not form nodules. In addition, loosely associated plant bacteria, termed endophytes, have been reported to fix nitrogen in planta. These bacteria colonize the intercellular spaces of leaves, stems, and roots in plants but do not form specialized structures as rhizobia and Frankia do. Diazotrophic bacterial endophytes have very broad host ranges, in some cases colonizing both monocots and dicots.
Biology and health sciences
Gram-negative bacteria
Plants
273524
https://en.wikipedia.org/wiki/Linear%20particle%20accelerator
Linear particle accelerator
A linear particle accelerator (often shortened to linac) is a type of particle accelerator that accelerates charged subatomic particles or ions to a high speed by subjecting them to a series of oscillating electric potentials along a linear beamline. The principles for such machines were proposed by Gustav Ising in 1924, while the first machine that worked was constructed by Rolf Widerøe in 1928 at RWTH Aachen University. Linacs have many applications: they generate X-rays and high-energy electrons for medical purposes in radiation therapy, serve as particle injectors for higher-energy accelerators, and are used directly to achieve the highest kinetic energy for light particles (electrons and positrons) for particle physics. The design of a linac depends on the type of particle that is being accelerated: electrons, protons or ions. Linacs range in size from a cathode-ray tube (which is a type of linac) to the linac at the SLAC National Accelerator Laboratory in Menlo Park, California.
History
In 1924, Gustav Ising published the first description of a linear particle accelerator using a series of accelerating gaps. Particles would proceed down a series of tubes. At a regular frequency, an accelerating voltage would be applied across each gap. As the particles gained speed while the frequency remained constant, the gaps would be spaced farther and farther apart, in order to ensure that the particle would see a voltage applied as it reached each gap. Ising never successfully implemented this design. Rolf Widerøe discovered Ising's paper in 1927, and as part of his PhD thesis he built an 88-inch-long, two-gap version of the device. Where Ising had proposed a spark gap as the voltage source, Widerøe used a 25 kV vacuum-tube oscillator. He successfully demonstrated that he had accelerated sodium and potassium ions to an energy of 50,000 electron volts (50 keV), twice the energy they would have received if accelerated only once by the tube. By successfully accelerating a particle multiple times using the same voltage source, Widerøe demonstrated the utility of radio frequency (RF) acceleration. This type of linac was limited by the voltage sources that were available at the time, and it was not until after World War II that Luis Alvarez was able to use newly developed high-frequency oscillators to design the first resonant-cavity drift tube linac. An Alvarez linac differs from the Widerøe type in that the RF power is applied to the entire resonant chamber through which the particle travels, and the central tubes are only used to shield the particles during the decelerating portion of the oscillator's phase. Using this approach to acceleration meant that Alvarez's first linac was able to achieve proton energies of 31.5 MeV in 1947, the highest that had ever been reached at the time. The initial Alvarez-type linacs had no strong mechanism for keeping the beam focused and were limited in length and energy as a result. The development of the strong focusing principle in the early 1950s led to the installation of focusing quadrupole magnets inside the drift tubes, allowing for longer and thus more powerful linacs. Two of the earliest examples of Alvarez linacs with strong focusing magnets were built at CERN and Brookhaven National Laboratory. In 1947, at about the same time that Alvarez was developing his linac concept for protons, William Hansen constructed the first travelling-wave electron accelerator at Stanford University.
Electrons are sufficiently lighter than protons that they achieve speeds close to the speed of light early in the acceleration process. As a result, "accelerating" electrons increase in energy but can be treated as having a constant velocity from an accelerator design standpoint. This allowed Hansen to use an accelerating structure consisting of a horizontal waveguide loaded by a series of discs. The 1947 accelerator had an energy of 6 MeV. Over time, electron acceleration at the SLAC National Accelerator Laboratory would extend to a length of about 3.2 kilometers and an output energy of 50 GeV. As linear accelerators were developed with higher beam currents, using magnetic fields to focus proton and heavy-ion beams presented difficulties for the initial stages of the accelerator. Because the magnetic force depends on the particle velocity, it was desirable to create a type of accelerator which could simultaneously accelerate and focus low- to mid-energy hadrons. In 1970, Soviet physicists I. M. Kapchinsky and Vladimir Teplyakov proposed the radio-frequency quadrupole (RFQ) type of accelerating structure. RFQs use vanes or rods with precisely designed shapes in a resonant cavity to produce complex electric fields. These fields provide simultaneous acceleration and focusing to injected particle beams. Beginning in the 1960s, scientists at Stanford and elsewhere began to explore the use of superconducting radio-frequency cavities for particle acceleration. Superconducting cavities made of niobium alloys allow for much more efficient acceleration, as a substantially higher fraction of the input power can be applied to the beam rather than lost to heat. Some of the earliest superconducting linacs included the Superconducting Linear Accelerator (for electrons) at Stanford and the Argonne Tandem Linear Accelerator System (for protons and heavy ions) at Argonne National Laboratory.
Basic principles of operation
Radiofrequency acceleration
When a charged particle is placed in an electromagnetic field, it experiences a force given by the Lorentz force law, F = q(E + v × B), where q is the charge on the particle, E is the electric field, v is the particle velocity, and B is the magnetic field. The cross product in the magnetic field term means that static magnetic fields cannot be used for particle acceleration, as the magnetic force acts perpendicularly to the direction of particle motion. As electrostatic breakdown limits the maximum constant voltage which can be applied across a gap to produce an electric field, most accelerators use some form of RF acceleration. In RF acceleration, the particle traverses a series of accelerating regions, driven by a voltage source in such a way that the particle sees an accelerating field as it crosses each region. In this type of acceleration, particles must necessarily travel in "bunches" corresponding to the portion of the oscillator's cycle where the electric field is pointing in the intended direction of acceleration. If a single oscillating voltage source is used to drive a series of gaps, those gaps must be placed increasingly far apart as the speed of the particle increases. This is to ensure that the particle "sees" the same phase of the oscillator's cycle as it reaches each gap. As particles asymptotically approach the speed of light, the gap separation becomes constant: additional applied force increases the energy of the particles but does not significantly alter their speed.
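The two points above (that the gap spacing must grow with particle speed at a fixed drive frequency, and that electrons quickly behave as fixed-velocity particles while protons do not) can be illustrated with a short calculation. The Python sketch below uses standard relativistic kinematics; the 3 GHz drive frequency and the sample energies are illustrative assumptions rather than values taken from this article.

```python
# Minimal sketch (illustrative values, not from the article): relativistic speed
# beta = v/c as a function of kinetic energy, and the corresponding half-period
# gap spacing L = beta * c / (2 f) for a fixed RF drive frequency.
import math

C = 299_792_458.0          # speed of light, m/s
REST_ENERGY_MEV = {"electron": 0.511, "proton": 938.272}

def beta_from_kinetic_energy(kinetic_mev: float, particle: str) -> float:
    """Return v/c for a particle of given kinetic energy (MeV), using gamma = 1 + T/mc^2."""
    gamma = 1.0 + kinetic_mev / REST_ENERGY_MEV[particle]
    return math.sqrt(1.0 - 1.0 / gamma**2)

def gap_spacing_m(kinetic_mev: float, particle: str, rf_frequency_hz: float = 3e9) -> float:
    """Distance the particle covers in half an RF period: the spacing needed so it
    sees the accelerating phase at each successive gap."""
    return beta_from_kinetic_energy(kinetic_mev, particle) * C / (2.0 * rf_frequency_hz)

if __name__ == "__main__":
    for particle in ("electron", "proton"):
        for t_mev in (0.1, 1, 10, 100, 1000):
            print(f"{particle:8s} T={t_mev:7.1f} MeV"
                  f"  beta={beta_from_kinetic_energy(t_mev, particle):.4f}"
                  f"  gap spacing={gap_spacing_m(t_mev, particle)*100:.2f} cm")
```

Running this shows that the electron spacing is essentially constant above roughly 10 MeV, while the proton spacing keeps growing well into the GeV range, which is one reason electron and ion linacs use such different accelerating structures.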
Focusing
In order to ensure that particles do not escape the accelerator, it is necessary to provide some form of focusing to redirect particles moving away from the central trajectory back towards the intended path. With the discovery of strong focusing, quadrupole magnets are used to actively redirect particles moving away from the reference path. As quadrupole magnets are focusing in one transverse direction and defocusing in the perpendicular direction, it is necessary to use groups of magnets to provide an overall focusing effect in both directions.
Phase stability
Focusing along the direction of travel, also known as phase stability, is an inherent property of RF acceleration. If the particles in a bunch all reach the accelerating region during the rising phase of the oscillating field, then particles which arrive early will see slightly less voltage than the "reference" particle at the center of the bunch. Those particles will therefore receive slightly less acceleration and eventually fall behind the reference particle. Correspondingly, particles which arrive after the reference particle will receive slightly more acceleration, and will catch up to the reference as a result. This automatic correction occurs at each accelerating gap, so the bunch is refocused along the direction of travel each time it is accelerated.
Construction and operation
A linear particle accelerator consists of the following parts:
A straight hollow-pipe vacuum chamber which contains the other components. It is evacuated with a vacuum pump so that the accelerated particles will not collide with air molecules. The length will vary with the application. If the device is used for the production of X-rays for inspection or therapy, the pipe may be only 0.5 to 1.5 meters long. If the device is to be an injector for a synchrotron, it may be about ten meters long. If the device is used as the primary accelerator for nuclear particle investigations, it may be several thousand meters long.
The particle source (S) at one end of the chamber, which produces the charged particles that the machine accelerates. The design of the source depends on the particle that is being accelerated. Electrons are generated by a cold cathode, a hot cathode, a photocathode, or radio-frequency ion sources. Protons are generated in an ion source, which can have many different designs. If heavier particles are to be accelerated (e.g., uranium ions), a specialized ion source is needed. The source has its own high-voltage supply to inject the particles into the beamline.
Extending along the pipe from the source is a series of open-ended cylindrical electrodes (C1, C2, C3, C4), whose length increases progressively with the distance from the source. The particles from the source pass through these electrodes. The length of each electrode is determined by the frequency and power of the driving power source and the particle to be accelerated, so that the particle passes through each electrode in exactly one-half cycle of the accelerating voltage. The mass of the particle has a large effect on the length of the cylindrical electrodes; for example, an electron is considerably lighter than a proton and so will generally require a much smaller section of cylindrical electrodes, as it accelerates very quickly.
A target with which the particles collide, located at the end of the accelerating electrodes. If electrons are accelerated to produce X-rays, then a water-cooled tungsten target is used.
Various target materials are used when protons or other nuclei are accelerated, depending upon the specific investigation. Behind the target are various detectors to detect the particles resulting from the collision of the incoming particles with the atoms of the target. Many linacs serve as the initial accelerator stage for larger particle accelerators such as synchrotrons and storage rings, and in this case, after leaving the electrodes, the accelerated particles enter the next stage of the accelerator.
An electronic oscillator and amplifier (G), which generates a radio-frequency AC voltage of high potential (usually thousands of volts) that is applied to the cylindrical electrodes. This is the accelerating voltage which produces the electric field that accelerates the particles. Opposite-phase voltage is applied to successive electrodes. A high-power accelerator will have a separate amplifier to power each electrode, all synchronized to the same frequency.
The oscillating voltage applied to alternate cylindrical electrodes has opposite polarity (180° out of phase), so adjacent electrodes have opposite voltages. This creates an oscillating electric field (E) in the gap between each pair of electrodes, which exerts force on the particles when they pass through, imparting energy to them by accelerating them. The particle source injects a group of particles into the first electrode once each cycle of the voltage, when the charge on the electrode is opposite to the charge on the particles. Each time the particle bunch passes through an electrode, the oscillating voltage changes polarity, so when the particles reach the gap between electrodes the electric field is in the correct direction to accelerate them. Therefore, the particles accelerate to a faster speed each time they pass between electrodes; there is little electric field inside the electrodes, so the particles travel at a constant speed within each electrode. The particles are injected at the right time so that the oscillating voltage differential between electrodes is maximum as the particles cross each gap. If the peak voltage applied between the electrodes is V volts, and the charge on each particle is q elementary charges, the particle gains an energy increment of qV electron volts when passing through each gap. Thus the output energy of the particles is NqV electron volts, where N is the number of accelerating electrodes in the machine. At speeds near the speed of light, the incremental velocity increase will be small, with the energy appearing as an increase in the mass of the particles. In portions of the accelerator where this occurs, the tubular electrode lengths will be almost constant. Additional magnetic or electrostatic lens elements may be included to ensure that the beam remains in the center of the pipe and its electrodes. Very long accelerators may maintain a precise alignment of their components through the use of servo systems guided by a laser beam.
Concepts in development
Various new concepts are in development as of 2021. The primary goal is to make linear accelerators cheaper, with better-focused beams, higher energy or higher beam current.
Induction linear accelerator
Induction linear accelerators use the electric field induced by a time-varying magnetic field for acceleration, as in the betatron.
The particle beam passes through a series of ring-shaped ferrite cores standing one behind the other, which are magnetized by high-current pulses and in turn each generate an electric field pulse along the axis of the beam direction. Induction linear accelerators are considered for short, high-current pulses of electrons but also of heavy ions. The concept goes back to the work of Nicholas Christofilos. Its realization is highly dependent on progress in the development of more suitable ferrite materials. With electrons, pulse currents of up to 5 kiloamps at energies up to 5 MeV and pulse durations in the range of 20 to 300 nanoseconds have been achieved.
Energy recovery linac
In previous electron linear accelerators, the accelerated particles are used only once and then fed into an absorber (beam dump), in which their residual energy is converted into heat. In an energy recovery linac (ERL), the electrons accelerated in the resonators are used, for example, in undulators, and are then fed back through the accelerator, out of phase by 180 degrees. They therefore pass through the resonators in the decelerating phase and thus return their remaining energy to the field. The concept is comparable to the hybrid drive of motor vehicles, where the kinetic energy released during braking is made available for the next acceleration by charging a battery. Brookhaven National Laboratory and the Helmholtz-Zentrum Berlin, with the project "bERLinPro", have reported corresponding development work. The Berlin experimental accelerator uses superconducting niobium cavity resonators. In 2014, three free-electron lasers based on ERLs were in operation worldwide: at the Jefferson Lab (US), at the Budker Institute of Nuclear Physics (Russia) and at JAEA (Japan). At the University of Mainz, an ERL called MESA is expected to begin operation in 2024.
Compact Linear Collider
The concept of the Compact Linear Collider (CLIC) (original name CERN Linear Collider, with the same abbreviation) for electrons and positrons provides a traveling-wave accelerator for energies of the order of 1 tera-electron volt (TeV). Instead of the numerous klystron amplifiers that would otherwise be necessary to generate the acceleration power, a second, parallel electron linear accelerator of lower energy is to be used, which works with superconducting cavities in which standing waves are formed. High-frequency power is extracted from it at regular intervals and transmitted to the main accelerator. In this way, the very high accelerating field strength of 80 MV/m should be achieved.
Wakefield accelerator (plasma accelerator)
In cavity resonators, the dielectric strength limits the maximum acceleration that can be achieved within a certain distance. This limit can be circumvented by using accelerating waves in a plasma to generate the accelerating field, as in wakefield (Kielfeld) accelerators: a laser or particle beam excites an oscillation in a plasma, which is associated with very strong electric field strengths. This means that significantly more compact linear accelerators (smaller by factors of hundreds to thousands) can possibly be built. Experiments involving high-power lasers in metal vapour plasmas suggest that a beam line length reduction from some tens of metres to a few centimetres is quite possible.
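The scale of the compactness gain claimed for plasma (wakefield) acceleration can be estimated with the standard cold wave-breaking field, E0 = me·c·ωp/e, which sets a rough upper bound on the gradient a plasma wave can sustain. The sketch below is only an order-of-magnitude illustration: the plasma density is an assumed, typical laser-plasma value, and the 80 MV/m comparison figure is the CLIC gradient quoted above.

```python
# Rough sketch of why plasma ("wakefield") accelerators promise such compact machines.
# The cold wave-breaking field E0 = m_e * c * omega_p / e is a standard upper-bound
# estimate for the accelerating gradient of a plasma wave; the density used below is
# an illustrative assumption, not a value from the article.
import math

E_CHARGE = 1.602176634e-19     # elementary charge, C
M_ELECTRON = 9.1093837015e-31  # electron mass, kg
EPSILON_0 = 8.8541878128e-12   # vacuum permittivity, F/m
C = 299_792_458.0              # speed of light, m/s

def plasma_frequency(n_e_per_m3: float) -> float:
    """Electron plasma (angular) frequency omega_p in rad/s."""
    return math.sqrt(n_e_per_m3 * E_CHARGE**2 / (EPSILON_0 * M_ELECTRON))

def wave_breaking_field(n_e_per_m3: float) -> float:
    """Cold wave-breaking field E0 = m_e c omega_p / e, in V/m."""
    return M_ELECTRON * C * plasma_frequency(n_e_per_m3) / E_CHARGE

if __name__ == "__main__":
    n_e = 1e18 * 1e6                      # 1e18 cm^-3, a typical laser-plasma density
    e0 = wave_breaking_field(n_e)
    print(f"Plasma gradient        ~ {e0/1e9:.0f} GV/m")
    print("Conventional RF target ~ 0.08 GV/m (the 80 MV/m quoted for CLIC above)")
    print(f"Length for 1 GeV gain: plasma ~ {1e9/e0*100:.1f} cm vs RF ~ {1e9/80e6:.0f} m")
```

With these assumed numbers the plasma gradient comes out near 100 GV/m, about a thousand times the RF value, consistent with the "tens of metres to a few centimetres" reduction mentioned above.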
Compact medical accelerators
The LIGHT program (Linac for Image-Guided Hadron Therapy) hopes to create a design capable of accelerating protons to 200 MeV or so for medical use over a distance of a few tens of metres, by optimising and nesting existing accelerator techniques. The current design (2020) uses the highest practical bunch frequency (currently ~3 GHz) for a radio-frequency quadrupole (RFQ) stage from injection at 50 kV DC to ~5 MeV bunches, a Side Coupled Drift Tube Linac (SCDTL) to accelerate from 5 MeV to ~40 MeV, and a final Cell Coupled Linac (CCL) stage taking the output to 200–230 MeV. Each stage is optimised to allow close coupling and synchronous operation during the beam energy build-up. The project aim is to make proton therapy a more accessible mainstream medicine as an alternative to existing radiotherapy.
Modern concepts
The higher the frequency of the acceleration voltage selected, the more individual acceleration thrusts per path length a particle of a given speed experiences, and the shorter the accelerator can therefore be overall. That is why accelerator technology developed in the pursuit of higher particle energies, especially towards higher frequencies. The linear accelerator concepts (often called accelerator structures in technical terms) that have been used since around 1950 work with frequencies in the range from around 100 megahertz (MHz) to a few gigahertz (GHz) and use the electric field component of electromagnetic waves.
Standing waves and traveling waves
When it comes to energies of more than a few MeV, accelerators for ions are different from those for electrons. The reason for this is the large mass difference between the particles. Electrons are already close to the speed of light, the absolute speed limit, at a few MeV; with further acceleration, as described by relativistic mechanics, almost only their energy and momentum increase. On the other hand, with ions in this energy range, the speed also increases significantly with further acceleration. The acceleration concepts used today for ions are always based on electromagnetic standing waves that are formed in suitable resonators. Depending on the type of particle, energy range and other parameters, very different types of resonators are used; the following sections cover only some of them. Electrons can also be accelerated with standing waves above a few MeV. An advantageous alternative here, however, is a progressive wave, a traveling wave. The phase velocity of the traveling wave must be roughly equal to the particle speed. Therefore, this technique is only suitable when the particles are almost at the speed of light, so that their speed increases only very little. The development of high-frequency oscillators and power amplifiers from the 1940s, especially the klystron, was essential for these two acceleration techniques. The first larger linear accelerator with standing waves, for protons, was built in 1945/46 at the Lawrence Berkeley National Laboratory under the direction of Luis W. Alvarez; the frequency used was about 200 MHz. The first electron accelerator with traveling waves, operating at around 3 GHz, was developed a little later at Stanford University by W. W. Hansen and colleagues. Particles can absorb energy from the wave only at points where the electric field vector, and hence the force, points in the direction of acceleration.
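The longitudinal bunching described earlier under "Phase stability" can be made concrete with a small tracking toy model: a synchronous particle crosses every gap at the same phase on the rising side of the RF wave, while a particle that arrives late sees a larger voltage, gains more energy, and drifts back toward the synchronous phase. All numbers in the sketch below (frequency, gap voltage, synchronous phase, injection energy) are illustrative assumptions for a non-relativistic proton, not values from this article.

```python
# Toy longitudinal tracking sketch illustrating phase stability.
# Illustrative, assumed parameters; not a model of any specific machine.
import math

F_RF = 200e6                  # RF frequency, Hz
V_GAP = 100e3                 # peak gap voltage, V
PHI_S = math.radians(30.0)    # synchronous phase, on the rising side of the sine wave
MASS_EV = 938.272e6           # proton rest energy, eV (treated non-relativistically here)
C = 299_792_458.0

def velocity(kinetic_ev: float) -> float:
    """Non-relativistic speed in m/s for the given kinetic energy in eV."""
    return C * math.sqrt(2.0 * kinetic_ev / MASS_EV)

def track(n_gaps: int = 40, offset_deg: float = 15.0) -> None:
    """Track a synchronous particle and a late test particle through n_gaps gaps."""
    t_rf = 1.0 / F_RF
    e_sync = 1.0e6                              # 1 MeV injection energy
    e_test = e_sync
    t_sync = PHI_S / (2.0 * math.pi * F_RF)     # arrival times at the first gap
    t_test = t_sync + math.radians(offset_deg) / (2.0 * math.pi * F_RF)
    for gap in range(n_gaps):
        # Energy kick depends on the RF phase each particle actually sees.
        e_sync += V_GAP * math.sin(2.0 * math.pi * F_RF * t_sync)
        e_test += V_GAP * math.sin(2.0 * math.pi * F_RF * t_test)
        # The next gap is placed one full RF period downstream for the synchronous particle.
        d = velocity(e_sync) * t_rf
        t_sync += d / velocity(e_sync)
        t_test += d / velocity(e_test)
        phase_error = math.degrees(2.0 * math.pi * F_RF * (t_test - t_sync))
        if gap % 5 == 0:
            print(f"gap {gap:2d}: phase error {phase_error:+6.1f} deg")

if __name__ == "__main__":
    track()
```

Printing the phase error every few gaps shows it oscillating about zero rather than growing, which is the automatic longitudinal correction described in the phase stability section.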
Advantages
The linear accelerator could produce higher particle energies than the previous electrostatic particle accelerators (the Cockcroft–Walton accelerator and Van de Graaff generator) that were in use when it was invented. In these machines, the particles were only accelerated once by the applied voltage, so the particle energy in electron volts was equal to the accelerating voltage on the machine, which was limited to a few million volts by insulation breakdown. In the linac, the particles are accelerated multiple times by the applied voltage, so the particle energy is not limited by the accelerating voltage. High-power linacs are also being developed for the production of electrons at relativistic speeds, required since fast electrons traveling in an arc will lose energy through synchrotron radiation; this limits the maximum power that can be imparted to electrons in a synchrotron of given size. Linacs are also capable of prodigious output, producing a nearly continuous stream of particles, whereas a synchrotron will only periodically raise the particles to sufficient energy to merit a "shot" at the target. (The burst can be held or stored in the ring at energy to give the experimental electronics time to work, but the average output current is still limited.) The high density of the output makes the linac particularly attractive for use in loading storage ring facilities with particles in preparation for particle-to-particle collisions. The high mass output also makes the device practical for the production of antimatter particles, which are generally difficult to obtain, being only a small fraction of a target's collision products. These may then be stored and further used to study matter–antimatter annihilation.
Medical linacs
Linac-based radiation therapy for cancer treatment began with the first patient treated in 1953 in London, UK, at the Hammersmith Hospital, with an 8 MV machine built by Metropolitan-Vickers and installed in 1952, as the first dedicated medical linac. A short while later, in 1954, a 6 MV linac was installed at Stanford, USA, which began treatments in 1956. Medical linear accelerators accelerate electrons using a tuned-cavity waveguide, in which the RF power creates a standing wave. Some linacs have short, vertically mounted waveguides, while higher-energy machines tend to have a horizontal, longer waveguide and a bending magnet to turn the beam vertically towards the patient. Medical linacs use monoenergetic electron beams between 4 and 25 MeV, giving an X-ray output with a spectrum of energies up to and including the electron energy when the electrons are directed at a high-density (such as tungsten) target. The electrons or X-rays can be used to treat both benign and malignant disease. The linac produces a reliable, flexible and accurate radiation beam. The versatility of the linac is a potential advantage over cobalt therapy as a treatment tool. In addition, the device can simply be powered off when not in use; there is no source requiring heavy shielding – although the treatment room itself requires considerable shielding of the walls, doors, ceiling etc. to prevent the escape of scattered radiation. Prolonged use of high-powered (>18 MeV) machines can induce a significant amount of radiation within the metal parts of the head of the machine after power to the machine has been removed (i.e. they become an active source, and the necessary precautions must be observed).
In 2019, a Little Linac model kit, containing 82 building blocks, was developed for children undergoing radiotherapy treatment for cancer. The hope is that building the model will alleviate some of the stress experienced by the child before undergoing treatment by helping them to understand what the treatment entails. The kit was developed by Professor David Brettle of the Institute of Physics and Engineering in Medicine (IPEM), in collaboration with the manufacturer Best-Lock Ltd. The model can be seen at the Science Museum, London.
Application for medical isotope development
The expected shortages of Mo-99, and of the technetium-99m medical isotope obtained from it, have also drawn attention to linear accelerator technology as a means of producing Mo-99 from non-enriched uranium through neutron bombardment. This would enable the medical isotope industry to manufacture this crucial isotope by a sub-critical process. The aging facilities that still produce most Mo-99 from highly enriched uranium, for example the Chalk River Laboratories in Ontario, Canada, could be replaced by this new process. In this way, the sub-critical loading of soluble uranium salts in heavy water, with subsequent photo-neutron bombardment and extraction of the target product Mo-99, would be achieved.
Disadvantages
The device length limits the locations where one may be placed. A great number of driver devices and their associated power supplies are required, increasing the construction and maintenance expense of this portion. If the walls of the accelerating cavities are made of normally conducting material and the accelerating fields are large, the wall resistivity converts electric energy into heat quickly. On the other hand, superconductors also need constant cooling to keep them below their critical temperature, and the accelerating fields are limited by quenches. Therefore, high-energy accelerators such as SLAC, still the longest in the world (in its various generations), are run in short pulses, limiting the average current output and forcing the experimental detectors to handle data coming in short bursts.
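The pulsed-operation limitation mentioned in the last paragraph is essentially a duty-factor problem: the machine delivers beam for only a small fraction of each second, so the average current is far below the peak current during a pulse. The numbers in the back-of-envelope sketch below are illustrative assumptions, not the parameters of SLAC or any other specific machine.

```python
# Back-of-envelope sketch of the duty-factor limitation of a pulsed linac.
# All values are illustrative assumptions.

pulse_length_s = 2e-6       # beam/RF pulse length (a few microseconds)
repetition_rate_hz = 120    # pulses per second
peak_current_a = 0.05       # 50 mA peak current during the pulse

duty_factor = pulse_length_s * repetition_rate_hz
average_current_a = peak_current_a * duty_factor

print(f"duty factor     = {duty_factor:.2e}")
print(f"average current = {average_current_a * 1e6:.1f} microamps")
```

With these assumed numbers the duty factor is about 2×10⁻⁴, so a 50 mA peak current averages to only around 12 microamps, which is why detectors downstream must cope with data arriving in short bursts.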
Physical sciences
Devices
Physics