Columns: id (string, 2–8 chars) · url (string, 31–117 chars) · title (string, 1–71 chars) · text (string, 153–118k chars) · topic (4 classes) · section (string, 4–49 chars) · sublist (9 classes)
1951295
https://en.wikipedia.org/wiki/Blister%20beetle
Blister beetle
Blister beetles are beetles of the family Meloidae, so called for their defensive secretion of a blistering agent, cantharidin. About 7,500 species are known worldwide. Many are conspicuous and some are aposematically colored, announcing their toxicity to would-be predators. Description Blister beetles are hypermetamorphic, going through several larval stages, the first of which is typically a mobile triungulin. The larvae are insectivorous, mainly attacking bees, though a few feed on grasshopper eggs. While sometimes considered parasitoids, in general, the meloid larva apparently consumes the immature host along with its provisions, and can often survive on the provisions alone; thus it is not an obligatory parasitoid, but rather a facultative parasitoid, or simply a kleptoparasite. The adults sometimes feed on flowers and leaves of plants of such diverse families as the Amaranthaceae, Asteraceae, Fabaceae, and Solanaceae. Cantharidin, a poisonous chemical that causes blistering of the skin, is secreted as a defensive agent. It is used medically to remove warts and is collected for this purpose from species of the genera Mylabris and Lytta, especially Lytta vesicatoria, better known as "Spanish fly". Toxicity Cantharidin is the principal irritant in "Spanish fly", a folk medicine prepared from dried beetles in the family Meloidae. The largest genus, Epicauta, contains many species toxic to horses. A few beetles consumed in a single feeding of alfalfa hay may be lethal. In semiarid areas of the western United States, modern harvesting techniques may contribute to cantharidin content in harvested forage. The practice of hay conditioning, crushing the stalks to promote drying, also crushes any beetles present and causes the release of cantharidin into the fodder. Blister beetles are attracted to alfalfa and weeds during bloom. Reducing weeds and timing harvests before and after bloom are sound management practices. Using equipment without hay conditioners may reduce beetle mortality and allow them to escape before baling. Evolutionary history The family is thought to have begun diversifying during the Early Cretaceous. The oldest fossil of the group is a larva (triangulin) found phoretic on a schizopterid bug from the mid Cretaceous Burmese amber, dated to around 99 million years ago. 
Systematics Subfamily Eleticinae Tribe Derideini Anthicoxenus Deridea Iselma Iselmeletica Tribe Morphozonitini Ceriselma Morphozonitis Steniselma Tribe Eleticini Eletica Tribe Spasticini Eospasta Protomeloe Spastica Xenospasta Subfamily Meloinae Tribe Cerocomini Anisarthrocera Cerocoma Diaphorocera Rhampholyssa Rhampholyssodes Tribe Epicautini Denierella Epicauta Linsleya Psalydolytta Tribe Eupomphini Cordylospasta Cysteodemus Eupompha Megetra Phodaga Pleropasta Tegrodera Tribe Lyttini Acrolytta Afrolytta Alosimus Berberomeloe Cabalia Dictyolytta Eolydus Epispasta Lagorina Lydomorphus Lydulus Lydus Lytta Lyttolydulus Lyttonyx Megalytta Muzimes Oenas Parameloe Paroenas Physomeloe Prionotolytta Prolytta Pseudosybaris Sybaris Teratolytta Tetraolytta Trichomeloe Tribe Meloini Cyaneolytta Lyttomeloe Meloe Spastomeloe Spastonyx Tribe Mylabrini Actenodia Ceroctis Croscherichia Hycleus Lydoceras Mimesthes Mylabris Paractenodia Pseudabris Semenovilia Xanthabris Tribe Pyrotini Bokermannia Brasiliota Denierota Glaphyrolytta Lyttamorpha Picnoseus Pseudopyrota Pyrota Wagneronota Genera incertae sedis Australytta Calydus Gynapteryx Oreomeloe Pseudomeloe Subfamily Nemognathinae Tribe Horiini Cissites Horia Synhoria Tribe Nemognathini Cochliophorus Euzonitis Gnathium Gnathonemula Leptopalpus Megatrachelus Nemognatha Palaestra Palaestrida Pseudozonitis Rhyphonemognatha Stenodera Zonitis Zonitodema Zonitolytta Zonitomorpha Zonitoschema Tribe Sitarini Allendeselazaria Apalus Ctenopus Glasunovia Nyadatus Sitaris Sitarobrachys Stenoria Genera incertae sedis Hornia Onyctenus Sitaromorpha Tricrania Subfamily Tetraonycinae Tribe Tetraonycini Meloetyphlus Opiomeloe Tetraonyx
Biology and health sciences
Beetles (Coleoptera)
Animals
1951419
https://en.wikipedia.org/wiki/Critical%20point%20%28thermodynamics%29
Critical point (thermodynamics)
In thermodynamics, a critical point (or critical state) is the end point of a phase equilibrium curve. One example is the liquid–vapor critical point, the end point of the pressure–temperature curve that designates conditions under which a liquid and its vapor can coexist. At higher temperatures, the gas becomes supercritical and cannot be liquefied by pressure alone. At the critical point, defined by a critical temperature Tc and a critical pressure pc, phase boundaries vanish. Other examples include the liquid–liquid critical points in mixtures, and the ferromagnet–paramagnet transition (Curie temperature) in the absence of an external magnetic field. Liquid–vapor critical point Overview For simplicity and clarity, the generic notion of critical point is best introduced by discussing a specific example, the vapor–liquid critical point. This was the first critical point to be discovered, and it is still the best known and most studied one. The figure shows the schematic P–T diagram of a pure substance (as opposed to mixtures, which have additional state variables and richer phase diagrams, discussed below). The commonly known phases solid, liquid and vapor are separated by phase boundaries, i.e. pressure–temperature combinations where two phases can coexist. At the triple point, all three phases can coexist. However, the liquid–vapor boundary terminates in an endpoint at some critical temperature Tc and critical pressure pc. This is the critical point. The critical point of water occurs at 647.096 K (373.946 °C) and 22.064 MPa (217.75 atm). In the vicinity of the critical point, the physical properties of the liquid and the vapor change dramatically, with the two phases becoming ever more similar. For instance, liquid water under normal conditions is nearly incompressible, has a low thermal expansion coefficient, has a high dielectric constant, and is an excellent solvent for electrolytes. Near the critical point, all these properties change into the exact opposite: water becomes compressible, expandable, a poor dielectric, a bad solvent for electrolytes, and mixes more readily with nonpolar gases and organic molecules. At the critical point, only one phase exists. The heat of vaporization is zero. There is a stationary inflection point in the constant-temperature line (critical isotherm) on a PV diagram. This means that at the critical point: $\left(\frac{\partial p}{\partial V}\right)_T = 0$ and $\left(\frac{\partial^2 p}{\partial V^2}\right)_T = 0$. Above the critical point there exists a state of matter that is continuously connected with (can be transformed without phase transition into) both the liquid and the gaseous state. It is called supercritical fluid. The common textbook knowledge that all distinction between liquid and vapor disappears beyond the critical point has been challenged by Fisher and Widom, who identified a p–T line that separates states with different asymptotic statistical properties (Fisher–Widom line). Sometimes the critical point does not manifest in most thermodynamic or mechanical properties, but is "hidden" and reveals itself in the onset of inhomogeneities in elastic moduli, marked changes in the appearance and local properties of non-affine droplets, and a sudden enhancement in defect pair concentration. History The existence of a critical point was first discovered by Charles Cagniard de la Tour in 1822 and named by Dmitri Mendeleev in 1860 and Thomas Andrews in 1869. Cagniard showed that CO2 could be liquefied at 31 °C at a pressure of 73 atm, but not at a slightly higher temperature, even under pressures as high as 3000 atm.
Theory Solving the above conditions for the van der Waals equation of state, one can compute the critical point as $V_\mathrm{c} = 3b$, $T_\mathrm{c} = \frac{8a}{27Rb}$, $p_\mathrm{c} = \frac{a}{27b^2}$ (per mole of gas). However, the van der Waals equation, based on a mean-field theory, does not hold near the critical point. In particular, it predicts wrong scaling laws. To analyse properties of fluids near the critical point, reduced state variables are sometimes defined relative to the critical properties: $T_\mathrm{r} = T/T_\mathrm{c}$, $p_\mathrm{r} = p/p_\mathrm{c}$, $V_\mathrm{r} = V/V_\mathrm{c}$. The principle of corresponding states indicates that substances at equal reduced pressures and temperatures have equal reduced volumes. This relationship is approximately true for many substances, but becomes increasingly inaccurate for large values of $p_\mathrm{r}$. For some gases, there is an additional correction factor, called Newton's correction, added to the critical temperature and critical pressure calculated in this manner. These are empirically derived values and vary with the pressure range of interest. Table of liquid–vapor critical temperature and pressure for selected substances Mixtures: liquid–liquid critical point The liquid–liquid critical point of a solution, which occurs at the critical solution temperature, lies at the limit of the two-phase region of the phase diagram. In other words, it is the point at which an infinitesimal change in some thermodynamic variable (such as temperature or pressure) leads to separation of the mixture into two distinct liquid phases, as shown in the polymer–solvent phase diagram to the right. Two types of liquid–liquid critical points are the upper critical solution temperature (UCST), which is the hottest point at which cooling induces phase separation, and the lower critical solution temperature (LCST), which is the coldest point at which heating induces phase separation. Mathematical definition From a theoretical standpoint, the liquid–liquid critical point represents the temperature–concentration extremum of the spinodal curve (as can be seen in the figure to the right). Thus, the liquid–liquid critical point in a two-component system must satisfy two conditions: the condition of the spinodal curve (the second derivative of the free energy with respect to concentration must equal zero, $\partial^2 G/\partial x^2 = 0$), and the extremum condition (the third derivative of the free energy with respect to concentration must also equal zero, $\partial^3 G/\partial x^3 = 0$, or equivalently the derivative of the spinodal temperature with respect to concentration must equal zero).
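The van der Waals critical point quoted above can be recovered mechanically. The following is a minimal SymPy sketch, written for this text rather than taken from the article: the symbol names and the per-mole form of the equation are choices made here. It solves the two critical-isotherm conditions for V and T and substitutes back to obtain the critical pressure.

```python
# Sketch: locate the van der Waals critical point by solving
# (dp/dV)_T = 0 and (d^2p/dV^2)_T = 0 symbolically.
import sympy as sp

V, T, a, b, R = sp.symbols('V T a b R', positive=True)

# van der Waals equation of state for one mole: p(V, T)
p = R * T / (V - b) - a / V**2

# Critical point: stationary inflection point of the critical isotherm
sol = sp.solve([sp.diff(p, V), sp.diff(p, V, 2)], [V, T], dict=True)[0]

Vc, Tc = sol[V], sol[T]
pc = sp.simplify(p.subs({V: Vc, T: Tc}))

print(Vc)   # expected: 3*b
print(Tc)   # expected: 8*a/(27*R*b)
print(pc)   # expected: a/(27*b**2)
```

Running this reproduces $V_\mathrm{c} = 3b$, $T_\mathrm{c} = 8a/(27Rb)$ and $p_\mathrm{c} = a/(27b^2)$, matching the expressions given above.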
Physical sciences
Phase transitions
null
1951424
https://en.wikipedia.org/wiki/Critical%20point%20%28mathematics%29
Critical point (mathematics)
In mathematics, a critical point is the argument of a function where the function derivative is zero (or undefined, as specified below). The value of the function at a critical point is a critical value. More specifically, when dealing with functions of a real variable, a critical point, also known as a stationary point, is a point in the domain of the function where the function derivative is equal to zero (or where the function is not differentiable). Similarly, when dealing with complex variables, a critical point is a point in the function's domain where its derivative is equal to zero (or the function is not holomorphic). Likewise, for a function of several real variables, a critical point is a value in its domain where the gradient norm is equal to zero (or undefined). This sort of definition extends to differentiable maps between $\mathbb{R}^m$ and $\mathbb{R}^n$, a critical point being, in this case, a point where the rank of the Jacobian matrix is not maximal. It extends further to differentiable maps between differentiable manifolds, as the points where the rank of the Jacobian matrix decreases. In this case, critical points are also called bifurcation points. In particular, if $C$ is a plane curve defined by an implicit equation $f(x, y) = 0$, the critical points of the projection onto the $x$-axis, parallel to the $y$-axis, are the points where the tangent to $C$ is parallel to the $y$-axis, that is the points where $\frac{\partial f}{\partial y}(x, y) = 0$. In other words, the critical points are those where the implicit function theorem does not apply. Critical point of a single variable function A critical point of a function of a single real variable, $f(x)$, is a value $x_0$ in the domain of $f$ where $f$ is not differentiable or its derivative is 0 (i.e. $f'(x_0) = 0$). A critical value is the image under $f$ of a critical point. These concepts may be visualized through the graph of $f$: at a critical point, the graph has a horizontal tangent if one can be assigned at all. Notice how, for a differentiable function, critical point is the same as stationary point. Although it is easily visualized on the graph (which is a curve), the notion of critical point of a function must not be confused with the notion of critical point, in some direction, of a curve (see below for a detailed definition). If $g(x, y)$ is a differentiable function of two variables, then $g(x, y) = 0$ is the implicit equation of a curve. A critical point of such a curve, for the projection parallel to the $y$-axis (the map $(x, y) \to x$), is a point of the curve where $\frac{\partial g}{\partial y}(x, y) = 0$. This means that the tangent of the curve is parallel to the $y$-axis, and that, at this point, $g$ does not define an implicit function from $x$ to $y$ (see implicit function theorem). If $(x_0, y_0)$ is such a critical point, then $x_0$ is the corresponding critical value. Such a critical point is also called a bifurcation point, as, generally, when $x$ varies, there are two branches of the curve on a side of $x_0$ and zero on the other side.
It follows from these definitions that a differentiable function $f(x)$ has a critical point $x_0$ with critical value $y_0$ if and only if $(x_0, y_0)$ is a critical point of its graph for the projection parallel to the $x$-axis, with the same critical value $y_0$. If $f$ is not differentiable at $x_0$ due to the tangent becoming parallel to the $y$-axis, then $x_0$ is again a critical point of $f$, but now $(x_0, y_0)$ is a critical point of its graph for the projection parallel to the $y$-axis. For example, the critical points of the unit circle of equation $x^2 + y^2 - 1 = 0$ are $(0, 1)$ and $(0, -1)$ for the projection parallel to the $x$-axis, and $(1, 0)$ and $(-1, 0)$ for the direction parallel to the $y$-axis. If one considers the upper half circle as the graph of the function $f(x) = \sqrt{1 - x^2}$, then $x = 0$ is a critical point with critical value 1 due to the derivative being equal to 0, and $x = -1$ and $x = 1$ are critical points with critical value 0 due to the derivative being undefined. Examples The function $f(x) = x^2 + 2x + 1$ is differentiable everywhere, with the derivative $f'(x) = 2x + 2$. This function has a unique critical point −1, because it is the unique number $x_0$ for which $2x_0 + 2 = 0$. This point is a global minimum of $f$. The corresponding critical value is $f(-1) = 0$. The graph of $f$ is a concave up parabola, the critical point is the abscissa of the vertex, where the tangent line is horizontal, and the critical value is the ordinate of the vertex and may be represented by the intersection of this tangent line and the $y$-axis. The function $f(x) = x^{2/3}$ is defined for all $x$ and differentiable for $x \neq 0$, with the derivative $f'(x) = \tfrac{2}{3}x^{-1/3}$. Since $f$ is not differentiable at $x = 0$ and $f'(x) \neq 0$ otherwise, it is the unique critical point. The graph of the function has a cusp at this point with vertical tangent. The corresponding critical value is $f(0) = 0$. The absolute value function $f(x) = |x|$ is differentiable everywhere except at the critical point $x = 0$, where it has a global minimum point, with critical value 0. The function $f(x) = 1/x$ has no critical points. The point $x = 0$ is not a critical point because it is not included in the function's domain. Location of critical points By the Gauss–Lucas theorem, all of a polynomial function's critical points in the complex plane are within the convex hull of the roots of the function. Thus for a polynomial function with only real roots, all critical points are real and are between the greatest and smallest roots. Sendov's conjecture asserts that, if all of a polynomial's roots lie in the unit disk in the complex plane, then there is at least one critical point within unit distance of any given root. Critical points of an implicit curve Critical points play an important role in the study of plane curves defined by implicit equations, in particular for sketching them and determining their topology. The notion of critical point that is used in this section may seem different from that of the previous section. In fact it is the specialization to a simple case of the general notion of critical point given below. Thus, we consider a curve $C$ defined by an implicit equation $f(x, y) = 0$, where $f$ is a differentiable function of two variables, commonly a bivariate polynomial. The points of the curve are the points of the Euclidean plane whose Cartesian coordinates satisfy the equation. There are two standard projections $\pi_y$ and $\pi_x$, defined by $\pi_y(x, y) = x$ and $\pi_x(x, y) = y$, that map the curve onto the coordinate axes. They are called the projection parallel to the y-axis and the projection parallel to the x-axis, respectively. A point of $C$ is critical for $\pi_y$, if the tangent to $C$ exists and is parallel to the $y$-axis. In that case, the images by $\pi_y$ of the critical point and of the tangent are the same point of the $x$-axis, called the critical value.
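To make the single-variable computation concrete, here is a small SymPy sketch written for this text (it is illustrative, not part of the article): it finds the critical point and critical value of the polynomial example above, and then the critical points of the unit circle for the projection parallel to the y-axis.

```python
# Sketch: critical points and critical values of a single-variable function,
# and critical points of an implicit curve, computed symbolically.
import sympy as sp

x = sp.Symbol('x', real=True)
f = x**2 + 2*x + 1

critical_points = sp.solve(sp.Eq(sp.diff(f, x), 0), x)    # where f'(x) = 0
critical_values = [f.subs(x, c) for c in critical_points]

print(critical_points)   # [-1]
print(critical_values)   # [0]

# Unit circle x**2 + y**2 - 1 = 0: critical points for the projection parallel
# to the y-axis are the solutions of g = 0 together with dg/dy = 0.
y = sp.Symbol('y', real=True)
g = x**2 + y**2 - 1
print(sp.solve([g, sp.diff(g, y)], [x, y]))   # [(-1, 0), (1, 0)]
```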
Thus a point of $C$ is critical for $\pi_y$ if its coordinates are a solution of the system of equations $f(x, y) = 0$ and $\frac{\partial f}{\partial y}(x, y) = 0$. This implies that this definition is a special case of the general definition of a critical point, which is given below. The definition of a critical point for $\pi_x$ is similar. If $C$ is the graph of a function $y = g(x)$, then $(x, y)$ is critical for $\pi_x$ if and only if $x$ is a critical point of $g$, and the critical values are the same. Some authors define the critical points of $C$ as the points that are critical for either $\pi_x$ or $\pi_y$, although they depend not only on $C$, but also on the choice of the coordinate axes. It depends also on the authors if the singular points are considered as critical points. In fact the singular points are the points that satisfy $f(x, y) = \frac{\partial f}{\partial x}(x, y) = \frac{\partial f}{\partial y}(x, y) = 0$, and are thus solutions of either system of equations characterizing the critical points. With this more general definition, the critical points for $\pi_y$ are exactly the points where the implicit function theorem does not apply. Use of the discriminant When the curve $C$ is algebraic, that is when it is defined by a bivariate polynomial $f$, then the discriminant is a useful tool to compute the critical points. Here we consider only the projection $\pi_y$; similar results apply to $\pi_x$ by exchanging $x$ and $y$. Let $\operatorname{Disc}_y(f)$ be the discriminant of $f$ viewed as a polynomial in $y$ with coefficients that are polynomials in $x$. This discriminant is thus a polynomial in $x$ which has the critical values of $\pi_y$ among its roots. More precisely, a simple root of $\operatorname{Disc}_y(f)$ is either a critical value of $\pi_y$ such that the corresponding critical point is neither a singular point nor an inflection point, or the $x$-coordinate of an asymptote which is parallel to the $y$-axis and is tangent "at infinity" to an inflection point (inflexion asymptote). A multiple root of the discriminant corresponds either to several critical points or inflection asymptotes sharing the same critical value, or to a critical point which is also an inflection point, or to a singular point. Several variables For a function of several real variables, a point $P$ (that is, a set of values for the input variables, which is viewed as a point in $\mathbb{R}^n$) is critical if it is a point where the gradient is zero or undefined. The critical values are the values of the function at the critical points. A critical point (where the function is differentiable) may be either a local maximum, a local minimum or a saddle point. If the function is at least twice continuously differentiable the different cases may be distinguished by considering the eigenvalues of the Hessian matrix of second derivatives. A critical point at which the Hessian matrix is nonsingular is said to be nondegenerate, and the signs of the eigenvalues of the Hessian determine the local behavior of the function. In the case of a function of a single variable, the Hessian is simply the second derivative, viewed as a 1×1-matrix, which is nonsingular if and only if it is not zero. In this case, a non-degenerate critical point is a local maximum or a local minimum, depending on the sign of the second derivative, which is positive for a local minimum and negative for a local maximum. If the second derivative is null, the critical point is generally an inflection point, but may also be an undulation point, which may be a local minimum or a local maximum. For a function of $n$ variables, the number of negative eigenvalues of the Hessian matrix at a critical point is called the index of the critical point.
A non-degenerate critical point is a local maximum if and only if the index is $n$, or, equivalently, if the Hessian matrix is negative definite; it is a local minimum if the index is zero, or, equivalently, if the Hessian matrix is positive definite. For the other values of the index, a non-degenerate critical point is a saddle point, that is a point which is a maximum in some directions and a minimum in others. Application to optimization By Fermat's theorem, all local maxima and minima of a continuous function occur at critical points. Therefore, to find the local maxima and minima of a differentiable function, it suffices, theoretically, to compute the zeros of the gradient and the eigenvalues of the Hessian matrix at these zeros. This requires the solution of a system of equations, which can be a difficult task. The usual numerical algorithms are much more efficient for finding local extrema, but cannot certify that all extrema have been found. In particular, in global optimization, these methods cannot certify that the output is really the global optimum. When the function to minimize is a multivariate polynomial, the critical points and the critical values are solutions of a system of polynomial equations, and modern algorithms for solving such systems provide competitive certified methods for finding the global minimum. Critical point of a differentiable map Given a differentiable map $f : \mathbb{R}^m \to \mathbb{R}^n$, the critical points of $f$ are the points of $\mathbb{R}^m$ where the rank of the Jacobian matrix of $f$ is not maximal. The image of a critical point under $f$ is called a critical value. A point in the complement of the set of critical values is called a regular value. Sard's theorem states that the set of critical values of a smooth map has measure zero. Some authors give a slightly different definition: a critical point of $f$ is a point of $\mathbb{R}^m$ where the rank of the Jacobian matrix of $f$ is less than $n$. With this convention, all points are critical when $m < n$. These definitions extend to differential maps between differentiable manifolds in the following way. Let $f : M \to N$ be a differential map between two manifolds $M$ and $N$ of respective dimensions $m$ and $n$. In the neighborhood of a point $p$ of $M$ and of $f(p)$, charts are diffeomorphisms $\varphi : M \to \mathbb{R}^m$ and $\psi : N \to \mathbb{R}^n$. The point $p$ is critical for $f$ if $\varphi(p)$ is critical for $\psi \circ f \circ \varphi^{-1}$. This definition does not depend on the choice of the charts because the transition maps being diffeomorphisms, their Jacobian matrices are invertible and multiplying by them does not modify the rank of the Jacobian matrix of $\psi \circ f \circ \varphi^{-1}$. If $M$ is a Hilbert manifold (not necessarily finite dimensional) and $f$ is a real-valued function then we say that $p$ is a critical point of $f$ if $f$ is not a submersion at $p$. Application to topology Critical points are fundamental for studying the topology of manifolds and real algebraic varieties. In particular, they are the basic tool for Morse theory and catastrophe theory. The link between critical points and topology already appears at a lower level of abstraction. For example, let $V$ be a sub-manifold of $\mathbb{R}^n$, and $P$ be a point outside $V$. The square of the distance to $P$ of a point of $V$ is a differential map such that each connected component of $V$ contains at least one critical point, where the distance is minimal. It follows that the number of connected components of $V$ is bounded above by the number of critical points. In the case of real algebraic varieties, this observation associated with Bézout's theorem allows us to bound the number of connected components by a function of the degrees of the polynomials that define the variety.
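As a sketch of how the Hessian test described above can be carried out, the following SymPy fragment classifies the critical points of an example function by counting negative Hessian eigenvalues (the index). The function $x^2 - y^2$ is a choice made here purely for illustration.

```python
# Sketch: classify non-degenerate critical points of a function of several
# variables by the signs of the Hessian eigenvalues (index = number of
# negative eigenvalues).
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 - y**2                                   # example: saddle at the origin

grad = [sp.diff(f, v) for v in (x, y)]
crit = sp.solve(grad, [x, y], dict=True)          # [{x: 0, y: 0}]

H = sp.hessian(f, (x, y))
for point in crit:
    eig = [ev for ev, mult in H.subs(point).eigenvals().items() for _ in range(mult)]
    index = sum(1 for ev in eig if ev < 0)
    n = len(eig)
    kind = ('local minimum' if index == 0
            else 'local maximum' if index == n
            else 'saddle point')
    print(point, eig, kind)                       # eigenvalues 2 and -2 -> saddle point
```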
Mathematics
Functions: General
null
18821046
https://en.wikipedia.org/wiki/Chickenpox
Chickenpox
Chickenpox, also known as varicella ( ), is a highly contagious disease caused by varicella zoster virus (VZV), a member of the herpesvirus family. The disease results in a characteristic skin rash that forms small, itchy blisters, which eventually scab over. It usually starts on the chest, back, and face. It then spreads to the rest of the body. The rash and other symptoms, such as fever, tiredness, and headaches, usually last five to seven days. Complications may occasionally include pneumonia, inflammation of the brain, and bacterial skin infections. The disease is usually more severe in adults than in children. Chickenpox is an airborne disease which easily spreads via human-to-human transmission, typically through the coughs and sneezes of an infected person. The incubation period is 10–21 days, after which the characteristic rash appears. It may be spread from one to two days before the rash appears until all lesions have crusted over. It may also spread through contact with the blisters. Those with shingles may spread chickenpox to those who are not immune through contact with the blisters. The disease can usually be diagnosed based on the presenting symptom; however, in unusual cases it may be confirmed by polymerase chain reaction (PCR) testing of the blister fluid or scabs. Testing for antibodies may be done to determine if a person is immune. People usually only get chickenpox once. Although reinfections by the virus occur, these reinfections usually do not cause any symptoms. Since its introduction in 1995 in the United States, the varicella vaccine has resulted in a decrease in the number of cases and complications from the disease. It protects about 70–90 percent of people from disease with a greater benefit for severe disease. Routine immunization of children is recommended in many countries. Immunization within three days of exposure may improve outcomes in children. Treatment of those infected may include calamine lotion to help with itching, keeping the fingernails short to decrease injury from scratching, and the use of paracetamol (acetaminophen) to help with fevers. For those at increased risk of complications, antiviral medication such as aciclovir is recommended. Chickenpox occurs in all parts of the world. In 2013, there were 140 million cases of chickenpox and shingles worldwide. Before routine immunization the number of cases occurring each year was similar to the number of people born. Since immunization the number of infections in the United States has decreased nearly 90%. In 2015 chickenpox resulted in 6,400 deaths globally – down from 8,900 in 1990. Death occurs in about 1 per 60,000 cases. Chickenpox was not separated from smallpox until the late 19th century. In 1888 its connection to shingles was determined. The first documented use of the term chicken pox was in 1658. Various explanations have been suggested for the use of "chicken" in the name, one being the relative mildness of the disease. Signs and symptoms The early (prodromal) symptoms in adolescents and adults are nausea, loss of appetite, aching muscles, and headache. This is followed by the characteristic rash or oral sores, malaise, and a low-grade fever that signals the presence of the disease. Oral manifestations of the disease (enanthem) not uncommonly may precede the external rash (exanthem). In children, the illness is not usually preceded by prodromal symptoms, and the first sign is the rash or the spots in the oral cavity. 
The rash begins as small red dots on the face, scalp, torso, upper arms, and legs; progressing over 10–12 hours to small bumps, blisters, and pustules; followed by umbilication and the formation of scabs. At the blister stage, intense itching is usually present. Blisters may also occur on the palms, soles, and genital area. Commonly, visible evidence of the disease develops in the oral cavity and tonsil areas in the form of small ulcers which can be painful, itchy, or both; this enanthem (internal rash) can precede the exanthem (external rash) by 1 to 3 days or can be concurrent. These symptoms of chickenpox appear 10 to 21 days after exposure to a contagious person. Adults may have a more widespread rash and longer fever, and they are more likely to experience complications, such as varicella pneumonia. Because watery nasal discharge containing live virus usually precedes both exanthem (external rash) and enanthem (oral ulcers) by one to two days, the infected person becomes contagious one to two days before recognition of the disease. Contagiousness persists until all vesicular lesions have become dry crusts (scabs), which usually entails four or five days, by which time nasal shedding of live virus ceases. The condition usually resolves by itself within a week or two. The rash may, however, last for up to one month. Chickenpox is rarely fatal, although it is generally more severe in adult men than in women or children. Non-immune pregnant women and those with a suppressed immune system are at highest risk of serious complications. Arterial ischemic stroke (AIS) associated with chickenpox in the previous year accounts for nearly one-third of childhood AIS. The most common late complication of chickenpox is shingles (herpes zoster), caused by reactivation of the varicella zoster virus decades after the initial, often childhood, chickenpox infection. Pregnancy and neonates During pregnancy the dangers to the fetus associated with a primary VZV infection are greater in the first six months. In the third trimester, the mother is more likely to have severe symptoms. For pregnant women, antibodies produced as a result of immunization or previous infection are transferred via the placenta to the fetus. Varicella infection in pregnant women could lead to spread via the placenta and infection of the fetus. If infection occurs during the first 28 weeks of gestation, this can lead to fetal varicella syndrome (also known as congenital varicella syndrome). Effects on the fetus can range in severity from underdeveloped toes and fingers to severe anal and bladder malformation. Possible problems include: Damage to the brain: encephalitis, microcephaly, hydrocephaly, aplasia of brain Damage to the eye: optic stalk, optic cup, and lens vesicles, microphthalmia, cataracts, chorioretinitis, optic atrophy Other neurological disorder: damage to cervical and lumbosacral spinal cord, motor/sensory deficits, absent deep tendon reflexes, anisocoria/Horner's syndrome Damage to body: hypoplasia of upper/lower extremities, anal and bladder sphincter dysfunction Skin disorders: (cicatricial) skin lesions, hypopigmentation Infection late in gestation or immediately following birth is referred to as "neonatal varicella". Maternal infection is associated with premature delivery. The risk of the baby developing the disease is greatest following exposure to infection in the period 7 days before delivery and up to 8 days following the birth. 
The baby may also be exposed to the virus via infectious siblings or other contacts, but this is of less concern if the mother is immune. Newborns who develop symptoms are at a high risk of pneumonia and other serious complications of the disease. Pathophysiology Exposure to VZV in a healthy child initiates the production of host immunoglobulin G (IgG), immunoglobulin M (IgM), and immunoglobulin A (IgA) antibodies; IgG antibodies persist for life and confer immunity. Cell-mediated immune responses are also important in limiting the scope and the duration of primary varicella infection. After primary infection, VZV is hypothesized to spread from mucosal and epidermal lesions to local sensory nerves. VZV then remains latent in the dorsal ganglion cells of the sensory nerves. Reactivation of VZV results in the clinically distinct syndrome of herpes zoster (i.e., shingles), postherpetic neuralgia, and sometimes Ramsay Hunt syndrome type II. Varicella zoster can affect the arteries in the neck and head, producing stroke, either during childhood, or after a latency period of many years. Shingles After a chickenpox infection, the virus remains dormant in the body's nerve tissues for about 50 years. This, however, does not mean that VZV cannot be contracted later in life. The immune system usually keeps the virus at bay, but it can still manifest itself at any given age causing a different form of the viral infection called shingles (also known as herpes zoster). Since the efficacy of the human immune system decreases with age, the United States Advisory Committee on Immunization Practices (ACIP) suggests that every adult over the age of 50 years get the herpes zoster vaccine. Shingles affects one in five adults infected with chickenpox as children, especially those who are immune-suppressed, particularly from cancer, HIV, or other conditions. Stress can bring on shingles as well, although scientists are still researching the connection. Adults over the age of 60 who had chickenpox but not shingles are the most prone age demographic. Diagnosis The diagnosis of chickenpox is primarily based on the signs and symptoms, with typical early symptoms followed by a characteristic rash. Confirmation of the diagnosis is by examination of the fluid within the vesicles of the rash, or by testing blood for evidence of an acute immunologic response. Vesicular fluid can be examined with a Tzanck smear, or by testing for direct fluorescent antibody. The fluid can also be "cultured", whereby attempts are made to grow the virus from a fluid sample. Blood tests can be used to identify a response to acute infection (IgM) or previous infection and subsequent immunity (IgG). Prenatal diagnosis of fetal varicella infection can be performed using ultrasound, though a delay of 5 weeks following primary maternal infection is advised. A PCR (DNA) test of the mother's amniotic fluid can also be performed, though the risk of spontaneous abortion due to the amniocentesis procedure is higher than the risk of the baby's developing fetal varicella syndrome. Prevention Hygiene measures The spread of chickenpox can be prevented by isolating affected individuals. Contagion is by exposure to respiratory droplets, or direct contact with lesions, within a period lasting from three days before the onset of the rash, to four days after the onset of the rash.<ref>, edition (Elsevier), p.</ref> The chickenpox virus is susceptible to disinfectants, notably chlorine bleach (i.e., sodium hypochlorite). 
Like all enveloped viruses, it is sensitive to drying, heat and detergents. Vaccine Chickenpox can be prevented by vaccination. The side effects are usually mild, such as some pain or swelling at the injection site. A live attenuated varicella vaccine, the Oka strain, was developed by Michiaki Takahashi and his colleagues in Japan in the early 1970s. In 1995, Merck & Co. licensed the "Oka" strain of the varicella virus in the United States, and Maurice Hilleman's team at Merck invented a varicella vaccine in the same year. The varicella vaccine is recommended in many countries. Some countries require the varicella vaccination or an exemption before entering elementary school. A second dose is recommended five years after the initial immunization. A vaccinated person is likely to have a milder case of chickenpox if they become infected. Immunization within three days following household contact reduces infection rates and severity in children. Being exposed to chickenpox as an adult (for example, through contact with infected children) may boost immunity to shingles. Therefore, it was thought that when the majority of children were vaccinated against chickenpox, adults might lose this natural boost, so immunity would drop and more shingles cases would occur. On the other hand, current observations suggest that exposure to children with varicella is not a critical factor in the maintenance of immunity. Multiple subclinical reactivations of varicella-zoster virus may occur spontaneously and, despite not causing clinical disease, may still provide an endogenous boost to immunity against zoster. The vaccine is part of the routine immunization schedule in the US. Some European countries include it as part of universal vaccinations in children, but not all countries provide the vaccine. In the UK as of 2014, the vaccine is only recommended in people who are particularly vulnerable to chickenpox. This is to keep the virus in circulation, thereby exposing the population to the virus at an early age when it is less harmful, and to reduce the occurrence of shingles through repeated exposure to the virus later in life. In November 2023, the UK Joint Committee on Vaccination and Immunisation recommended all children be given the vaccine at ages 12 months and 18 months; however, this has not yet been implemented. In populations that have not been immunized or if immunity is questionable, a clinician may order an enzyme immunoassay. An immunoassay measures the levels of antibodies against the virus that give immunity to a person. If the levels of antibodies are low (low titer) or questionable, reimmunization may be done. Treatment Treatment mainly consists of easing the symptoms. As a protective measure, people are usually required to stay at home while they are infectious to avoid spreading the disease to others. Cutting the fingernails short or wearing gloves may prevent scratching and minimize the risk of secondary infections. Although there have been no formal clinical studies evaluating the effectiveness of topical application of calamine lotion (a topical barrier preparation containing zinc oxide, and one of the most commonly used interventions), it has an excellent safety profile. Maintaining good hygiene and daily cleaning of skin with warm water can help to avoid secondary bacterial infection; scratching may increase the risk of secondary infection. Paracetamol (acetaminophen) but not aspirin may be used to reduce fever. 
Aspirin use by someone with chickenpox may cause serious, sometimes fatal disease of the liver and brain, Reye syndrome. People at risk of developing severe complications who have had significant exposure to the virus may be given intra-muscular varicella zoster immune globulin (VZIG), a preparation containing high titres of antibodies to varicella zoster virus, to ward off the disease. Antivirals are sometimes used. Children If aciclovir by mouth is started within 24 hours of rash onset, it decreases symptoms by one day but does not affect complication rates. Use of aciclovir, therefore, is not currently recommended for individuals with normal immune function. Children younger than 12 years old and older than one month are not meant to receive antiviral drugs unless they have another medical condition that puts them at risk of developing complications. Treatment of chickenpox in children is aimed at symptoms while the immune system deals with the virus. With children younger than 12 years, cutting fingernails and keeping them clean is an important part of treatment as they are more likely to scratch their blisters more deeply than adults. Aspirin is highly contraindicated in children younger than 16 years, as it has been related to Reye syndrome. Adults Infection in otherwise healthy adults tends to be more severe. Treatment with antiviral drugs (e.g. aciclovir or valaciclovir) is generally advised, as long as it is started within 24–48 hours from rash onset. Remedies to ease the symptoms of chickenpox in adults are generally the same as those used for children. Adults are more often prescribed antiviral medication, as it is effective in reducing the severity of the condition and the likelihood of developing complications. Adults are advised to increase water intake to reduce dehydration and relieve headaches. Painkillers such as paracetamol (acetaminophen) are recommended, as they are effective in relieving itching and other symptoms such as fever or pain. Antihistamines relieve itching and may be used in cases where the itching prevents sleep because they also act as a sedative. As with children, antiviral medication is considered more useful for those adults who are more prone to develop complications. These include pregnant women or people who have a weakened immune system. Prognosis The duration of the visible blistering caused by varicella zoster virus varies in children usually from four to seven days, and the appearance of new blisters begins to subside after the fifth day. Chickenpox infection is milder in young children, and symptomatic treatment, with sodium bicarbonate baths or antihistamine medication may ease itching. In adults, the disease is more severe, though the incidence is much less common. Infection in adults is associated with greater morbidity and mortality due to pneumonia (either direct viral pneumonia or secondary bacterial pneumonia), bronchitis (either viral bronchitis or secondary bacterial bronchitis), hepatitis, and encephalitis. In particular, up to 10% of pregnant women with chickenpox develop pneumonia, the severity of which increases with onset later in gestation. In England and Wales, 75% of deaths due to chickenpox are in adults. Inflammation of the brain, encephalitis, can occur in immunocompromised individuals, although the risk is higher with herpes zoster. Necrotizing fasciitis is also a rare complication. Varicella can be lethal to individuals with impaired immunity. 
The number of people in this high-risk group has increased, due to the HIV epidemic and the increased use of immunosuppressive therapies. Varicella is a particular problem in hospitals when there are patients with immune systems weakened by drugs (e.g., high-dose steroids) or HIV. Secondary bacterial infection of skin lesions, manifesting as impetigo, cellulitis, and erysipelas, is the most common complication in healthy children. Disseminated primary varicella infection usually seen in the immunocompromised may have high morbidity. Ninety percent of cases of varicella pneumonia occur in the adult population. Rarer complications of disseminated chickenpox include myocarditis, hepatitis, and glomerulonephritis. Hemorrhagic complications are more common in the immunocompromised or immunosuppressed populations, although healthy children and adults have been affected. Five major clinical syndromes have been described: febrile purpura, malignant chickenpox with purpura, postinfectious purpura, purpura fulminans, and anaphylactoid purpura. These syndromes have variable courses, with febrile purpura being the most benign of the syndromes and having an uncomplicated outcome. In contrast, malignant chickenpox with purpura is a grave clinical condition that has a mortality rate of greater than 70%. The cause of these hemorrhagic chickenpox syndromes is not known. Epidemiology Primary varicella occurs in all countries worldwide. In 2015 chickenpox resulted in 6,400 deaths globally – down from 8,900 in 1990. There were 7,000 deaths in 2013. Varicella is highly transmissible, with an infection rate of 90% in close contacts. In temperate countries, chickenpox is primarily a disease of children, with most cases occurring during the winter and spring, most likely due to school contact. In such countries it is one of the classic diseases of childhood, with most cases occurring in children up to age 15; most people become infected before adulthood, and 10% of young adults remain susceptible. In the United States, a temperate country, the Centers for Disease Control and Prevention (CDC) do not require state health departments to report infections of chickenpox, and only 31 states volunteered this information . A 2013 study conducted by the social media disease surveillance tool called Sickweather used anecdotal reports of chickenpox infections on social media systems Facebook and Twitter to measure and rank states with the most infections per capita, with Maryland, Tennessee and Illinois in the top three. In the tropics, chickenpox often occurs in older people and may cause more serious disease. In adults, the pockmarks are darker and the scars more prominent than in children. Society and culture Etymology How the term chickenpox originated is not clear but it may be due to it being a relatively mild disease. It has been said to be derived from chickpeas, based on resemblance of the vesicles to chickpeas, or to come from the rash resembling chicken pecks. Other suggestions include the designation chicken for a child (i.e., literally 'child pox'), a corruption of itching-pox'', or the idea that the disease may have originated in chickens. Samuel Johnson explained the designation as "from its being of no very great danger". Intentional exposure Because chickenpox is usually more severe in adults than it is in children, some parents deliberately expose their children to the virus, for example by taking them to "chickenpox parties". 
Doctors say that children are safer getting the vaccine, which is a weakened form of the virus, than getting the disease, which can be fatal or lead to shingles later in life. Repeated exposure to chickenpox may protect against zoster. Other animals Humans are the only known species that the disease affects naturally. However, chickenpox has been caused in animals, including chimpanzees and gorillas. Research Sorivudine, a nucleoside analog, has been reported to be effective in the treatment of primary varicella in healthy adults (case reports only), but large-scale clinical trials are still needed to demonstrate its efficacy. There was speculation in 2005 that continuous dosing of aciclovir by mouth for a period of time could eradicate VZV from the host, although further trials were required to discern whether eradication was actually viable.
Biology and health sciences
Infectious disease
null
21091725
https://en.wikipedia.org/wiki/Insulin%20%28medication%29
Insulin (medication)
As a medication, insulin is any pharmaceutical preparation of the protein hormone insulin that is used to treat high blood glucose. Such conditions include type 1 diabetes, type 2 diabetes, gestational diabetes, and complications of diabetes such as diabetic ketoacidosis and hyperosmolar hyperglycemic states. Insulin is also used along with glucose to treat hyperkalemia (high blood potassium levels). Typically it is given by injection under the skin, but some forms may also be used by injection into a vein or muscle. There are various types of insulin, suitable for various time spans. The types are often all called insulin in the broad sense, although in a more precise sense, insulin is identical to the naturally occurring molecule whereas insulin analogues have slightly different molecules that allow for modified time of action. It is on the World Health Organization's List of Essential Medicines. In 2022, it was the 192nd most commonly prescribed medication in the United States, with more than 2million prescriptions. Insulin can be made from the pancreas of pigs or cows. Human versions can be made either by modifying pig versions, or recombinant technology using mainly E. coli or Saccharomyces cerevisiae. It comes in three main types: short–acting (such as regular insulin), intermediate-acting (such as neutral protamine Hagedorn (NPH) insulin), and longer-acting (such as insulin glargine). Medical uses Insulin is used to treat a number of diseases including diabetes and its acute complications such as diabetic ketoacidosis and hyperosmolar hyperglycemic states. It is also used along with glucose to treat high blood potassium levels. Use during pregnancy is relatively safe for the baby. Insulin was formerly used in a psychiatric treatment called insulin shock therapy. Side effects Some side effects are hypoglycemia (low blood sugar), hypokalemia (low blood potassium), and allergic reactions. Allergy to insulin affected about 2% of people, of which most reactions are not due to the insulin itself but to preservatives added to insulin such as zinc, protamine, and meta-cresol. Most reactions are Type I hypersensitivity reactions and rarely cause anaphylaxis. A suspected allergy to insulin can be confirmed by skin prick testing, patch testing and occasionally skin biopsy. First line therapy against insulin hypersensitivity reactions include symptomatic therapy with antihistamines. The affected persons are then switched to a preparation that does not contain the specific agent they are reacting to or undergo desensitization. Cutaneous adverse effects Other side effects may include pain or skin changes at the sites of injection. Repeated subcutaneous injection without site rotation can lead to lipohypertrophy and amyloidomas, which manifest as firm palpable nodules under the skin. Effects of early routine use Early initiation of insulin therapy for the long-term management of conditions such as type 2 diabetes would suggest that the use of insulin has unique benefits, however, with insulin therapy, there is a need to gradually raise the dose and the complexity of the regimen, as well as the likelihood of developing severe hypoglycemia which is why many people and their doctors are hesitant to begin insulin therapy in the early stage of disease management. 
Many obstacles associated with health behaviors also prevent people with type 2 diabetes mellitus from starting or intensifying their insulin treatment, including lack of motivation, lack of familiarity with or experience with treatments, and time restraints causing people to have high glycemic loads for extended periods of time prior to starting insulin therapy. This is why managing the side effects associated with long-term early routine use of insulin for type 2 diabetes mellitus can prove to be a therapeutic and behavioral challenge. Principles Insulin is an endogenous hormone, which is produced by the pancreas. The insulin protein has been highly conserved across evolutionary time, and is present in both mammals and invertebrates. The insulin/insulin-like growth factor signalling pathway (IIS) has been extensively studied in species including nematode worms (e.g.C. elegans), flies (Drosophila melanogaster) and mice (Mus musculus). Its mechanisms of action are highly similar across species. Both type 1 diabetes and type 2 diabetes are marked by a loss of pancreatic function, though to differing degrees. People who are affected with diabetes are referred to as diabetics. Many diabetics require an exogenous source of insulin to keep their blood sugar levels within a safe target range. In 1916, Nicolae C. Paulescu (1869–1931) succeeded in developing an aqueous pancreatic extract that normalized a diabetic dog. In 1921, he published 4 papers in the Society of Biology in Paris centering on the successful effects of the pancreatic extract in diabetic dogs. Research on the Role of the Pancreas in Food Assimilation by Paulescu was published in August 1921 in the Archives Internationales de Physiologie, Liège, Belgium. Initially, the only way to obtain insulin for clinical use was to extract it from the pancreas of another creature. Animal glands were obtainable as a waste product of the meatpacking industry. Insulin was derived primarily from cows (Eli Lilly and Company) and pigs (Nordisk Insulinlaboratorium). The making of eight ounces of purified insulin could require as much as two tons of pig parts. Insulin from these sources is effective in humans as it is highly similar to human insulin (three amino acid difference in bovine insulin, one amino acid difference in porcine). Initially, lower preparation purity resulted in allergic reactions to the presence of non-insulin substances. Purity has improved steadily since the 1920s ultimately reaching purity of 99% by the mid-1970s thanks to high-pressure liquid chromatography (HPLC) methods. Minor allergic reactions still occur occasionally, even to synthetic "human" insulin varieties. Beginning in 1982, biosynthetic "human" insulin has been manufactured for clinical use through genetic engineering techniques using recombinant DNA technology. Genentech developed the technique used to produce the first such insulin, Humulin, but did not commercially market the product themselves. Eli Lilly marketed Humulin in 1982. Humulin was the first medication produced using modern genetic engineering techniques in which actual human DNA is inserted into a host cell (E. coli in this case). The host cells are then allowed to grow and reproduce normally, and due to the inserted human DNA, they produce a synthetic version of human insulin. Manufacturers claim this reduces the presence of many impurities. 
However, the clinical preparations prepared from such insulins differ from endogenous human insulin in several important respects; an example is the absence of C-peptide which has in recent years been shown to have systemic effects itself. Novo Nordisk has also developed a genetically engineered insulin independently using a yeast process. According to a survey that the International Diabetes Federation conducted in 2002 on the access to and availability of insulin in its member countries, approximately 70% of the insulin that is currently sold in the world is recombinant, biosynthetic 'human' insulin. A majority of insulin used clinically today is produced this way, although clinical experience has provided conflicting evidence on whether these insulins are any less likely to produce an allergic reaction. Adverse reactions have been reported; these include loss of warning signs that patients may slip into a coma through hypoglycemia, convulsions, memory lapse and loss of concentration. However, the International Diabetes Federation's position statement from 2005 is very clear in stating that "there is NO overwhelming evidence to prefer one species of insulin over another" and "[modern, highly purified] animal insulins remain a perfectly acceptable alternative." Since January 2006, all insulins distributed in the US and some other countries are synthetic "human" insulins or their analogues. A special FDA importation process is required to obtain bovine or porcine derived insulin for use in the US, although there may be some remaining stocks of porcine insulin made by Lilly in 2005 or earlier, and porcine lente insulin is also sold and marketed under the brand name Vetsulin(SM) in the US for veterinary usage in the treatment of companion animals with diabetes. Basal insulin In type 1 diabetes, insulin production is extremely low, and as such the body requires exogenous insulin. Some people with type 2 diabetes, particularly those with very high hemoglobin A1c values, may also require a baseline rate of insulin, as their body is desensitized to the level of insulin being produced. Basal insulin regulates the body's blood glucose between mealtimes, as well as overnight. This basal rate of insulin action is generally achieved via the use of an intermediate-acting insulin (such as NPH) or a long-acting insulin analog. In type 1 diabetics, it may also be achieved via continuous infusion of rapid-acting insulin using an insulin pump. Approximately half of a person's daily insulin requirement is administered as a basal insulin, usually administered once per day at night. Prandial insulin When a person eats food containing carbohydrates and glucose, insulin helps regulate the body's metabolism of the food. Prandial insulin, also called mealtime or bolus insulin, is designed as a bolus dose of insulin prior to a meal to regulate the spike in blood glucose that occurs following a meal. The dose of prandial insulin may be static, or may be calculated by the patient using either their current blood sugar, planned carbohydrate intake, or both. This calculation may also be performed by an insulin pump in patients using a pump. Insulin regiments that consist of doses calculated in this manner are considered intensive insulin regimens. Prandial insulin is usually administered no more than 15–30 minutes prior to a meal using a rapid-acting insulin or a regular insulin. 
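As an illustration of the kind of arithmetic such a bolus calculation performs (a sketch for this text only, not a dosing recommendation and not drawn from the article), the fragment below adds a carbohydrate dose to a correction dose. The carbohydrate ratio, correction factor, and target glucose used here are hypothetical placeholders; in practice they are individualized by a clinician.

```python
# Illustrative sketch only, not medical advice: the arithmetic behind a
# simple prandial (bolus) insulin calculation. All parameters are
# hypothetical placeholders; real values are set individually by a clinician.
def prandial_bolus(carbs_g, current_glucose_mgdl, *,
                   carb_ratio=10.0,         # grams of carbohydrate covered by 1 unit (placeholder)
                   correction_factor=50.0,  # mg/dL glucose drop per 1 unit (placeholder)
                   target_glucose=120.0):   # mg/dL (placeholder)
    """Return a suggested bolus in units: carbohydrate dose plus correction dose."""
    carb_dose = carbs_g / carb_ratio
    correction_dose = max(0.0, (current_glucose_mgdl - target_glucose) / correction_factor)
    return round(carb_dose + correction_dose, 1)

# Example: a meal with 60 g of carbohydrate and a reading of 180 mg/dL
print(prandial_bolus(60, 180))   # 7.2 units with these placeholder parameters
```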
In some patients, a combination insulin may be used that contains both NPH (long acting) insulin and a rapid/regular insulin to provide both a basal insulin and prandial insulin. Challenges in treatment There are several challenges involved in the use of insulin as a clinical treatment for diabetes: Mode of administration. Selecting the 'right' dose and timing. The amount of carbohydrates one unit of insulin handles varies widely between persons and over the day but values between 7 and 20 grams per 1 IE is typical. Selecting an appropriate insulin preparation (typically on 'speed of onset and duration of action' grounds). Adjusting dosage and timing to fit food intake timing, amounts, and types. Adjusting dosage and timing to fit exercise undertaken. Adjusting dosage, type, and timing to fit other conditions, for instance the increased stress of illness. Variability in absorption into the bloodstream via subcutaneous delivery The dosage is non-physiological in that a subcutaneous bolus dose of insulin alone is administered instead of combination of insulin and C-peptide being released gradually and directly into the portal vein. It is simply a nuisance for people to inject whenever they eat carbohydrate or have a high blood glucose reading. It is dangerous in case of mistake (such as 'too much' insulin). Types Medical preparations of insulin are never just insulin in water (with nothing else). Clinical insulins are mixtures of insulin plus other substances including preservatives. These prevent the protein from spoiling or denaturing too rapidly, delay absorption of the insulin, adjust the pH of the solution to reduce reactions at the injection site, and so on. Slight variations of the human insulin molecule are called insulin analogues, (technically "insulin receptor ligands") so named because they are not technically insulin, rather they are analogues which retain the hormone's glucose management functionality. They have absorption and activity characteristics not currently possible with subcutaneously injected insulin proper. They are either absorbed rapidly in an attempt to mimic real beta cell insulin (as with insulin lispro, insulin aspart, and insulin glulisine), or steadily absorbed after injection instead of having a 'peak' followed by a more or less rapid decline in insulin action (as with insulin detemir and insulin glargine), all while retaining insulin's glucose-lowering action in the human body. However, a number of meta-analyses, including those done by the Cochrane Collaboration in 2005, Germany's Institute for Quality and Cost Effectiveness in the Health Care Sector [IQWiG] released in 2007, and the Canadian Agency for Drugs and Technology in Health (CADTH) also released in 2007 have shown no unequivocal advantages in clinical use of insulin analogues over more conventional insulin types. The commonly used types of insulin are as follows. Fast-acting (Rapid-acting) Includes the insulin analogues aspart, lispro, and glulisine. These begin to work within 5 to 15 minutes and are active for 3 to 4 hours. Most insulins form hexamers, which delay entry into the blood in active form; these analog insulins do not but have normal insulin activity. Newer varieties are now pending regulatory approval in the US which are designed to work rapidly, but retain the same genetic structure as regular human insulin. Short-acting Includes regular insulin, which begins working within 30 minutes and is active about 5 to 8 hours. 
Intermediate-acting Includes NPH insulin, which begins working in 1 to 3 hours and is active for 16 to 24 hours. Long-acting Includes the analogues glargine U100 and detemir, each of which begins working within 1 to 2 hours and continues to be active, without major peaks or dips, for about 24 hours, although this varies in many individuals. Ultra-long acting Includes the analogues insulin glargine U300 and degludec, which begin working within 30 to 90 minutes and continue to be active for greater than 24 hours. Combination insulin products Includes a combination of either fast-acting or short-acting insulin with a longer-acting insulin, typically an NPH insulin. The combination products begin to work with the shorter-acting insulin (5–15 minutes for fast-acting, and 30 minutes for short-acting), and remain active for 16–24 hours. There are several variations with different proportions of the mixed insulins (e.g. Novolog Mix 70/30 contains 70% aspart protamine [akin to NPH] and 30% aspart). Methods of administration Unlike many medicines, insulin cannot be taken orally at the present time. Like nearly all other proteins introduced into the gastrointestinal tract, it is reduced to fragments (single amino acid components), whereupon all activity is lost. There has been some research into ways to protect insulin from the digestive tract, so that it can be administered in a pill. So far this is entirely experimental. Subcutaneous Insulin is usually taken as subcutaneous injections by single-use syringes with needles, an insulin pump, or by repeated-use insulin pens with needles. People who wish to reduce repeated skin puncture of insulin injections often use an injection port in conjunction with syringes. The use of subcutaneous injections of insulin is designed to mimic the natural physiological cycle of insulin secretion, while taking into account the various properties of the formulations used such as half-life, onset of action, and duration of action. In many people, both a rapid- or short-acting insulin product as well as an intermediate- or long-acting product are used to decrease the number of injections per day. In some, insulin injections may be combined with other injection therapy such as GLP-1 receptor agonists. Cleansing of the injection site and injection technique are required to ensure effective insulin therapy. Insulin pump Insulin pumps are a reasonable solution for some. Advantages to the person are better control over background or basal insulin dosage, bolus doses calculated to fractions of a unit, and calculators in the pump that may help with determining bolus infusion dosages. The limitations are cost, the potential for hypoglycemic and hyperglycemic episodes, catheter problems, and no "closed loop" means of controlling insulin delivery based on current blood glucose levels. Insulin pumps may be like 'electrical injectors' attached to a temporarily implanted catheter or cannula. Some who cannot achieve adequate glucose control by conventional (or jet) injection are able to do so with the appropriate pump. Indwelling catheters pose the risk of infection and ulceration, and some people may also develop lipodystrophy due to the infusion sets. These risks can often be minimized by keeping infusion sites clean. Insulin pumps require care and effort to use correctly. Dosage and timing Dosage units One international unit of insulin (1 IU) is defined as the "biological equivalent" of 34.7 μg pure crystalline insulin.
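The arithmetic implied by this definition, together with the U-100 strength (100 units per milliliter) that is now standard in the US (see the history section below), can be sketched as follows; the 20-unit dose is purely illustrative.

```python
MICROGRAMS_PER_UNIT = 34.7   # mass "biological equivalent" of 1 IU of pure crystalline insulin
UNITS_PER_ML_U100 = 100      # U-100 formulation: 100 units per milliliter

def describe_dose(units):
    """Return the approximate insulin mass (µg) and U-100 volume (mL) of a dose."""
    return units * MICROGRAMS_PER_UNIT, units / UNITS_PER_ML_U100

mass_ug, volume_ml = describe_dose(20)   # a hypothetical 20-unit dose
print(f"20 IU ≈ {mass_ug:.0f} µg of insulin, drawn as {volume_ml:.2f} mL of U-100 solution")
# -> 20 IU ≈ 694 µg of insulin, drawn as 0.20 mL of U-100 solution
```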
The first definition of a unit of insulin was the amount required to induce hypoglycemia in a rabbit. This was set by James Collip at the University of Toronto in 1922. Of course, this was dependent on the size and diet of the rabbits. The unit of insulin was set by the insulin committee at the University of Toronto. The unit evolved eventually to the old USP insulin unit, where one unit (U) of insulin was set equal to the amount of insulin required to reduce the concentration of blood glucose in a fasting rabbit to 45 mg/dL (2.5 mmol/L). Once the chemical structure and mass of insulin were known, the unit of insulin was defined by the mass of pure crystalline insulin required to obtain the USP unit. The unit of measurement used in insulin therapy is not part of the International System of Units (abbreviated SI), which is the modern form of the metric system. Instead, the pharmacological international unit (IU) is defined by the WHO Expert Committee on Biological Standardization. Potential complications The central problem for those requiring external insulin is picking the right dose of insulin and the right timing. Physiological regulation of blood glucose, as in the non-diabetic, would be best. Increased blood glucose levels after a meal are a stimulus for prompt release of insulin from the pancreas. The increased insulin level causes glucose absorption and storage in cells, reduces glycogen-to-glucose conversion, reducing blood glucose levels, and so reducing insulin release. The result is that the blood glucose level rises somewhat after eating, and within an hour or so, returns to the normal 'fasting' level. Even the best diabetic treatment with synthetic human insulin or even insulin analogs, however administered, falls far short of normal glucose control in the non-diabetic. Complicating matters is that the composition of the food eaten (see glycemic index) affects intestinal absorption rates. Glucose from some foods is absorbed more (or less) rapidly than the same amount of glucose in other foods. In addition, fats and proteins cause delays in absorption of glucose from carbohydrates eaten at the same time. As well, exercise reduces the need for insulin even when all other factors remain the same, since working muscle has some ability to take up glucose without the help of insulin. Because of the complex and interacting factors, it is, in principle, impossible to know for certain how much insulin (and which type) is needed to 'cover' a particular meal to achieve a reasonable blood glucose level within an hour or two after eating. Non-diabetics' beta cells routinely and automatically manage this by continual glucose level monitoring and insulin release. All such decisions by a diabetic must be based on experience and training (i.e., at the direction of a physician, PA, or in some places a specialist diabetic educator) and, further, specifically based on the individual experience of the person. But it is not straightforward and should never be done by habit or routine. With some care, however, it can be done reasonably well in clinical practice. For example, some people with diabetes require more insulin after drinking skim milk than they do after taking an equivalent amount of fat, protein, carbohydrate, and fluid in some other form. Their particular reaction to skimmed milk is different from other people with diabetes, but the same amount of whole milk is likely to cause a still different reaction even in that person.
Whole milk contains considerable fat while skimmed milk has much less. It is a continual balancing act for all people with diabetes, especially for those taking insulin. People with insulin-dependent diabetes typically require some base level of insulin (basal insulin), as well as short-acting insulin to cover meals (bolus also known as mealtime or prandial insulin). Maintaining the basal rate and the bolus rate is a continuous balancing act that people with insulin-dependent diabetes must manage each day. This is normally achieved through regular blood tests, although continuous blood sugar testing equipment (Continuous Glucose Monitors or CGMs) are now becoming available which could help to refine this balancing act once widespread usage becomes common. Strategies A long-acting insulin is used to approximate the basal secretion of insulin by the pancreas, which varies in the course of the day. NPH/isophane, lente, ultralente, glargine, and detemir may be used for this purpose. The advantage of NPH is its low cost, the fact that you can mix it with short-acting forms of insulin, thereby minimizing the number of injections that must be administered, and that the activity of NPH will peak 4–6 hours after administration, allowing a bedtime dose to balance the tendency of glucose to rise with the dawn, along with a smaller morning dose to balance the lower afternoon basal need and possibly an afternoon dose to cover evening need. A disadvantage of bedtime NPH is that if not taken late enough (near midnight) to place its peak shortly before dawn, it has the potential of causing hypoglycemia. One theoretical advantage of glargine and detemir is that they only need to be administered once a day, although in practice many people find that neither lasts a full 24 hours. They can be administered at any time during the day as well, provided that they are given at the same time every day. Another advantage of long-acting insulins is that the basal component of an insulin regimen (providing a minimum level of insulin throughout the day) can be decoupled from the prandial or bolus component (providing mealtime coverage via ultra-short-acting insulins), while regimens using NPH and regular insulin have the disadvantage that any dose adjustment affects both basal and prandial coverage. Glargine and detemir are significantly more expensive than NPH, lente and ultralente, and they cannot be mixed with other forms of insulin. A short-acting insulin is used to simulate the endogenous insulin surge produced in anticipation of eating. Regular insulin, lispro, aspart and glulisine can be used for this purpose. Regular insulin should be given with about a 30-minute lead-time prior to the meal to be maximally effective and to minimize the possibility of hypoglycemia. Lispro, aspart and glulisine are approved for dosage with the first bite of the meal, and may even be effective if given after completing the meal. The short-acting insulin is also used to correct hyperglycemia. Sliding scales First described in 1934, what physicians typically refer to as sliding-scale insulin (SSI) is fast- or rapid-acting insulin only, given subcutaneously, typically at meal times and sometimes bedtime, but only when blood glucose is above a threshold (e.g. 10 mmol/L, 180 mg/dL). The so-called "sliding-scale" method is widely taught, although it has been heavily criticized. Sliding scale insulin (SSI) is not an effective way of managing long-term diabetes in individuals residing in nursing homes. 
Sliding scale insulin leads to greater discomfort and increased nursing time. Sample regimen using insulin glargine and insulin lispro: Insulin glargine: 20 units at bedtime Insulin Medication in Pregnancy During pregnancy, spontaneous hyperglycemia can develop and lead to gestational diabetes mellitus (GDM), a frequent pregnancy complication. With a prevalence of 6–20% among pregnant women globally, gestational diabetes mellitus (GDM) is defined as any degree of glucose intolerance developing or initially recognized during pregnancy. Neutral protamine Hagedorn (NPH) insulin has been the cornerstone of insulin therapy during pregnancy, administered two to four times per day. Women with GDM and pregnant women with type 1 diabetes mellitus who frequently check their blood glucose levels and use glucose monitoring equipment to do so may be treated with continuous infusion of a rapid-acting insulin analogue, such as lispro or aspart. However, a number of considerations go into choosing a regimen for administering insulin to patients. When managing GDM in pregnant women, these guidelines are crucial and can vary depending on physiological factors and, interestingly, on the sociocultural environment as well. The current perinatal guidelines recommend a low daily dose of insulin and take into account the woman's physiological features and the frequency of self-monitoring. The importance of using specialized insulin therapy planning based on parameters like those stated above, rather than a broad approach, is emphasized. Women with pre-existing diabetes have the highest levels of insulin sensitivity early in pregnancy. Close glucose monitoring is required to prevent hypoglycemia, which can potentially result in altered consciousness, seizures, and maternal injury. Low birth weight newborns might also be the result of hypoglycemia, especially in patients with type 1 diabetes, because they are frequently more insulin sensitive than persons with type 2 diabetes and more likely to be unaware of their hypoglycemic state. Close glucose monitoring is essential because after 16 weeks of pregnancy, women with preexisting diabetes become more insulin resistant and their insulin demands may fluctuate weekly. The need for insulin may rise from one pregnancy to the next. Therefore, it is realistic to expect higher needs for glucose control with subsequent pregnancies in multiparous women. As a performance-enhancing drug The possibility of using insulin in an attempt to improve athletic performance was suggested as early as the 1998 Winter Olympics in Nagano, Japan, as reported by Peter Sönksen in the July 2001 issue of Journal of Endocrinology. The question of whether non-diabetic athletes could legally use insulin was raised by a Russian medical officer. Whether insulin would actually improve athletic performance is unclear, but concerns about its use led the International Olympic Committee to ban use of the hormone by non-diabetic athletes in 1998. The book Game of Shadows (2006), by reporters Mark Fainaru-Wada and Lance Williams, included allegations that baseball player Barry Bonds used insulin (as well as other drugs) in the apparent belief that it would increase the effectiveness of the growth hormone he was alleged to be taking. Bonds eventually testified in front of a federal grand jury as part of a government investigation of BALCO. Bodybuilders in particular are claimed to be using exogenous insulin and other drugs in the belief that they will increase muscle mass.
Bodybuilders have been described as injecting up to 10 IU of regular synthetic insulin before eating sugary meals. A 2008 report suggested that insulin is sometimes used in combination with anabolic steroids and growth hormone (GH), and that "Athletes are exposing themselves to potential harm by self‐administering large doses of GH, IGF‐I and insulin". Insulin abuse has been mentioned as a possible factor in the deaths of bodybuilders Ghent Wakefield and Rich Piana. Insulin, human growth hormone (HGH) and insulin-like growth factor 1 (IGF-1) are self-administered by those looking to increase muscle mass beyond the scope offered by anabolic steroids alone. Their rationale is that since insulin and HGH act synergistically to promote growth, and since IGF-1 is a primary mediator of musculoskeletal growth, the 'stacking' of insulin, HGH and IGF-1 should offer a synergistic growth effect on skeletal muscle. This theory has been supported in recent years by top-level bodybuilders whose competition weight includes far more muscle than that of competitors in the past, along with even lower levels of body fat. Insulin effects on strength and exercise performance Exogenous insulin significantly boosts the rate of glucose metabolism in training athletes, along with a substantial increase in peak V̇O2. Insulin is thought to enhance performance by increasing protein synthesis, reducing protein catabolism, and facilitating the transfer of certain amino acids in human skeletal muscle. Insulin-treated athletes are perceived to have lean body mass because physiological hyperinsulinemia in human skeletal muscle improves the activity of amino acid transport, which in turn promotes protein synthesis. Insulin stimulates the transport of amino acids into cells and also controls glucose metabolism. It decreases lipolysis and increases lipogenesis, which is why bodybuilders and athletes use rhGH in conjunction with it so as to offset this negative effect while maximizing protein synthesis. Athletes have extrapolated from the physiology of the diabetic patient to the sporting arena because they are interested in the suppression of proteolysis. Insulin administration is found to be protein anabolic in the insulin-resistant state of chronic renal failure. It inhibits proteolysis and, when administered along with amino acids, it enhances net protein synthesis. Exogenous insulin injection creates an in-vivo hyperinsulinemic clamp, boosting muscle glycogen before and during the recovery phases of intense exercise. Power, strength, and stamina are all expected to increase as a result, and it might also speed up the healing process after intense physical activity. Second, insulin is expected to increase muscle mass by preventing the breakdown of muscle protein when administered along with a high-carbohydrate, high-protein diet. Although a limited number of studies do suggest that insulin medication can be abused as a pharmacological treatment to boost strength and performance in young, healthy people or athletes, a recent assessment of the research argues that this is only applicable to a small group of "drug-naïve" individuals. Abuse The abuse of exogenous insulin carries with it an attendant risk of hypoglycemic coma and death when the amount used is in excess of that required to handle ingested carbohydrate. Acute risks include brain damage, paralysis, and death. Symptoms may include dizziness, weakness, trembling, palpitations, seizures, confusion, headache, drowsiness, coma, diaphoresis and nausea.
All persons with overdoses should be referred for medical assessment and treatment, which may last for hours or days. Data from the US National Poison Data System (2013) indicates that 89.3% of insulin cases reported to poison centers are unintentional, as a result of therapeutic error. Another 10% of cases are intentional, and may reflect attempted suicide, abuse, criminal intent, secondary gain or other unknown reasons. Hypoglycemia that has been induced by exogenous insulin can be chemically detected by examining the ratio of insulin to C-peptide in peripheral circulation. It has been suggested that this type of approach could be used to detect exogenous insulin abuse by athletes. Detection in biological fluids Insulin is often measured in serum, plasma or blood in order to monitor therapy in people who are diabetic, confirm a diagnosis of poisoning in hospitalized persons or assist in a medicolegal investigation of suspicious death. Interpretation of the resulting insulin concentrations is complex, given the numerous types of insulin available, various routes of administration, the presence of anti-insulin antibodies in insulin-dependent diabetics and the ex vivo instability of the drug. Other potential confounding factors include the wide-ranging cross-reactivity of commercial insulin immunoassays for the biosynthetic insulin analogs, the use of high-dose intravenous insulin as an antidote to antihypertensive drug over dosage and postmortem redistribution of insulin within the body. The use of a chromatographic technique for insulin assay may be preferable to immunoassay in some circumstances, to avoid the issue of cross-reactivity affecting the quantitative result and also to assist identifying the specific type of insulin in the specimen. Combination with other antidiabetic drugs A combination therapy of insulin and other antidiabetic drugs appears to be most beneficial in people who are diabetic, who still have residual insulin secretory capacity. A combination of insulin therapy and sulfonylurea is more effective than insulin alone in treating people with type 2 diabetes after secondary failure to oral drugs, leading to better glucose profiles and/or decreased insulin needs. History Insulin was first used as a medication in Canada by Charles Best and Frederick Banting in 1922. This is a chronology of key milestones in the history of the medical use of insulin. For more details on the discovery, extraction, purification, clinical use, and synthesis of insulin, see Insulin 1921 Research on the role of pancreas in the nutritive assimilation 1922 Frederick Banting, Charles Best and James Collip use bovine insulin extract in humans at Connaught Laboratories in Toronto, Canada. 1922 Leonard Thompson becomes the first human to be treated with insulin. 1922 James D. Havens, son of former congressman James S. Havens, becomes the first American to be treated with insulin. 1922 Elizabeth Hughes Gossett, daughter of the US Secretary of State, becomes the first American to be (officially) treated in Toronto. 1923 Eli Lilly produces commercial quantities of much purer bovine insulin than Banting et al. 
had used 1923 Farbwerke Hoechst, one of the forerunners of today's Sanofi Aventis, produces commercial quantities of bovine insulin in Germany 1923 Hans Christian Hagedorn founds the Nordisk Insulinlaboratorium in Denmark – forerunner of today's Novo Nordisk 1923 Constance Collier returns to health after being successfully treated with insulin in Strasbourg 1926 Nordisk receives a Danish charter to produce insulin as a non-profit 1936 Canadians David M. Scott and Albert M. Fisher formulate a zinc insulin mixture at Connaught Laboratories in Toronto and license it to Novo 1936 Hagedorn discovers that adding protamine to insulin prolongs the duration of action of insulin 1946 Nordisk formulates Isophane porcine insulin aka Neutral Protamine Hagedorn or NPH insulin 1946 Nordisk crystallizes a protamine and insulin mixture 1950 Nordisk markets NPH insulin 1953 Novo formulates Lente porcine and bovine insulins by adding zinc for longer lasting insulin 1955 Frederick Sanger determines the amino acid sequence of insulin 1965 Insulin is synthesized by total synthesis by Wang Yinglai, Chen-Lu Tsou, et al. 1969 Dorothy Crowfoot Hodgkin characterizes and describes the crystal structure of insulin by X-ray crystallography 1973 Purified monocomponent (MC) insulin is introduced 1973 The US officially "standardized" insulin sold for human use in the US to U-100 (100 units per milliliter). Prior to that, insulin was sold in different strengths, including U-80 (80 units per milliliter) and U-40 formulations (40 units per milliliter), so the effort to "standardize" the potency aimed to reduce dosage errors and ease doctors' job of prescribing insulin for people. Other countries also followed suit. 1978 Genentech produces biosynthetic human insulin in Escherichia coli bacteria using recombinant DNA techniques, licenses to Eli Lilly 1981 Novo Nordisk chemically and enzymatically converts porcine to human insulin 1982 Genentech synthetic human insulin (above) approved 1983 Eli Lilly and Company produces biosynthetic human insulin with recombinant DNA technology, Humulin 1985 Axel Ullrich sequences a human cell membrane insulin receptor. 1988 Novo Nordisk produces recombinant biosynthetic human insulin 1996 Lilly Humalog "lispro" insulin analogue approved. 2000 Sanofi Aventis Lantus insulin "glargine" analogue approved for clinical use in the US and the EU. 2004 Sanofi Aventis Apidra insulin "glulisine" insulin analogue approved for clinical use in the US. 2006 Novo Nordisk Levemir "detemir" insulin analogue approved for clinical use in the US. 2008 Abbott Laboratories' FreeStyle Navigator CGM is approved. 2013 The US Food and Drug Administration (FDA) requested more cardiac safety tests for insulin degludec. 2015 Insulin degludec was approved by the FDA in September 2015. Society and culture Economics United States In the United States, the unit price of insulin has increased steadily from 1991 to 2019. It rose threefold from 2002 to 2013. Costs can be as high as US$900 per month. Concerns were raised in 2016 about pharmaceutical companies working together to increase prices. In January 2019, lawmakers from the United States House of Representatives sent letters to insulin manufacturers Eli Lilly and Company, Sanofi, and Novo Nordisk asking for explanations for their rapidly rising insulin prices. The annual cost of insulin for people with type 1 diabetes in the US almost doubled from $2,900 to $5,700 over the period from 2012 to 2016.
In 2019, it was estimated that people in the US pay two to six times more than the rest of the world for brand name prescription medicine, according to the International Federation of Health Plans. California, in July 2022, approved a budget that allocates $100 million for the state to create its own insulin at a close-to-cost price. Canada Canada, like many other industrialized countries, has price controls on the cost of pharmaceuticals. The Patented Medicine Prices Review Board ensures the price of patented medicine sold in Canada is "not excessive" and remains "comparable with prices in other countries." United Kingdom Insulin and all other medications are supplied free of charge by the National Health Services of the countries of the United Kingdom to people who use insulin to manage their diabetes. Regulatory status United States In March 2020, the FDA changed the regulatory pathway for approval of new insulin products. Insulin is regulated as a biologic rather than as a drug. The changed status gives the FDA more flexibility for approval and labeling. In July 2021, the FDA approved insulin glargine-yfgn (Semglee), a biosimilar product that contains the long-acting analog insulin glargine. Insulin glargine-yfgn is interchangeable with and less expensive than the reference product, insulin glargine (Lantus), which had been approved in 2000. The FDA requires that new insulin products are not inferior to existing insulin products with respect to reduction in hemoglobin A1c. Research Inhalation In 2006, the US Food and Drug Administration (FDA) approved the use of Exubera, the first inhalable insulin. It was withdrawn from the market by its maker in 2007 due to lack of acceptance. Inhaled insulin was claimed to have similar efficacy to injected insulin, both in terms of controlling glucose levels and blood half-life. Currently, inhaled insulin is short-acting and is typically taken before meals; an injection of long-acting insulin at night is often still required. When people were switched from injected to inhaled insulin, no significant difference was observed in HbA1c levels over three months. Accurate dosing was a particular problem, although people showed no significant weight gain or pulmonary function decline over the length of the trial when compared to the baseline. Following its commercial launch in 2005 in the United Kingdom, it was not (as of July 2006) recommended by the National Institute for Health and Clinical Excellence for routine use, except in cases where there is "proven injection phobia diagnosed by a psychiatrist or psychologist". In January 2008, the world's largest insulin manufacturer, Novo Nordisk, also announced that the company was discontinuing all further development of the company's own version of inhalable insulin, known as the AERx iDMS inhaled insulin system. Similarly, Eli Lilly and Company ended its efforts to develop its inhaled Air Insulin in March 2008. Afrezza, developed by MannKind, was authorized by the FDA in June 2014 for use in adults with type 1 and type 2 diabetes, with label restrictions advising against its use in those who have asthma, active lung cancer, or chronic obstructive pulmonary disease (COPD). Rapid-acting inhaled insulin is a component of the drug-device combination product that is used at the start of every meal. It employs technosphere technology, which appears to have a more practical delivery method and more dosing flexibility, and a new inhaled insulin formulation (2.5 m).
A thumb-sized inhaler with improved dosage flexibility is used to deliver inhalable insulin. It contains recombinant human insulin in a dry powder based on fumaryl diketopiperazine (FDKP). Technosphere insulin is quickly absorbed by the lung surface after inhalation. Within 12 hours of inhalation, both substances—the insulin and the FDKP powder—are virtually eliminated from healthy people's lungs. In comparison to Exubera (8–9%), just 0.3% of inhaled insulin was still present in the lungs after 12 hours. However, because increases in serum antibody levels (without substantial clinical changes), acute bronchospasm in asthmatic and COPD patients, and a significant reduction in diffusing capacity of the lung for carbon monoxide, in comparison to subcutaneous insulin, have all been reported with its use, Afrezza was given FDA approval with a warning and a Risk Evaluation and Mitigation Strategy. Transdermal There are several methods for transdermal delivery of insulin. Pulsatile insulin uses microjets to pulse insulin into the person, mimicking the physiological secretions of insulin by the pancreas. Jet injection had different insulin delivery peaks and durations as compared to needle injection. Some diabetics may prefer jet injectors to hypodermic injection. Both iontophoresis (which uses electricity) and ultrasound have been found to make the skin temporarily porous. The insulin administration aspect remains experimental, but the blood glucose test aspect of "wrist appliances" is commercially available. Researchers have produced a watch-like device that tests for blood glucose levels through the skin and administers corrective doses of insulin through pores in the skin. A similar device, but relying on skin-penetrating "microneedles", was in the animal testing stage in 2015. In the last couple of years, the use of chemical enhancers, electrical devices, and microneedle devices has shown tremendous promise for improving the penetration of insulin compared to passive transport via the skin. Transdermal insulin delivery offers a more patient-friendly and minimally invasive approach to daily diabetes care than conventional hypodermic injection; however, additional research is necessary to address issues such as long-term use, delivery efficiency, and reliability, as well as side effects involving inflammation and irritation. Intranasal Insulin can be delivered to the central nervous system via the intranasal (IN) route with little to no systemic uptake or associated peripheral side effects. It has been demonstrated that intranasally delivered insulin rapidly accumulates in cerebrospinal fluid (CSF), indicating effective transport to the brain. This accumulation is thought to occur along olfactory and nearby routes. Although numerous studies have published encouraging results, further study is still being conducted to understand its long-term effects before successful clinical application can begin. By mouth The basic appeal of hypoglycemic agents by mouth is that most people would prefer a pill or an oral liquid to an injection. However, insulin is a peptide hormone, which is digested in the stomach and gut, and in its current form it cannot be taken orally and still be effective at controlling blood sugar. The potential market for an oral form of insulin is assumed to be enormous, thus many laboratories have attempted to devise ways of moving enough intact insulin from the gut to the portal vein to have a measurable effect on blood sugar.
A number of derivatization and formulation strategies are currently being pursued in an attempt to develop an orally available insulin. Many of these approaches employ nanoparticle delivery systems and several are being tested in clinical trials. Pancreatic transplantation Another improvement would be transplantation of the pancreas or of beta cells to avoid periodic insulin administration. This would result in a self-regulating insulin source. Transplantation of an entire pancreas (as an individual organ) is difficult and relatively uncommon. It is often performed in conjunction with liver or kidney transplant, although it can be done by itself. It is also possible to do a transplantation of only the pancreatic beta cells. However, islet transplants were highly experimental for many years, until some researchers in Alberta, Canada, developed techniques with a high initial success rate (about 90% in one group). Nearly half of those who got an islet cell transplant were insulin-free one year after the operation; by the end of the second year that number had dropped to about one in seven. However, researchers at the University of Illinois at Chicago (UIC) have slightly modified the Edmonton Protocol procedure for islet cell transplantation and achieved insulin independence in diabetic people, with fewer but better-functioning pancreatic islet cells. Beta cell transplant may become practical. Additionally, some researchers have explored the possibility of transplanting genetically engineered non-beta cells to secrete insulin.
Biology and health sciences
Specific drugs
Health
13127410
https://en.wikipedia.org/wiki/Oberth%20effect
Oberth effect
In astronautics, a powered flyby, or Oberth maneuver, is a maneuver in which a spacecraft falls into a gravitational well and then uses its engines to further accelerate as it is falling, thereby achieving additional speed. The resulting maneuver is a more efficient way to gain kinetic energy than applying the same impulse outside of a gravitational well. The gain in efficiency is explained by the Oberth effect, wherein the use of a reaction engine at higher speeds generates a greater change in mechanical energy than its use at lower speeds. In practical terms, this means that the most energy-efficient method for a spacecraft to burn its fuel is at the lowest possible orbital periapsis, when its orbital velocity (and so, its kinetic energy) is greatest. In some cases, it is even worth spending fuel on slowing the spacecraft into a gravity well to take advantage of the efficiencies of the Oberth effect. The maneuver and effect are named after the person who first described them in 1927, Hermann Oberth, a Transylvanian Saxon physicist and a founder of modern rocketry. Because the vehicle remains near periapsis only for a short time, for the Oberth maneuver to be most effective the vehicle must be able to generate as much impulse as possible in the shortest possible time. As a result, the Oberth maneuver is much more useful for high-thrust rocket engines like liquid-propellant rockets, and less useful for low-thrust reaction engines such as ion drives, which take a long time to gain speed. Low-thrust rockets can use the Oberth effect by splitting a long departure burn into several short burns near the periapsis. The Oberth effect also can be used to understand the behavior of multi-stage rockets: the upper stage can generate much more usable kinetic energy than the total chemical energy of the propellants it carries. In terms of the energies involved, the Oberth effect is more effective at higher speeds because at high speed the propellant has significant kinetic energy in addition to its chemical potential energy. At higher speed the vehicle is able to employ the greater change (reduction) in kinetic energy of the propellant (as it is exhausted backward and hence at reduced speed and hence reduced kinetic energy) to generate a greater increase in kinetic energy of the vehicle. Explanation in terms of work and kinetic energy Because kinetic energy equals mv²/2, a given change in velocity imparts a greater increase in kinetic energy at a high velocity than it would at a low velocity. For example, considering a 2 kg rocket: at 1 m/s, the rocket starts with 1² = 1 J of kinetic energy. Adding 1 m/s increases the kinetic energy to 2² = 4 J, for a gain of 3 J; at 10 m/s, the rocket starts with 10² = 100 J of kinetic energy. Adding 1 m/s increases the kinetic energy to 11² = 121 J, for a gain of 21 J. This greater change in kinetic energy can then carry the rocket higher in the gravity well than if the propellant were burned at a lower speed. Description in terms of work The thrust produced by a rocket engine is independent of the rocket's velocity relative to the surrounding atmosphere. A rocket acting on a fixed object, as in a static firing, does no useful work on the rocket; the rocket's chemical energy is progressively converted to kinetic energy of the exhaust, plus heat. But when the rocket moves, its thrust acts through the distance it moves. Force multiplied by displacement is the definition of mechanical work.
The greater the velocity of the rocket and payload during the burn, the greater is the displacement and the work done, and the greater the increase in kinetic energy of the rocket and its payload. As the velocity of the rocket increases, progressively more of the available kinetic energy goes to the rocket and its payload, and less to the exhaust. This is shown as follows. The mechanical work done on the rocket is defined as the dot product of the force F of the engine's thrust and the displacement s it travels during the burn: W = F · s. If the burn is made in the prograde direction, the displacement is along the velocity, and the work results in a change in kinetic energy: ΔE_k = W. Differentiating with respect to time, we obtain dE_k/dt = F · v, where v is the velocity. Dividing by the instantaneous mass m to express this in terms of specific energy, we get de_k/dt = a · v, where a is the acceleration vector. Thus it can be readily seen that the rate of gain of specific energy of every part of the rocket is proportional to speed and, given this, the equation can be integrated (numerically or otherwise) to calculate the overall increase in specific energy of the rocket. Impulsive burn Integrating the above energy equation is often unnecessary if the burn duration is short. Short burns of chemical rocket engines close to periapsis or elsewhere are usually mathematically modeled as impulsive burns, where the force of the engine dominates any other forces that might change the vehicle's energy over the burn. For example, as a vehicle falls toward periapsis in any orbit (closed or escape orbits) the velocity relative to the central body increases. Briefly burning the engine (an "impulsive burn") prograde at periapsis increases the velocity by the same increment Δv as at any other time. However, since the vehicle's kinetic energy is related to the square of its velocity, this increase in velocity has a non-linear effect on the vehicle's kinetic energy, leaving it with higher energy than if the burn were achieved at any other time. Oberth calculation for a parabolic orbit If an impulsive burn of Δv is performed at periapsis in a parabolic orbit, then the velocity at periapsis before the burn is equal to the escape velocity (V_esc), and the specific kinetic energy after the burn is e_k = ½(V_esc + Δv)² = ½V_esc² + Δv·V_esc + ½Δv². When the vehicle leaves the gravity field, the loss of specific kinetic energy is ½V_esc², so it retains the energy Δv·V_esc + ½Δv², which is larger than the energy from a burn outside the gravitational field (½Δv²) by Δv·V_esc. When the vehicle has left the gravity well, it is traveling at a speed V = Δv·√(1 + 2V_esc/Δv). For the case where the added impulse Δv is small compared to escape velocity, the 1 can be ignored, and the effective Δv of the impulsive burn can be seen to be multiplied by a factor of simply √(2V_esc/Δv), and one gets V ≈ √(2V_esc·Δv). Similar effects happen in closed and hyperbolic orbits. Parabolic example If the vehicle travels at velocity v at the start of a burn that changes the velocity by Δv, then the change in specific orbital energy (SOE) due to the new orbit is v·Δv + ½Δv². Once the spacecraft is far from the planet again, the SOE is entirely kinetic, since gravitational potential energy approaches zero. Therefore, the larger the v at the time of the burn, the greater the final kinetic energy, and the higher the final velocity. The effect becomes more pronounced the closer to the central body, or more generally, the deeper in the gravitational field potential in which the burn occurs, since the velocity is higher there.
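The parabolic-orbit result above is easy to evaluate numerically. The sketch below implements the relation for the speed retained after leaving the gravity well, using figures that match the Jupiter flyby quoted in the next paragraph; the numbers are illustrative only.

```python
import math

def speed_far_from_body(v_esc, dv):
    """Speed far from the body after an impulsive prograde burn of dv at the
    periapsis of a parabolic orbit, where the periapsis speed equals v_esc."""
    return math.sqrt((v_esc + dv) ** 2 - v_esc ** 2)

v_esc, dv = 50.0, 5.0                      # km/s, matching the Jupiter example below
v_inf = speed_far_from_body(v_esc, dv)
print(f"speed at great distance: {v_inf:.1f} km/s")                 # ~22.9 km/s
print(f"effective multiplication of the burn: {v_inf / dv:.2f}")    # ~4.58
print(f"small-burn approximation sqrt(2*v_esc*dv): {math.sqrt(2 * v_esc * dv):.1f} km/s")
```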
So if a spacecraft is on a parabolic flyby of Jupiter with a periapsis velocity of 50 km/s and performs a 5 km/s burn, it turns out that the final velocity change at great distance is 22.9 km/s, giving a multiplication of the burn by 4.58 times. Paradox It may seem that the rocket is getting energy for free, which would violate conservation of energy. However, any gain to the rocket's kinetic energy is balanced by a relative decrease in the kinetic energy the exhaust is left with (the kinetic energy of the exhaust may still increase, but it does not increase as much). Contrast this to the situation of static firing, where the speed of the engine is fixed at zero. This means that its kinetic energy does not increase at all, and all the chemical energy released by the fuel is converted to the exhaust's kinetic energy (and heat). At very high speeds the mechanical power imparted to the rocket can exceed the total power liberated in the combustion of the propellant; this may also seem to violate conservation of energy. But the propellants in a fast-moving rocket carry energy not only chemically, but also in their own kinetic energy, which at speeds above a few kilometres per second exceeds the chemical component. When these propellants are burned, some of this kinetic energy is transferred to the rocket along with the chemical energy released by burning. The Oberth effect can therefore partly make up for the extremely low efficiency early in the rocket's flight, when it is moving only slowly. Most of the work done by a rocket early in flight is "invested" in the kinetic energy of the propellant not yet burned, part of which will be released later when the propellant is burned.
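A small numerical sketch of this bookkeeping makes the point concrete. With arbitrary illustrative values for vehicle mass, propellant mass and exhaust velocity, the total mechanical energy released by expelling the propellant is the same at every vehicle speed, while the vehicle's share of it grows with speed and the exhaust's share shrinks (eventually becoming negative):

```python
def burn_energy_budget(v, M=1000.0, m=1.0, v_e=3000.0):
    """Kinetic-energy bookkeeping for expelling a small propellant mass m (kg) at
    exhaust speed v_e (m/s, relative to the vehicle) from a vehicle of total mass
    M (kg) moving at speed v (m/s) in an inertial frame."""
    dv = m * v_e / (M - m)                                   # vehicle speed gain (momentum conservation)
    vehicle_gain = 0.5 * (M - m) * ((v + dv) ** 2 - v ** 2)  # change in vehicle kinetic energy
    exhaust_change = 0.5 * m * ((v - v_e) ** 2 - v ** 2)     # change in exhaust kinetic energy
    return vehicle_gain, exhaust_change

for v in (0.0, 3000.0, 8000.0):   # vehicle speeds, m/s
    gain, exhaust = burn_energy_budget(v)
    print(f"v = {v:.0f} m/s: vehicle {gain/1e3:+.1f} kJ, exhaust {exhaust/1e3:+.1f} kJ, "
          f"total {(gain + exhaust)/1e3:.1f} kJ")
# The total is identical in every case; only its division between vehicle and exhaust changes.
```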
Physical sciences
Orbital mechanics
Astronomy
6510422
https://en.wikipedia.org/wiki/Caenagnathidae
Caenagnathidae
Caenagnathidae is a family of derived caenagnathoid dinosaurs from the Cretaceous of North America and Asia. They are a member of the Oviraptorosauria, and relatives of the Oviraptoridae. Like other oviraptorosaurs, caenagnathids had specialized beaks, long necks, and short tails, and would have been covered in feathers. The relationships of caenagnathids were long a puzzle. The family was originally named by Raymond Martin Sternberg in 1940 as a family of flightless birds. The discovery of skeletons of the related oviraptorids revealed that they were in fact non-avian theropods, and the discovery of more complete caenagnathid remains revealed that Chirostenotes pergracilis, originally named on the basis of a pair of hands, and Citipes elegans, originally thought to be an ornithomimid, named from a foot, were caenagnathids as well. Discovery The name Caenagnathus (and hence Caenagnathidae) means "recent jaws"—when first discovered, it was thought that caenagnathids were close relatives of paleognath birds (such as the ostrich) based on features of the lower jaw. Since it would be unusual to find a recent group of birds in the Cretaceous, the name "recent jaws" was applied. Most paleontologists, however, now think that the birdlike features of the jaw were acquired convergently with modern birds. Description Caenagnathids were some of the largest oviraptorosaurs that ever existed. The largest members are represented by the enormous Beibeilong and Gigantoraptor, estimated around in length. Other caenagnathids were slightly smaller, such as the long Hagryphus, or the long Anzu. Overall, the anatomy of the caenagnathids is similar to that of the closely related Oviraptoridae, but there are a number of differences. In particular, caenagnathid jaws exhibited a distinct suite of specializations not seen in other oviraptorosaurs. Compared to the oviraptorids, the jaws tended to be relatively long and shallow, suggesting that the bite was not as powerful. The inside of the lower jaws also bore a complex series of ridges and toothlike processes, as well as a pair of horizontal, shelf-like structures. Furthermore, the jaws were unusual in being hollow and air filled, apparently being connected to the air sac system. Analysis of oviraptorosaur functional morphology reveals that caenagnathids were not as well suited to herbivory as more primitive oviraptorosaurs, and their close relatives, the oviraptorids. Their mandibles have a lower mechanical advantage anteriorly but not posteriorly, similar to the dromaeosaurids, and are less robust, which makes them more suited to quicker prey capture, with their pointed beaks aiding in slashing prey. Caenagnathids also tended to be more lightly built than the oviraptorids. They had slender arms and long, gracile legs, although they lacked the extreme cursorial specializations seen in avimimids and Caudipteryx. Classification The family Caenagnathidae, together with its sister group the Oviraptoridae, comprises the superfamily Caenagnathoidea. In phylogenetic taxonomy, the clade Caenagnathidae is defined as the most inclusive group containing Caenagnathus collinsi but not Oviraptor philoceratops. While before 2010s only about two to six species were commonly recognized as belonging to the Caenagnathidae, currently that number may be much greater, with new discoveries and theories about older species that may inflate this number to up to ten. Much of this historical difference centers on the first caenagnathid to be described, Chirostenotes pergracilis. 
Due to the poor preservation of most caenagnathid remains and resulting misidentifications, different bones and different specimens of Chirostenotes have historically been assigned to a number of different species. For example, the feet of one species, named Macrophalangia canadensis, were known from the same region from which Chirostenotes pergracilis was recovered, but the discovery of a new specimen with both hands and feet preserved provided the support to combine them, while the later discovery of a partial skull with hands and feet suggested that Chirostenotes and Caenagnathus were the same animal, and current studies of caenagnathid relationships continue to find them as closely related genera. Hendrickx and colleagues (2015) defined a subgroup of Caenagnathidae, the Caenagnathinae, as all caenagnathids more closely related to Caenagnathus collinsi than to Elmisaurus rarus. The group Elmisaurinae is defined as including all species more closely related to Elmisaurus rarus than to Caenagnathus collinsi. The cladogram below follows an analysis by Gregory Funston in 2020. Evolution The earliest known caenagnathid is Microvenator celer, from the Early Cretaceous Cloverly Formation. Caenagnathids likely dispersed to Asia from North America with some caenagnathids later reappearing in western North America, during the Campanian. Caenagnathids showed considerable variation in form. The tiny jaws of Caenagnathasia suggest a small animal, perhaps the size of a turkey. Anzu wyliei, from the Hell Creek Formation is a much larger animal, considerably larger than a human. If Gigantoraptor erlianensis is a caenagnathid, then it would represent far and away the largest member of the group, measuring up to in length and weighing up to . Their beaks also show considerable variation; that of Caenagnathasia is relatively short and deep, while that of Caenagnathus is long and shovel-shaped. This variation in size and beak shape suggests that caenagnathids evolved to exploit a range of ecological niches. Caenagnathids persisted up until the end of the Cretaceous period, as shown by the presence of Anzu and another, unnamed species of elmisaurine (all caenagnathids closer to Elmisaurus than to Caenagnathus) in the late Maastrichtian Hell Creek Formation, before vanishing at the end of the Cretaceous along with all other non-avian dinosaurs. Species Roughly a dozen caenagnathid species have been named, but it remains unclear how many are valid. Many species are known from fragmentary remains, such as jaws, hands, or feet, making comparisons between them difficult. Caenagnathus sternbergi, for example, was described on the basis of a jaw bone. It has been interpreted as either the jaws of Chirostenotes pergracilis (described on the basis of a pair of hands) or Chirostenotes elegans (described on the basis of a foot), but because no complete skeleton is known, it is difficult to be certain which animal it belongs to. The relationships of other species remain in doubt. Gigantoraptor was originally interpreted as an oviraptorid, but may in fact represent a primitive caenagnathid. 
Anzu wyliei - (Hell Creek Formation, North Dakota and South Dakota, United States) Apatoraptor pennatus - (Horseshoe Canyon Formation, Alberta) Beibeilong sinensis - (Gaogou Formation, China) Caenagnathasia martinsoni - (Bissekty Formation, Uzbekistan) Citipes elegans - (Dinosaur Park Formation, Alberta, Canada) Chirostenotes pergracilis - (Dinosaur Park Formation, Alberta, Canada) Caenagnathus collinsi - (Dinosaur Park Formation, Alberta, Canada) Elmisaurus rarus - (Nemegt Formation, Mongolia) Eoneophron infernalis - (Hell Creek Formation, South Dakota) Epichirostenotes curriei - (Horseshoe Canyon Formation, Alberta, Canada) Gigantoraptor erlianensis - (Iren Dabasu Formation, Inner Mongolia, China) Hagryphus giganteus - (Kaiparowits Formation, Utah, United States) Leptorhynchos gaddisi - (Aguja Formation, Texas, United States) Nomingia gobiensis - (Nemegt Formation, Mongolia) Ojoraptorsaurus boerei - (Ojo Alamo Formation, New Mexico, United States) Named caenagnathids are known chiefly from the Late Cretaceous of North America and Asia. Among the earliest and most primitive of the named caenagnathids is Caenagnathasia martinsoni, from the Bissekty Formation of Uzbekistan.
Biology and health sciences
Theropods
Animals
6512121
https://en.wikipedia.org/wiki/Igneous%20intrusion
Igneous intrusion
In geology, an igneous intrusion (or intrusive body or simply intrusion) is a body of intrusive igneous rock that forms by crystallization of magma slowly cooling below the surface of the Earth. Intrusions have a wide variety of forms and compositions, illustrated by examples like the Palisades Sill of New York and New Jersey; the Henry Mountains of Utah; the Bushveld Igneous Complex of South Africa; Shiprock in New Mexico; the Ardnamurchan intrusion in Scotland; and the Sierra Nevada Batholith of California. Because the solid country rock into which magma intrudes is an excellent insulator, cooling of the magma is extremely slow, and intrusive igneous rock is coarse-grained (phaneritic). Intrusive igneous rocks are classified separately from extrusive igneous rocks, generally on the basis of their mineral content. The relative amounts of quartz, alkali feldspar, plagioclase, and feldspathoid are particularly important in classifying intrusive igneous rocks. Intrusions must displace existing country rock to make room for themselves. The question of how this takes place is called the room problem, and it remains a subject of active investigation for many kinds of intrusions. The term pluton is poorly defined, but has been used to describe an intrusion emplaced at great depth; as a synonym for all igneous intrusions; as a dustbin category for intrusions whose size or character are not well determined; or as a name for a very large intrusion or for a crystallized magma chamber. A pluton that has intruded and obscured the contact between a terrane and adjacent rock is called a stitching pluton. Classification Intrusions are broadly divided into discordant intrusions, which cut across the existing structure of the country rock, and concordant intrusions, which intrude parallel to existing bedding or fabric. These are further classified according to such criteria as size, evident mode of origin, or whether they are tabular in shape. An intrusive suite is a group of intrusions related in time and space. Discordant intrusions Dikes Dikes are tabular discordant intrusions, taking the form of sheets that cut across existing rock beds. They tend to resist erosion, so that they stand out as natural walls on the landscape. They vary in thickness from millimeter-thick films to over and an individual sheet can have an area of . They also vary widely in composition. Dikes form by hydraulic fracturing of the country rock by magma under pressure, and are more common in regions of crustal tension. Ring dikes and cone sheets Ring dikes and cone sheets are dikes with particular forms that are associated with the formation of calderas. Volcanic necks Volcanic necks are feeder pipes for volcanoes that have been exposed by erosion. Surface exposures are typically cylindrical, but the intrusion often becomes elliptical or even cloverleaf-shaped at depth. Dikes often radiate from a volcanic neck, suggesting that necks tend to form at intersections of dikes where passage of magma is least obstructed. Diatremes and breccia pipes Diatremes and breccia pipes are pipe-like bodies of breccia that are formed by particular kinds of explosive eruptions. Because they reached the surface they are really extrusions, but the non-erupted material is an intrusion and, owing to erosion, may be difficult to distinguish from an intrusion that never reached the surface while still molten. The root material of a diatreme is identical to any nearby intrusive material that never reached what was then the surface.
Stocks A stock is a non-tabular discordant intrusion whose exposure covers less than . Although this seems arbitrary, particularly since the exposure may be only the tip of a larger intrusive body, the classification is meaningful for bodies which do not change much in area with depth and that have other features suggesting a distinctive origin and mode of emplacement. Batholiths Batholiths are discordant intrusions with an exposed area greater than . Some are of truly enormous size, and their lower contacts are very rarely exposed. For example, the Coastal Batholith of Peru is long and wide. They are usually formed from magma rich in silica, and never from gabbro or other rock rich in mafic minerals, but some batholiths are composed almost entirely of anorthosite. Concordant intrusions Sills A sill is a tabular concordant intrusion, typically taking the form of a sheet parallel to sedimentary beds. They are otherwise similar to dikes. Most are of mafic composition, relatively low in silica, which gives them the low viscosity necessary to penetrate between sedimentary beds. Laccoliths A laccolith is a concordant intrusion with a flat base and domed roof. Laccoliths typically form at shallow depth, less than , and in regions of crustal compression. Lopoliths and layered intrusions Lopoliths are concordant intrusions with a saucer shape, somewhat resembling an inverted laccolith, but they can be much larger and form by different processes. Their immense size promotes very slow cooling, and this produces an unusually complete mineral segregation called a layered intrusion. Formation The room problem The ultimate source of magma is partial melting of rock in the upper mantle and lower crust. This produces magma that is less dense than its source rock. For example, a granitic magma, which is high in silica, has a density of 2.4 Mg/m3, much less than the 2.8 Mg/m3 of high-grade metamorphic rock. This gives the magma tremendous buoyancy, so that ascent of the magma is inevitable once enough magma has accumulated. However, the question of precisely how large quantities of magma are able to shove aside country rock to make room for themselves (the room problem) is still a matter of research. The composition of the magma and country rock and the stresses affecting the country rock strongly influence the kinds of intrusions that take place. For example, where the crust is undergoing extension, magma can easily rise into tensional fractures in the upper crust to form dikes. Where the crust is under compression, magma at shallow depth will tend to form laccoliths instead, with the magma penetrating the least competent beds, such as shale beds. Ring dikes and cone sheets form only at shallow depth, where a plug of overlying country rock can be raised or lowered. The immense volumes of magma involved in batholiths can force their way upwards only when the magma is highly silicic and buoyant, and are likely do so as diapirs in the ductile deep crust and through a variety of other mechanisms in the brittle upper crust. Multiple and composite intrusions Igneous intrusions may form from a single magmatic event or several incremental events. Recent evidence suggests that incremental formation is more common for large intrusions. For example, the Palisades Sill was never a single body of magma thick, but was formed from multiple injections of magma. 
An intrusive body is described as multiple when it forms from repeated injections of magma of similar composition, and as composite when formed of repeated injections of magma of unlike composition. A composite dike can include rocks as different as granophyre and diabase. While there is often little visual evidence of multiple injections in the field, there is geochemical evidence. Zircon zoning provides important evidence for determining whether a single magmatic event or a series of injections was the method of emplacement. Large felsic intrusions likely form from melting of lower crust that has been heated by an intrusion of mafic magma from the upper mantle. The different densities of felsic and mafic magma limit mixing, so that the silicic magma floats on the mafic magma. Such limited mixing as takes place results in the small inclusions of mafic rock commonly found in granites and granodiorites. Cooling An intrusion of magma loses heat to the surrounding country rock through heat conduction. Near the contact of hot material with cold material, if the hot material is initially uniform in temperature, the temperature profile across the contact is given by the relationship T(x, t) = (T₁/2) · erfc(x / (2√(kt))), where T₁ is the initial temperature of the hot material (measured relative to the initial temperature of the country rock, with x taken as positive into the country rock), k is the thermal diffusivity (typically close to 10⁻⁶ m² s⁻¹ for most geologic materials), x is the distance from the contact, and t is the time since intrusion. This formula suggests that the magma close to the contact will be rapidly chilled while the country rock close to the contact is rapidly heated, while material further from the contact will be much slower to cool or heat. Thus a chilled margin is often found on the intrusion side of the contact, while a contact aureole is found on the country rock side. The chilled margin is much finer grained than most of the intrusion, and may be different in composition, reflecting the initial composition of the intrusion before fractional crystallization, assimilation of country rock, or further magmatic injections modified the composition of the rest of the intrusion. Isotherms (surfaces of constant temperature) propagate away from the margin according to a square root law, so that if the outermost meter of the magma takes ten years to cool to a given temperature, the next inward meter will take 40 years, the next will take 90 years, and so on. This is an idealization, and such processes as magma convection (where cooled magma next to the contact sinks to the bottom of the magma chamber and hotter magma takes its place) can alter the cooling process, reducing the thickness of chilled margins while hastening cooling of the intrusion as a whole. However, it is clear that thin dikes will cool much faster than larger intrusions, which explains why small intrusions near the surface (where the country rock is initially cold) are often nearly as fine-grained as volcanic rock. Structural features of the contact between intrusion and country rock give clues to the conditions under which the intrusion took place. Catazonal intrusions have a thick aureole that grades into the intrusive body with no sharp margin, indicating considerable chemical reaction between intrusion and country rock, and often have broad migmatite zones. Foliations in the intrusion and the surrounding country rock are roughly parallel, with indications of extreme deformation in the country rock. Such intrusions are interpreted as taking place at great depth.
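A minimal numerical sketch of the relationship above, assuming a 1000-degree initial temperature contrast between magma and country rock and the typical diffusivity quoted in the text, shows both the chilling of the margin and the square-root propagation of isotherms:

```python
import math

KAPPA = 1e-6    # thermal diffusivity, m^2/s (typical of geologic materials, as noted above)
T_MAGMA = 1000  # assumed initial temperature of the magma relative to the country rock

def contact_temperature(x_m, t_s):
    """Temperature (relative to the country rock's starting temperature) at signed
    distance x from the contact (positive into the country rock) after time t."""
    return 0.5 * T_MAGMA * math.erfc(x_m / (2.0 * math.sqrt(KAPPA * t_s)))

YEAR = 3.156e7  # seconds
for years in (1, 10, 100):
    profile = [round(contact_temperature(x, years * YEAR)) for x in (-2.0, 0.0, 2.0, 10.0)]
    print(f"after {years} yr, relative temperature at -2, 0, +2, +10 m: {profile}")

# Square-root law for isotherm propagation: time scales as distance squared,
# so 1 m in 10 years implies 2 m in 40 years and 3 m in 90 years.
print([10 * n ** 2 for n in (1, 2, 3)])   # -> [10, 40, 90]
```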
Mesozonal intrusions have a much lower degree of metamorphism in their contact aureoles, and the contact between country rock and intrusion is clearly discernible. Migmatites are rare and deformation of country rock is moderate. Such intrusions are interpreted as occurring at medium depth. Epizonal intrusions are discordant with country rock and have sharp contacts with chilled margins, with only limited metamorphism in a contact aureole, and often contain xenolithic fragments of country rock suggesting brittle fracturing. Such intrusions are interpreted as occurring at shallow depth, and are commonly associated with volcanic rocks and collapse structures. Cumulates An intrusion does not crystallize all minerals at once; rather, there is a sequence of crystallization that is reflected in the Bowen reaction series. Crystals formed early in cooling are generally denser than the remaining magma and can settle to the bottom of a large intrusive body. This forms a cumulate layer with distinctive texture and composition. Such cumulate layers may contain valuable ore deposits of chromite. The vast Bushveld Igneous Complex of South Africa includes cumulate layers of the rare rock type, chromitite, composed of 90% chromite.
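The conductive-cooling relationship and square-root law described in the Cooling section above can be illustrated with a short numerical sketch in Python. The initial magma temperature is an arbitrary assumed value; only the thermal diffusivity of about 10−6 m2 s−1 comes from the text.

import math

T0 = 1000.0          # assumed initial magma temperature, degrees C
k = 1.0e-6           # thermal diffusivity, m^2/s (typical geologic value)
YEAR = 3.15e7        # seconds in a year

def contact_temperature(x, t):
    # T(x, t) = (T0/2) * erfc(x / (2*sqrt(k*t))), x positive into the country rock
    return (T0 / 2.0) * math.erfc(x / (2.0 * math.sqrt(k * t)))

# Square-root law: the distance reached by a given isotherm scales as sqrt(time),
# so times in the ratio 10:40:90 years give distances in the ratio 1:2:3.
for years in (10, 40, 90):
    t = years * YEAR
    scale_m = 2.0 * math.sqrt(k * t)
    print(years, "yr: diffusion scale", round(scale_m, 1), "m,",
          "T at 1 m =", round(contact_temperature(1.0, t)), "C")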
Physical sciences
Igneous rocks
Earth science
141264
https://en.wikipedia.org/wiki/Prismatic%20blade
Prismatic blade
In archaeology, a prismatic blade is a long, narrow, specialized stone flake tool with a sharp edge, like a small razor blade. Prismatic blades are flaked from stone cores through pressure flaking or direct percussion. This process results in a very standardized finished tool and waste assemblage. The most famous and most prevalent prismatic blade material is obsidian, as obsidian use was widespread in Mesoamerica, though chert, flint, and chalcedony blades are not uncommon. The term is generally restricted to Mesoamerican archaeology, although some examples are found in the Old World, for example in a Minoan grave in Crete. Prismatic blades were used for cutting and scraping, and have been reshaped into other tool types, such as projectile points and awls. Morphology Prismatic blades are often trapezoidal in cross section, with a shape very close to an isosceles trapezoid. Triangular blades (in cross-section) are also common. The ventral surface of the prismatic blade is very smooth, sometimes bearing slight rippling reflecting the direction of applied force and a very small bulb of applied force (indicative of pressure reduction). Flake scars are absent on the ventral surface of these blades, though eraillure flakes are sometimes present on the bulb. The dorsal surface, on the other hand, exhibits scar ridges running parallel to the long axis of the blade. These facets are created by the previous removal of blades from the core. The proximal end contains the blade's striking platform and its bulb of applied force, while the distal end will consist of a snap break, a feather termination, or a stepped termination. Production Obsidian prismatic blade production was ubiquitous in Mesoamerica, and these tools can be found at a large majority of Mesoamerican archaeological sites from the Preclassic period on until the arrival of the Spanish in the early 16th century. Ethnohistoric sources recount the process of prismatic blade production; the Spanish observer Fray Motolinia, for example, recorded an eyewitness account of the technique. The production of prismatic blades creates not only a very standardized final product, but also a standardized waste assemblage. The analysis of obsidian debitage can reveal whether or not prismatic blade production occurred at a site and, if it had, what stages of production the process included. In other words, the types of manufacturing waste present (e.g., rejuvenation flakes and/or blades, platform rejuvenation flakes, etc.) at a site can inform archaeologists about the stage in which blades were being produced.
Technology
Hand tools
null
141496
https://en.wikipedia.org/wiki/Depth%20charge
Depth charge
A depth charge is an anti-submarine warfare (ASW) weapon designed to destroy submarines by detonating in the water near the target and subjecting it to a destructive hydraulic shock. Most depth charges use high explosives with a fuze set to detonate the charge, typically at a specific depth from the surface. Depth charges can be dropped by ships (typically fast, agile surface combatants such as destroyers or frigates), patrol aircraft and helicopters. Depth charges were developed during World War I, and were one of the first viable methods of attacking a submarine underwater. They were widely used in World War I and World War II, and remained part of the anti-submarine arsenals of many navies during the Cold War, during which they were supplemented, and later largely replaced, by anti-submarine homing torpedoes. A depth charge fitted with a nuclear warhead is also known as a "nuclear depth bomb". These were designed to be dropped from a patrol plane or deployed by an anti-submarine missile from a surface ship, or another submarine, located a safe distance away. By the late 1990s all nuclear anti-submarine weapons had been withdrawn from service by the United States, the United Kingdom, France, Russia and China. They have been replaced by conventional weapons whose accuracy and range had improved greatly as ASW technology improved. History The first attempt to fire charges against submerged targets was with aircraft bombs attached to lanyards which triggered them. A similar idea was a guncotton charge in a lanyarded can. Two of these lashed together became known as the "depth charge Type A". Problems with the lanyards tangling and failing to function led to the development of a chemical pellet trigger as the "Type B". These were effective at a distance of around . A 1913 Royal Navy Torpedo School report described a device intended for countermining, a "dropping mine". At Admiral John Jellicoe's request, the standard Mark II mine was fitted with a hydrostatic pistol (developed in 1914 by Thomas Firth and Sons of Sheffield) preset for firing, to be launched from a stern platform. Weighing , and effective at , the "cruiser mine" was a potential hazard to the dropping ship. The design work was carried out by Herbert Taylor at the RN Torpedo and Mine School, HMS Vernon. The first effective depth charge, the Type D, became available in January 1916. It was a barrel-like casing containing a high explosive (usually TNT, but amatol was also used when TNT became scarce). There were initially two sizes—Type D, with a charge for fast ships, and Type D* with a charge for ships too slow to leave the danger area before the more powerful charge detonated. A hydrostatic pistol actuated by water pressure at a pre-selected depth detonated the charge. Initial depth settings were . Because production could not keep up with demand, anti-submarine vessels initially carried only two depth charges, to be released from a chute at the stern of the ship. The first success was the sinking of U-68 off County Kerry, Ireland, on 22 March 1916, by the Q-ship Farnborough. Germany became aware of the depth charge following unsuccessful attacks on U-67 on 15 April 1916, and U-69 on 20 April 1916. The only other submarines sunk by depth charge during 1916 were UC-19 and UB-29. Numbers of depth charges carried per ship increased to four in June 1917, to six in August, and 30–50 by 1918. The weight of charges and racks caused ship instability unless heavy guns and torpedo tubes were removed to compensate. 
Improved pistols allowed greater depth settings in increments, from . Even slower ships could safely use the Type D at below and at or more, so the relatively ineffective Type D* was withdrawn. Monthly use of depth charges increased from 100 to 300 per month during 1917 to an average of 1745 per month during the last six months of World War I. The Type D could be detonated as deep as by that date. By the war's end, 74,441 depth charges had been issued by the RN, and 16,451 fired, scoring 38 kills in all, and aiding in 140 more. The United States requested full working drawings of the device in March 1917. Having received them, Commander Fullinwider of the U.S. Bureau of Naval Ordnance and U.S. Navy engineer Minkler made some modifications and then patented it in the U.S. It has been argued that this was done to avoid paying the original inventor. The Royal Navy Type D depth charge was designated the "Mark VII" in 1939. Initial sinking speed was with a terminal velocity of at a depth of if rolled off the stern, or upon water contact from a depth charge thrower. Cast iron weights of were attached to the Mark VII at the end of 1940 to increase sinking velocity to . New hydrostatic pistols increased the maximum detonation depth to . The Mark VII's amatol charge was estimated to be capable of splitting a submarine pressure hull at a distance of , and forcing the submarine to surface at twice that. The change of explosive to Torpex (or Minol) at the end of 1942 was estimated to increase those distances to . The British Mark X depth charge weighed and was launched from the torpedo tubes of older destroyers to achieve a sinking velocity of . The launching ship needed to clear the area at 11 knots to avoid damage, and the charge was seldom used. Only 32 were actually fired, and they were known to be troublesome. The teardrop-shaped United States Mark 9 depth charge entered service in the spring of 1943. The charge was of Torpex with a sinking speed of and depth settings of up to . Later versions increased depth to and sinking speed to with increased weight and improved streamlining. Although the explosions of the standard United States Mark 4 and Mark 7 depth charge used in World War II were nerve-wracking to the target, a U-boat's pressure hull would not rupture unless the charge detonated within about . Getting the weapon within this range was a matter of luck and quite unlikely as the target took evasive action. Most U-boats sunk by depth charges were destroyed by damage accumulated from an extended barrage rather than by a single charge, and many survived hundreds of depth charges over a period of many hours, such as U-427, which survived 678 depth charges in April 1945. Delivery mechanisms The first delivery mechanism was to simply roll the "ashcans" off racks at the stern of the moving attacking vessel. Originally depth charges were simply placed at the top of a ramp and allowed to roll. Improved racks, which could hold several depth charges and release them remotely with a trigger, were developed towards the end of the First World War. These racks remained in use throughout World War II because they were simple and easy to reload. Some Royal Navy trawlers used for anti-submarine work during 1917 and 1918 had a thrower on the forecastle for a single depth charge, but there do not seem to be any records of it being used in action. Specialized depth charge throwers were developed to generate a wider dispersal pattern when used in conjunction with rack-deployed charges. 
The first of these was developed from a British Army trench mortar. 1277 were issued, 174 installed in auxiliaries during 1917 and 1918. The bombs they launched were too light to be truly effective; only one U-boat is known to have been sunk by them. Thornycroft created an improved version able to throw a charge . The first was fitted in July 1917 and became operational in August. In all, 351 torpedo boat destroyers and 100 other craft were equipped. Projectors called "Y-guns" (in reference to their basic shape), developed by the U.S. Navy's Bureau of Ordnance from the Thornycroft thrower, became available in 1918. Mounted on the centerline of the ship with the arms of the Y pointing outboard, two depth charges were cradled on shuttles inserted into each arm. An explosive propellant charge was detonated in the vertical column of the Y-gun to propel a depth charge about over each side of the ship. The main disadvantage of the Y-gun was that it had to be mounted on the centerline of a ship's deck, which could otherwise be occupied by superstructure, masts, or guns. The first were built by New London Ship and Engine Company beginning on 24 November 1917. The K-gun, standardized in 1942, replaced the Y-gun as the primary depth charge projector. The K-guns fired one depth charge at a time and could be mounted on the periphery of a ship's deck, thus freeing valuable centerline space. Four to eight K-guns were typically mounted per ship. The K-guns were often used together with stern racks to create patterns of six to ten charges. In all cases, the attacking ship needed to be moving fast enough to get out of the danger zone before the charges exploded. Depth charges could also be dropped from an aircraft against submarines. At the start of World War II, Britain's primary aerial anti-submarine weapon was the anti-submarine bomb, but it was too light to be effective. To replace it, the Royal Navy's Mark VII depth charge was modified for aerial use by the addition of a streamlined nose fairing and stabilising fins on the tail; it entered service in 1941 as the Mark VII Airborne DC. Other designs followed in 1942. Experiencing the same problems as the RAF with ineffective anti-submarine bombs, Captain Birger Ek of Finnish Air Force squadron LeLv 6 contacted a navy friend to use Finnish Navy depth charges from aircraft, which led to his unit's Tupolev SB bombers being modified in early 1942 to carry depth charges. Later depth charges for dedicated aerial use were developed. These are still useful today and remain in use, particularly for shallow-water situations where a homing torpedo may not be effective. Depth charges are especially useful for "flushing the prey" in the event of a diesel submarine hiding on the bottom. Effectiveness The effective use of depth charges required the combined resources and skills of many individuals during an attack. Sonar, helm, depth charge crews and the movement of other ships had to be carefully coordinated. Aircraft depth charge tactics depended on the aircraft using its speed to rapidly appear from over the horizon and surprising the submarine on the surface (where it spent most of its time) during the day or night (at night using radar to detect the target and a Leigh light to illuminate it immediately before attacking), then quickly attacking once it had been located, as the submarine would normally crash dive to escape attack. 
As the Battle of the Atlantic wore on, British and Commonwealth forces became particularly adept at depth charge tactics, and formed some of the first destroyer hunter-killer groups to actively seek out and destroy German U-boats. Surface ships usually used ASDIC (sonar) to detect submerged submarines. However, to deliver its depth charges a ship had to pass over the contact to drop them over the stern; sonar contact would be lost just before attack, rendering the hunter blind at the crucial moment. This gave a skilful submarine commander an opportunity to take evasive action. In 1942 the forward-throwing "hedgehog" mortar, which fired a spread salvo of bombs with contact fuzes at a "stand-off" distance while still in sonar contact, was introduced, and proved to be effective. Pacific theater and the May Incident In the Pacific Theater during World War II, Japanese depth charge attacks were initially unsuccessful because they were unaware that the latest United States Navy submarines could dive so deep. Unless caught in shallow water, an American submarine could dive below the Japanese depth charge attack. The Japanese had used attack patterns based on the older United States S-class submarines (1918–1925) that had a test depth of , while the WWII Balao-class submarines (1943) could reach . This changed in June 1943 when U.S. Congressman Andrew J. May of the House Military Affairs Committee caused the May Incident. The congressman, who had just returned from the Pacific theater where he had received confidential intelligence and operational briefings from the US Navy, revealed at a press conference that there were deficiencies in Japanese depth-charge tactics. After various press associations reported the depth issue, the Japanese Imperial Navy began setting their depth charges to explode at a more effective average depth of . Vice Admiral Charles A. Lockwood, commander of the U.S. submarine fleet in the Pacific, later estimated that May's ill-advised comments cost the US Navy as many as ten submarines and 800 seamen killed in action. Later developments For the reasons expressed above, the depth charge was generally replaced as an anti-submarine weapon. Initially, this was by ahead-throwing weapons such as the British-developed Hedgehog and later Squid mortars. These weapons threw a pattern of warheads ahead of the attacking vessel to bracket a submerged contact. The Hedgehog was contact fuzed, while the Squid fired a pattern of three large depth charges with clockwork detonators. Later developments included the Mark 24 "Fido" acoustic homing torpedo (and later such weapons), and the SUBROC, which was armed with a nuclear depth charge. The USSR, United States and United Kingdom developed nuclear depth bombs. The Royal Navy retains a depth charge labelled as Mk11 Mod 3, which can be deployed from its AgustaWestland Wildcat and Merlin HM.2 helicopters. Russia has also developed homing (but unpropelled) depth charges including the S3V Zagon and the 90SG. China has also produced such weapons. Signaling During the Cold War, when it was necessary to inform submarines of the other side that they had been detected but without actually launching an attack, low-power "signalling depth charges" (also called "practice depth charges") were sometimes used, powerful enough to be detected when no other means of communication was possible, but not destructive. Underwater explosions The high explosive in a depth charge undergoes a rapid chemical reaction at an approximate rate of .
The gaseous products of that reaction momentarily occupy the volume previously occupied by the solid explosive, but at very high pressure. This pressure is the source of the damage and is proportional to the explosive density and the square of the detonation velocity. A depth charge gas bubble expands to equalize with the pressure of the surrounding water. This gas expansion propagates a shock wave. The density difference of the expanding gas bubble from the surrounding water causes the bubble to rise toward the surface. Unless the explosion is shallow enough to vent the gas bubble to the atmosphere during its initial expansion, the momentum of water moving away from the gas bubble will create a gaseous void of lower pressure than the surrounding water. Surrounding water pressure then collapses the gas bubble with inward momentum causing excess pressure within the gas bubble. Re-expansion of the gas bubble then propagates another potentially damaging shock wave. Cyclical expansion and contraction can continue for several seconds until the gas bubble vents to the atmosphere. Consequently, explosions where the depth charge is detonated at a shallow depth and the gas bubble vents into the atmosphere very soon after the detonation are quite ineffective, even though they are more dramatic and therefore preferred in movies. A sign of an effective detonation depth is that the surface just slightly rises and only after a while vents into a water burst. Very large depth charges, including nuclear weapons, may be detonated at sufficient depth to create multiple damaging shock waves. Such depth charges can also cause damage at longer distances, if reflected shock waves from the ocean floor or surface converge to amplify radial shock waves. Submarines or surface ships may be damaged if operating in the convergence zones of their own depth charge detonations. The damage that an underwater explosion inflicts on a submarine comes from a primary and a secondary shock wave. The primary shock wave is the initial shock wave of the depth charge, and will cause damage to personnel and equipment inside the submarine if detonated close enough. The secondary shock wave is a result of the cyclical expansion and contraction of the gas bubble and will bend the submarine back and forth and cause catastrophic hull breach, in a way that can be likened to bending a plastic ruler rapidly back and forth until it snaps. Up to sixteen cycles of secondary shock waves have been recorded in tests. The effect of the secondary shock wave can be reinforced if another depth charge detonates on the other side of the hull in close time proximity to the first detonation, which is why depth charges are normally launched in pairs with different pre-set detonation depths. The killing radius of a depth charge depends on the depth of detonation, the payload of the depth charge and the size and strength of the submarine hull. A depth charge of approximately of TNT (400 MJ) would normally have a killing radius (resulting in a hull breach) of only against a conventional 1000-ton submarine, while the disablement radius (where the submarine is not sunk but is put out of commission) would be approximately . A larger payload increases the radius only slightly because the effect of an underwater explosion decreases as the cube of the distance to the target.
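The cube-law scaling mentioned above can be made concrete with a minimal Python sketch; the distances and charge factors are arbitrary illustrative values, not figures from the article.

# Damaging effect of an underwater explosion falling off roughly as 1/r^3.
def relative_effect(distance_m):
    return 1.0 / distance_m ** 3      # relative to the effect at 1 m

for r in (1, 2, 5, 10):
    print(r, "m ->", relative_effect(r))

# Matching a given effect at radius r therefore needs a charge proportional to r^3,
# so enlarging the charge only slightly enlarges the effective radius:
for factor in (2, 4, 8):
    print(factor, "x charge ->", round(factor ** (1.0 / 3.0), 2), "x radius")

Doubling the charge, for instance, extends the effective radius by only a factor of about 1.26, which is why accumulated damage from many charges, rather than a single larger one, sank most submarines.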
Technology
Naval warfare
null
141738
https://en.wikipedia.org/wiki/Comet%20Hyakutake
Comet Hyakutake
Comet Hyakutake (formally designated C/1996 B2) is a comet discovered on 31 January 1996. It was dubbed the Great Comet of 1996; its passage to within of the Earth on 25 March was one of the closest cometary approaches of the previous 200 years. Reaching an apparent visual magnitude of zero and spanning nearly 80°, Hyakutake appeared very bright in the night sky and was widely seen around the world. The comet temporarily upstaged the much anticipated Comet Hale–Bopp, which was approaching the inner Solar System at the time. Hyakutake is a long-period comet that passed perihelion on 1 May 1996. Before its most recent passage through the Solar System, its orbital period was about 17,000 years, but the gravitational perturbation of the giant planets has increased this period to 70,000 years. This is the first comet to have an X-ray emission detected, which is most likely the result of ionised solar wind particles interacting with neutral atoms in the coma of the comet. The Ulysses spacecraft fortuitously crossed the comet's tail at a distance of more than from the nucleus, showing that Hyakutake had the longest tail known for a comet. Discovery The comet was discovered on 30 January 1996, by Yuji Hyakutake, an amateur astronomer from southern Japan. He had been searching for comets for years and had moved to Kagoshima Prefecture partly for the dark skies in nearby rural areas. He was using a powerful set of binoculars with objective lenses to scan the skies on the night of the discovery. This comet was actually the second Comet Hyakutake; Hyakutake had discovered comet C/1995 Y1 several weeks earlier. While re-observing his first comet (which never became visible to the naked eye) and the surrounding patch of sky, Hyakutake was surprised to find another comet in almost the same position as the first had been. Hardly believing a second discovery so soon after the first, Hyakutake reported his observation to the National Astronomical Observatory of Japan the following morning. Later that day, the discovery was confirmed by independent observations. At the time of its discovery, the comet was shining at magnitude 11.0 and had a coma approximately 2.5 arcminutes across. It was approximately 2 astronomical units (AU) from the Sun. Later, a precovery image of the comet was found on a photograph taken on January 1, when the comet was about 2.4 AU from the Sun and had a magnitude of 13.3. Orbit When the first calculations of the comet's orbit were made, scientists realized that it was going to pass just 0.1 AU from Earth on 25 March. Only four comets in the previous century had passed closer. Comet Hale–Bopp was already being discussed as a possible "great comet"; the astronomical community eventually realised that Hyakutake might also become spectacular because of its close approach. Moreover, Comet Hyakutake's orbit meant that it had last been to the inner Solar System approximately 17,000 years earlier. Because it had probably passed close to the Sun several times before, the approach in 1996 would not be a maiden arrival from the Oort cloud, a place from where comets with orbital periods of millions of years come. Comets entering the inner Solar System for the first time may brighten rapidly before fading as they near the Sun, because a layer of highly volatile material evaporates. This was the case with Comet Kohoutek in 1973; it was initially touted as potentially spectacular, but only appeared moderately bright. Older comets show a more consistent brightening pattern. 
Thus, all indications suggested Comet Hyakutake would be bright. Besides approaching close to Earth, the comet would also be visible throughout the night to northern hemisphere observers at its closest approach because of its path, passing very close to the pole star. This would be an unusual occurrence, because most comets are close to the Sun in the sky when they are at their brightest, and so appear in a sky that is not completely dark. Earth passage Hyakutake became visible to the naked eye in early March 1996. By mid-March, the comet was still fairly unremarkable, shining at 4th magnitude with a tail about 5 degrees long. As it neared its closest approach to Earth, it rapidly became brighter, and its tail grew in length. By March 24, the comet was one of the brightest objects in the night sky, and its tail stretched 35 degrees. The comet had a notably bluish-green colour. The closest approach occurred on 25 March at a distance of . Hyakutake was moving so rapidly across the night sky that its movement could be detected against the stars in just a few minutes; it covered the diameter of a full moon (half a degree) every 30 minutes. Observers estimated its magnitude as around 0, and tail lengths of up to 80 degrees were reported. Its coma, now close to the zenith for observers at mid-northern latitudes, appeared approximately 1.5 to 2 degrees across, roughly four times the diameter of the full moon. The comet's head appeared distinctly blue-green, possibly due to emissions from diatomic carbon (C2) combined with sunlight reflected from dust grains. Because Hyakutake was at its brightest for only a few days, it did not have time to permeate the public imagination in the way that Comet Hale–Bopp did the following year. Many European observers in particular did not see the comet at its peak because of unfavourable weather conditions. Perihelion and afterwards After its close approach to the Earth, the comet faded to about 2nd magnitude. It reached perihelion on 1 May 1996, brightening again and exhibiting a dust tail in addition to the gas tail seen as it passed the Earth. By this time, however, it was close to the Sun and was not seen as easily. It was observed passing perihelion by the SOHO Sun-observing satellite, which also recorded a large coronal mass ejection being formed at the same time. The comet's distance from the Sun at perihelion was 0.23 AU, well inside the orbit of Mercury. After its perihelion passage, Hyakutake faded rapidly and was lost to naked-eye visibility by the end of May. Its orbital path carried it rapidly into the southern skies, but following perihelion it was much less closely monitored. The last known observation of the comet took place on November 2. Hyakutake had passed through the inner Solar System approximately 17,000 years ago; gravitational interactions with the gas giants during its 1996 passage stretched its orbit greatly, and barycentric fits to the comet's orbit predict it will not return to the inner Solar System again for approximately 70,000 years. Scientific results Spacecraft passes through the tail The Ulysses spacecraft made an unexpected pass through the tail of the comet on 1 May 1996. Evidence of the encounter was not noticed until 1998. Astronomers analysing old data found that Ulysses instruments had detected a large drop in the number of protons passing, as well as a change in the direction and strength of the local magnetic field.
This implied that the spacecraft had crossed the 'wake' of an object, most likely a comet; the object responsible was not immediately identified. In 2000, two teams independently analyzed the same event. The magnetometer team realized that the changes in the direction of the magnetic field mentioned above agreed with the "draping" pattern expected in a comet's ion, or plasma tail. The magnetometer team looked for likely suspects. No known comets were located near the satellite, but looking further afield, they found that Hyakutake, away, had crossed Ulysses' orbital plane on 23 April 1996. The solar wind had a velocity at the time of about , at which speed it would have taken eight days for the tail to be carried out to where the spacecraft was situated at 3.73 AU, approximately 45 degrees out of the ecliptic plane. The orientation of the ion tail inferred from the magnetic field measurements agreed with the source lying in Comet Hyakutake's orbital plane. The other team, working on data from the spacecraft's ion composition spectrometer, discovered a sudden large spike in detected levels of ionised particles at the same time. The relative abundances of chemical elements detected indicated that the object responsible was definitely a comet. Based on the Ulysses encounter, the comet's tail is known to have been at least 570 million km (360 million miles; 3.8 AU) long. This is almost twice as long as the previous longest-known cometary tail, that of the Great Comet of 1843, which was 2 AU long. This record was broken in 2002 by comet 153P/Ikeya–Zhang, which had a tail-length of at least . Composition Terrestrial observers found ethane and methane in the comet, the first time either of these gases had been detected in a comet. Chemical analysis showed that the abundances of ethane and methane were roughly equal, which may imply that its ices formed in interstellar space, away from the Sun, which would have evaporated these volatile molecules. Hyakutake's ices must have formed at temperatures of 20 K or less, indicating that it probably formed in a denser-than-average interstellar cloud. The amount of deuterium in the comet's water ices was determined through spectroscopic observations. It was found that the ratio of deuterium to hydrogen (known as the D/H ratio) was roughly twice the value measured in Earth's oceans. It has been proposed that cometary collisions with Earth might have supplied a large proportion of the water in the oceans, but the high D/H ratio measured in Hyakutake and other comets such as Hale–Bopp and Halley's Comet has caused problems for this theory. X-ray emission One of the great surprises of Hyakutake's passage through the inner Solar System was the discovery that it was emitting X-rays, with observations made using the ROSAT satellite revealing very strong X-ray emission. This was the first time a comet had been seen to do so, but astronomers soon found that almost every comet they looked at was emitting X-rays. The emission from Hyakutake was brightest in a crescent shape surrounding the nucleus with the ends of the crescent pointing away from the Sun. The cause of the X-ray emission is thought to be a combination of two mechanisms. Interactions between energetic solar wind particles and cometary material evaporating from the nucleus are likely to contribute significantly to this effect.
Reflection of solar X-rays is seen in other Solar System objects such as the Moon, but a simple calculation assuming even the highest X-ray reflectivity possible per molecule or dust grain is not able to explain the majority of the observed flux from Hyakutake, as the comet's atmosphere is very tenuous and diffuse. Observations of comet C/1999 S4 (LINEAR) with the Chandra satellite in 2000 determined that X-rays observed from that comet were produced predominantly by charge exchange collisions between highly charged carbon, oxygen and nitrogen minor ions in the solar wind, and neutral water, oxygen and hydrogen in the comet's coma. Nucleus size and activity Radar results from the Arecibo Observatory indicated that the comet nucleus was about across, and surrounded by a flurry of pebble-sized particles ejected at a few metres per second. This size measurement corresponded well with indirect estimates using infrared emission and radio observations. The small size of the nucleus (Halley's Comet is about across, while Comet Hale–Bopp was about across) implies that Hyakutake must have been very active to become as bright as it did. Most comets undergo outgassing from a small proportion of their surface, but most or all of Hyakutake's surface seemed to have been active. The dust production rate was estimated to be about 2 kg/s at the beginning of March, rising to 3 kg/s as the comet approached perihelion. During the same period, dust ejection velocities increased from 50 m/s to 500 m/s. Observations of material being ejected from the nucleus allowed astronomers to establish its rotation period. As the comet passed the Earth, a large puff or blob of material was observed being ejected in the sunward direction every 6.23 hours. A second smaller ejection with the same period confirmed this as the rotation period of the nucleus.
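The timing argument in the Ulysses tail-crossing discussion above can be checked with simple arithmetic. In this Python sketch, the 3.73 AU spacecraft distance is taken from the text, while the solar-wind speed is an assumed typical fast-wind value of about 750 km/s rather than the measured figure.

AU = 1.496e11            # metres per astronomical unit
distance_m = 3.73 * AU   # spacecraft distance quoted in the article
wind_speed = 750e3       # assumed solar-wind speed, m/s

travel_time_days = distance_m / wind_speed / 86400.0
print(round(travel_time_days, 1), "days")   # about 8.6 days, consistent with ~8 days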
Physical sciences
Notable comets
Astronomy
141828
https://en.wikipedia.org/wiki/Isthmus
Isthmus
An isthmus (plural: isthmuses or isthmi) is a narrow piece of land connecting two larger areas across an expanse of water by which they are otherwise separated. A tombolo is an isthmus that consists of a spit or bar, and a strait is the sea counterpart of an isthmus, a narrow stretch of sea between two landmasses that connects two larger bodies of water. Isthmus vs land bridge vs peninsula Isthmus and land bridge are related terms, with isthmus having a broader meaning. A land bridge is an isthmus connecting Earth's major land masses. The term land bridge is usually used in biogeology to describe land connections that used to exist between continents at various times and were important for the migration of people and various species of animals and plants, e.g. Beringia and Doggerland. An isthmus is a land connection between two bigger landmasses, while a peninsula is rather a land protrusion that is connected to a bigger landmass on one side only and surrounded by water on all other sides. Technically, an isthmus can have canals running from coast to coast (e.g. the Panama Canal), and thus resemble two peninsulas; however, canals are artificial features distinguished from straits. Major isthmuses The world's major isthmuses include: Karelian Isthmus in Europe Kra Isthmus in Mainland Southeast Asia Bird's Neck Isthmus in Western New Guinea Isthmus of Tehuantepec in Middle America Isthmus of Perekop in Ukraine Isthmus of Panama in Middle America Isthmus of Suez between North Africa and Western Asia Of historic importance were: Isthmus of Catanzaro in Italy Isthmus of Corinth in Greece The cities of Auckland, Madison, Manila, and Seattle are located on isthmuses. Canals Canals are often built across isthmuses, where they may be a particularly advantageous shortcut for marine transport. For example: The Panama Canal crosses the Isthmus of Panama, connecting the Atlantic and Pacific Oceans The Suez Canal connects the Mediterranean Sea (part of the Atlantic Ocean) and the Red Sea (part of the Indian Ocean), cutting across the western side of the Isthmus of Suez, formed by the Sinai Peninsula The Crinan Canal crosses the isthmus between Loch Crinan and Loch Gilp, which connects the Kintyre peninsula with the rest of Scotland The Welland Canal in the Niagara Peninsula (technically an isthmus); it connects Lake Ontario to Lake Erie The Corinth Canal connects the Gulf of Corinth in the Ionian Sea with the Saronic Gulf in the Aegean Sea
Physical sciences
Oceanic and coastal landforms
Earth science
141888
https://en.wikipedia.org/wiki/Aluminium%20oxide
Aluminium oxide
Aluminium oxide (or aluminium(III) oxide) is a chemical compound of aluminium and oxygen with the chemical formula Al2O3. It is the most commonly occurring of several aluminium oxides, and specifically identified as aluminium(III) oxide. It is commonly called alumina and may also be called aloxide, aloxite, or alundum in various forms and applications. It occurs naturally in its crystalline polymorphic phase α-Al2O3 as the mineral corundum, varieties of which form the precious gemstones ruby and sapphire. Al2O3 is used to produce aluminium metal, as an abrasive owing to its hardness, and as a refractory material owing to its high melting point. Natural occurrence Corundum is the most common naturally occurring crystalline form of aluminium oxide. Rubies and sapphires are gem-quality forms of corundum, which owe their characteristic colours to trace impurities. Rubies are given their characteristic deep red colour and their laser qualities by traces of chromium. Sapphires come in different colours given by various other impurities, such as iron and titanium. An extremely rare δ form occurs as the mineral deltalumite. History The field of aluminium oxide ceramics has a long history. Aluminium salts were widely used in ancient and medieval alchemy. Several older textbooks cover the history of the field. A 2019 textbook by Andrew Ruys contains a detailed timeline on the history of aluminium oxide from ancient times to the 21st century. Properties Al2O3 is an electrical insulator but has a relatively high thermal conductivity () for a ceramic material. Aluminium oxide is insoluble in water. In its most commonly occurring crystalline form, called corundum or α-aluminium oxide, its hardness makes it suitable for use as an abrasive and as a component in cutting tools. Aluminium oxide is responsible for the resistance of metallic aluminium to weathering. Metallic aluminium is very reactive with atmospheric oxygen, and a thin passivation layer of aluminium oxide (4 nm thickness) forms on any exposed aluminium surface in a matter of hundreds of picoseconds. This layer protects the metal from further oxidation. The thickness and properties of this oxide layer can be enhanced using a process called anodising. A number of alloys, such as aluminium bronzes, exploit this property by including a proportion of aluminium in the alloy to enhance corrosion resistance. The aluminium oxide generated by anodising is typically amorphous, but discharge-assisted oxidation processes such as plasma electrolytic oxidation result in a significant proportion of crystalline aluminium oxide in the coating, enhancing its hardness. Aluminium oxide was taken off the United States Environmental Protection Agency's chemicals lists in 1988. Aluminium oxide is on the EPA's Toxics Release Inventory list if it is a fibrous form. Amphoteric nature Aluminium oxide is an amphoteric substance, meaning it can react with both acids and bases, such as hydrofluoric acid and sodium hydroxide, acting as an acid with a base and a base with an acid, neutralising the other and producing a salt. Al2O3 + 6 HF → 2 AlF3 + 3 H2O Al2O3 + 2 NaOH + 3 H2O → 2 NaAl(OH)4 (sodium aluminate) Structure The most common form of crystalline aluminium oxide is known as corundum, which is the thermodynamically stable form. The oxygen ions form a nearly hexagonal close-packed structure with the aluminium ions filling two-thirds of the octahedral interstices. Each Al3+ center is octahedral.
In terms of its crystallography, corundum adopts a trigonal Bravais lattice with a space group of R3̄c (number 167 in the International Tables). The primitive cell contains two formula units of aluminium oxide. Aluminium oxide also exists in other metastable phases, including the cubic γ and η phases, the monoclinic θ phase, the hexagonal χ phase, the orthorhombic κ phase and the δ phase that can be tetragonal or orthorhombic. Each has a unique crystal structure and properties. Cubic γ-Al2O3 has important technical applications. The so-called β-Al2O3 proved to be NaAl11O17. Molten aluminium oxide near the melting temperature is roughly 2/3 tetrahedral (i.e. 2/3 of the Al are surrounded by 4 oxygen neighbors), and 1/3 5-coordinated, with very little (<5%) octahedral Al-O present. Around 80% of the oxygen atoms are shared among three or more Al-O polyhedra, and the majority of inter-polyhedral connections are corner-sharing, with the remaining 10–20% being edge-sharing. The breakdown of octahedra upon melting is accompanied by a relatively large volume increase (~33%); the density of the liquid close to its melting point is 2.93 g/cm3. The structure of molten alumina is temperature dependent and the fraction of 5- and 6-fold aluminium increases during cooling (and supercooling), at the expense of tetrahedral AlO4 units, approaching the local structural arrangements found in amorphous alumina. Production Aluminium hydroxide minerals are the main component of bauxite, the principal ore of aluminium. A mixture of the minerals comprises bauxite ore, including gibbsite (Al(OH)3), boehmite (γ-AlO(OH)), and diaspore (α-AlO(OH)), along with impurities of iron oxides and hydroxides, quartz and clay minerals. Bauxites are found in laterites. Bauxite is typically purified using the Bayer process: Al2O3 + 3 H2O + 2 NaOH → 2 NaAl(OH)4 Al(OH)3 + NaOH → NaAl(OH)4 Except for SiO2, the other components of bauxite do not dissolve in base. Upon filtering the basic mixture, Fe2O3 is removed. When the Bayer liquor is cooled, Al(OH)3 precipitates, leaving the silicates in solution. NaAl(OH)4 → NaOH + Al(OH)3 The solid Al(OH)3 (gibbsite) is then calcined (heated to over 1100 °C) to give aluminium oxide: 2 Al(OH)3 → Al2O3 + 3 H2O The product aluminium oxide tends to be multi-phase, i.e., consisting of several phases of aluminium oxide rather than solely corundum. The production process can therefore be optimized to produce a tailored product. The type of phases present affects, for example, the solubility and pore structure of the aluminium oxide product which, in turn, affects the cost of aluminium production and pollution control. Sintering Process The Sintering Process is a high-temperature method primarily used when the Bayer Process is not suitable, especially for ores with high silica content or when a more controlled product morphology is required. First, bauxite is mixed with additives such as limestone and soda ash, and the mixture is then heated at high temperatures (1200 °C to 1500 °C) to form sodium aluminate and calcium silicate. After sintering, the material is leached with water to dissolve the sodium aluminate, leaving behind impurities. Sodium aluminate is then precipitated from the solution and calcined at around 1000 °C to produce alumina. This method is useful for the production of complex shapes and can be used to create porous or dense materials. Applications Known as alpha alumina in materials science, and as alundum (in fused form) or aloxite in mining and ceramic communities, aluminium oxide finds wide use.
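As a supplement to the Bayer-process chemistry above, the calcination stoichiometry 2 Al(OH)3 → Al2O3 + 3 H2O fixes how much alumina a given mass of gibbsite can yield. In this Python sketch the molar masses are standard values, while the one-tonne feed mass is an arbitrary assumption.

M_GIBBSITE = 78.00    # g/mol, Al(OH)3
M_ALUMINA = 101.96    # g/mol, Al2O3

feed_kg = 1000.0      # assumed one tonne of dry gibbsite
# Two moles of Al(OH)3 give one mole of Al2O3:
alumina_kg = feed_kg * M_ALUMINA / (2.0 * M_GIBBSITE)
print(round(alumina_kg, 1), "kg of Al2O3")   # about 654 kg, i.e. roughly 65% by mass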
Annual global production of aluminium oxide in 2015 was approximately 115 million tonnes, over 90% of which was used in the manufacture of aluminium metal. The major uses of speciality aluminium oxides are in refractories, ceramics, polishing and abrasive applications. Large tonnages of aluminium hydroxide, from which alumina is derived, are used in the manufacture of zeolites, coating titania pigments, and as a fire retardant/smoke suppressant. Over 90% of aluminium oxide, termed smelter grade alumina (SGA), is consumed for the production of aluminium, usually by the Hall–Héroult process. The remainder, termed specialty alumina, is used in a wide variety of applications which take advantage of its inertness, temperature resistance and electrical resistance. Fillers Being fairly chemically inert and white, aluminium oxide is a favored filler for plastics. Aluminium oxide is a common ingredient in sunscreen and is often also present in cosmetics such as blush, lipstick, and nail polish. Glass Many formulations of glass have aluminium oxide as an ingredient. Aluminosilicate glass is a commonly used type of glass that often contains 5% to 10% alumina. Catalysis Aluminium oxide catalyses a variety of reactions that are useful industrially. In its largest scale application, aluminium oxide is the catalyst in the Claus process for converting hydrogen sulfide waste gases into elemental sulfur in refineries. It is also useful for dehydration of alcohols to alkenes. Aluminium oxide serves as a catalyst support for many industrial catalysts, such as those used in hydrodesulfurization and some Ziegler–Natta polymerizations. Gas purification Aluminium oxide is widely used to remove water from gas streams. Abrasion Aluminium oxide is used for its hardness and strength. Its naturally occurring form, corundum, is a 9 on the Mohs scale of mineral hardness (just below diamond). It is widely used as an abrasive, including as a much less expensive substitute for industrial diamond. Many types of sandpaper use aluminium oxide crystals. In addition, its low heat retention and low specific heat make it widely used in grinding operations, particularly cutoff tools. As the powdery abrasive mineral aloxite, it is a major component, along with silica, of the cue tip "chalk" used in billiards. Aluminium oxide powder is used in some CD/DVD polishing and scratch-repair kits. Its polishing qualities are also behind its use in toothpaste. It is also used in microdermabrasion, both in the machine process available through dermatologists and estheticians, and as a manual dermal abrasive used according to manufacturer directions. Paint Aluminium oxide flakes are used in paint for reflective decorative effects, such as in the automotive or cosmetic industries. Biomedical applications Aluminium oxide is a representative of bioinert ceramics. Due to its excellent biocompatibility, high strength, and wear resistance, alumina ceramics are used in medical applications to manufacture artificial bones and joints. In this case, aluminium oxide is used to coat the surfaces of medical implants to give biocompatibility and corrosion resistance. It is also used for manufacturing dental implants, joint replacements, and other medical devices. Composite fiber Aluminium oxide has been used in a few experimental and commercial fiber materials for high-performance applications (e.g., Fiber FP, Nextel 610, Nextel 720). Alumina nanofibers in particular have become a research field of interest. 
Armor Some body armors utilize alumina ceramic plates, usually in combination with aramid or UHMWPE backing to achieve effectiveness against most rifle threats. Alumina ceramic armor is readily available to most civilians in jurisdictions where it is legal, but is not considered military grade. It is also used to produce bullet-proof alumina glass capable of withstanding the impact of .50 BMG calibre rounds. Abrasion protection Aluminium oxide can be grown as a coating on aluminium by anodizing or by plasma electrolytic oxidation (see the "Properties" above). Both the hardness and abrasion-resistant characteristics of the coating originate from the high strength of aluminium oxide, yet the porous coating layer produced with conventional direct current anodizing procedures is within a 60–70 Rockwell hardness C range which is comparable only to hardened carbon steel alloys, but considerably inferior to the hardness of natural and synthetic corundum. Instead, with plasma electrolytic oxidation, the coating is porous only on the surface oxide layer while the lower oxide layers are much more compact than with standard DC anodizing procedures and present a higher crystallinity due to the oxide layers being remelted and densified to obtain α-Al2O3 clusters with much higher coating hardness values circa 2000 Vickers hardness. Alumina is used to manufacture tiles which are attached inside pulverized fuel lines and flue gas ducting on coal-fired power stations to protect high wear areas. They are not suitable for areas with high impact forces as these tiles are brittle and susceptible to breakage. Electrical insulation Aluminium oxide is an electrical insulator used as a substrate (silicon on sapphire) for integrated circuits, but also as a tunnel barrier for the fabrication of superconducting devices such as single-electron transistors, superconducting quantum interference devices (SQUIDs) and superconducting qubits. For its application as an electrical insulator in integrated circuits, where the conformal growth of a thin film is a prerequisite and the preferred growth mode is atomic layer deposition, Al2O3 films can be prepared by the chemical exchange between trimethylaluminium (Al(CH3)3) and H2O: 2 Al(CH3)3 + 3 H2O → Al2O3 + 6 CH4 H2O in the above reaction can be replaced by ozone (O3) as the active oxidant and the following reaction then takes place: 2 Al(CH3)3 + O3 → Al2O3 + 3 C2H6 The Al2O3 films prepared using O3 show 10–100 times lower leakage current density compared with those prepared by H2O. Aluminium oxide, being a dielectric with relatively large band gap, is used as an insulating barrier in capacitors. Other In lighting, translucent aluminium oxide is used in some sodium vapor lamps. Aluminium oxide is also used in preparation of coating suspensions in compact fluorescent lamps. In chemistry laboratories, aluminium oxide is a medium for chromatography, available in basic (pH 9.5), acidic (pH 4.5 when in water) and neutral formulations. Additionally, small pieces of aluminium oxide are often used as boiling chips. Health and medical applications include it as a material in hip replacements and birth control pills. It is used as a scintillator and dosimeter for radiation protection and therapy applications for its optically stimulated luminescence properties. Insulation for high-temperature furnaces is often manufactured from aluminium oxide. Sometimes the insulation has varying percentages of silica depending on the temperature rating of the material.
The insulation can be made in blanket, board, brick and loose fiber forms for various application requirements. It is also used to make spark plug insulators. Using a plasma spray process and mixed with titania, it is coated onto the braking surface of some bicycle rims to provide abrasion and wear resistance. Most ceramic eyes on fishing rods are circular rings made from aluminium oxide. In its finest powdered (white) form, called Diamantine, aluminium oxide is used as a superior polishing abrasive in watchmaking and clockmaking. Aluminium oxide is also used in the coating of stanchions in the motocross and mountain bike industries. This coating is combined with molybdenum disulfide to provide long-term lubrication of the surface.
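A back-of-the-envelope sketch related to the atomic layer deposition reactions in the Electrical insulation section above: trimethylaluminium/water ALD typically grows on the order of 0.1 nm of Al2O3 per cycle, so a target film thickness translates directly into a cycle count. Both numbers in this Python sketch are assumed typical values rather than figures from the article.

growth_per_cycle_nm = 0.1     # assumed typical growth per TMA/H2O cycle
target_thickness_nm = 10.0    # assumed target barrier thickness

cycles = target_thickness_nm / growth_per_cycle_nm
print(int(round(cycles)), "ALD cycles for a", target_thickness_nm, "nm film")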
Physical sciences
Oxide salts
Chemistry
141915
https://en.wikipedia.org/wiki/Fentanyl
Fentanyl
Fentanyl is a highly potent synthetic piperidine opioid primarily used as an analgesic. It is 30 to 50 times more potent than heroin and 100 times more potent than morphine; its primary clinical utility is in pain management for cancer patients and those recovering from painful surgeries. Fentanyl is also used as a sedative. Depending on the method of delivery, fentanyl can be very fast acting and ingesting a relatively small quantity can cause overdose. Fentanyl works by activating μ-opioid receptors. Fentanyl is sold under the brand names Actiq, Duragesic, and Sublimaze, among others. Pharmaceutical fentanyl's adverse effects are identical to those of other opioids and narcotics, including addiction, confusion, respiratory depression (which, if extensive and untreated, may lead to respiratory arrest), drowsiness, nausea, visual disturbances, dyskinesia, hallucinations, delirium, a subset of the latter known as "narcotic delirium", narcotic ileus, muscle rigidity, constipation, loss of consciousness, hypotension, coma, and death. Alcohol and other drugs (e.g., cocaine and heroin) can synergistically exacerbate fentanyl's side effects. Naloxone (also known as Narcan) can reverse the effects of an opioid overdose, but because fentanyl is so potent, multiple doses might be necessary. Fentanyl was first synthesized by Paul Janssen in 1959 and was approved for medical use in the United States in 1968. In 2015, were used in healthcare globally. Fentanyl was the most widely used synthetic opioid in medicine; in 2019, it was the 278th most commonly prescribed medication in the United States, with more than a million prescriptions. It is on the World Health Organization's List of Essential Medicines. Fentanyl continues to fuel an epidemic of synthetic opioid drug overdose deaths in the United States. From 2011 to 2021, prescription opioid deaths per year remained stable, while synthetic opioid deaths per year increased from 2,600 overdoses to 70,601. Since 2018, fentanyl and its analogues have been responsible for most drug overdose deaths in the United States, causing over 71,238 deaths in 2021. Fentanyl constitutes the majority of all drug overdose deaths in the United States since it overtook heroin in 2018. The United States National Forensic Laboratory estimates fentanyl reports by federal, state, and local forensic laboratories increased from 4,697 reports in 2014 to 117,045 reports in 2020. Fentanyl is often mixed, cut, or ingested alongside other drugs, including cocaine and heroin. Fentanyl has been reported in pill form, including pills mimicking pharmaceutical drugs such as oxycodone. Mixing with other drugs or disguising as a pharmaceutical makes it difficult to determine the correct treatment in the case of an overdose, resulting in more deaths. In an attempt to reduce the number of overdoses from taking other drugs mixed with fentanyl, drug testing kits, strips, and labs are available. Fentanyl's ease of manufacture and high potency make it easier to produce and smuggle, resulting in fentanyl replacing other abused narcotics and becoming more widely used. Medical uses Anesthesia Intravenous fentanyl is often used for anesthesia and as an analgesic. To induce anesthesia, it is given with a sedative-hypnotic, like propofol or thiopental, and a euphoriant. To maintain anesthesia, inhaled anesthetics and additional fentanyl may be used. These are often given in 15–30 minute intervals throughout procedures such as endoscopy and surgeries and in emergency rooms.
For pain relief after surgery, its use can decrease the amount of inhalational anesthetic needed for emergence from anesthesia. Balancing this medication and titrating the drug based on expected stimuli and the person's responses can result in stable blood pressure and heart rate throughout a procedure and a faster emergence from anesthesia with minimal pain. Regional anesthesia Fentanyl is the most commonly used intrathecal opioid because its lipophilic profile allows a quick onset of action (5–10 min) and intermediate duration of action (60–120 min). Spinal administration of hyperbaric bupivacaine with fentanyl may be the optimal combination. The almost immediate onset of fentanyl reduces visceral discomfort and even nausea during the procedure. Obstetrics Fentanyl is sometimes given intrathecally as part of spinal anesthesia or epidurally for epidural anaesthesia and analgesia. Because of fentanyl's high lipid solubility, its effects are more localized than morphine, and some clinicians prefer to use morphine to get a wider spread of analgesia. It is widely used in obstetrical anesthesia because of its short time to action peak (about 5 minutes), the rapid termination of its effect after a single dose, and the occurrence of relative cardiovascular stability. In obstetrics, the dose must be closely regulated to prevent large amounts of transfer from mother to fetus. At high doses, the drug may act on the fetus to cause postnatal respiratory depression. For this reason, shorter-acting agents such as alfentanil or remifentanil may be more suitable in the context of inducing general anaesthesia. Pain management The bioavailability of intranasal fentanyl is about 70–90% but with some imprecision due to clotted nostrils, pharyngeal swallow, and incorrect administration. For both emergency and palliative use, intranasal fentanyl is available in doses of 50, 100, 200, and 400 μg (PecFent). In emergency medicine, safe administration of intranasal fentanyl with a low rate of side effects and a promising pain-reducing effect was demonstrated in a prospective observational study in about 900 out-of-hospital patients. In children, intranasal fentanyl is useful for the treatment of moderate and severe pain and is well tolerated. Furthermore, a 2017 study suggested the efficacy of fentanyl lozenges in children as young as five, weighing as little as 13 kg. Lozenges are more inclined to be used as the child is in control of sufficient dosage, in contrast to buccal tablets. Chronic pain It is also used in the management of chronic pain. Often, transdermal patches are used. The patches work by slowly releasing fentanyl through the skin into the bloodstream over 48 to 72 hours, allowing for long-lasting pain management. Dosage is based on the size of the patch, since, in general, the transdermal absorption rate is constant at a constant skin temperature. Each patch should be changed every 72 hours. Rate of absorption is dependent on a number of factors. Body temperature, skin type, amount of body fat, and placement of the patch can have major effects. The different delivery systems used by different makers will also affect individual rates of absorption, and route of administration. Under normal circumstances, the patch will reach its full effect within 12 to 24 hours; thus, fentanyl patches are often prescribed with a fast-acting opioid (such as morphine or oxycodone) to handle breakthrough pain. It is unclear if fentanyl gives long-term pain relief to people with neuropathic pain.
Breakthrough pain
Sublingual fentanyl dissolves quickly and is absorbed through the sublingual mucosa to provide rapid analgesia. Fentanyl is a highly lipophilic compound, which is well absorbed sublingually and generally well tolerated. Such forms are particularly useful for breakthrough cancer pain episodes, which are often rapid in onset, short in duration, and severe in intensity.
Palliative care
In palliative care, transdermal fentanyl patches have a definitive but limited role for: people already stabilized on other opioids who have persistent swallowing problems and cannot tolerate other parenteral routes such as subcutaneous administration; people with moderate to severe kidney failure; and people with troublesome side effects from oral morphine, hydromorphone, or oxycodone. When using the transdermal patch, patients must be careful to minimize or avoid external heat sources (direct sunlight, heating pads, etc.), which can trigger the release and absorption of too much medication and cause potentially deadly complications.
Combat medicine
USAF Pararescue combat medics in Afghanistan used fentanyl lozenges in the form of lollipops on combat casualties from IED blasts and other trauma. The stick is taped to a finger and the lozenge put in the cheek of the person. When enough fentanyl has been absorbed, the (sedated) person generally lets the lollipop fall from the mouth, indicating sufficient analgesia and somewhat reducing the likelihood of overdose and associated risks.
Breathing difficulties
Fentanyl is used to help relieve shortness of breath (dyspnea) in patients who cannot tolerate morphine or whose breathlessness is refractory to morphine. Fentanyl is useful for such treatment in palliative care settings where pain and shortness of breath are severe and need to be treated with strong opioids. Nebulized fentanyl citrate is used to relieve end-of-life dyspnea in hospice settings.
Other
Some routes of administration, such as nasal sprays and inhalers, generally result in a faster onset of high blood levels, which can provide more immediate analgesia but also more severe side effects, especially in overdose. The much higher cost of some of these appliances may not be justified by their marginal benefit compared with buccal or oral options. Intranasal fentanyl appears to be as effective as IV morphine and superior to intramuscular morphine for the management of acute hospital pain. A fentanyl patient-controlled transdermal system (PCTS) is under development, which aims to allow patients to control the administration of fentanyl through the skin to treat postoperative pain. The technology consists of a "preprogrammed, self-contained drug-delivery system" that uses electrotransport technology to administer on-demand doses of 40 μg of fentanyl hydrochloride over ten minutes. In a 2004 study of 189 patients with moderate to severe postoperative pain up to 24 hours after major surgery, 25% of patients withdrew due to inadequate analgesia. However, the PCTS method proved superior to placebo, showing lower mean VAS pain scores and no significant respiratory depression.
Adverse effects
Fentanyl's most common side effects, which affect more than 10% of people, include nausea, vomiting, constipation, dry mouth, somnolence, confusion, and asthenia (weakness).
Less frequently, in 3–10% of people, fentanyl can cause abdominal pain, headache, fatigue, anorexia and weight loss, dizziness, nervousness, anxiety, depression, flu-like symptoms, dyspepsia (indigestion), shortness of breath, hypoventilation, apnoea, and urinary retention. Fentanyl use has also been associated with aphasia. Despite being a more potent analgesic, fentanyl tends to induce less nausea, as well as less histamine-mediated itching, than morphine. In rare cases, serotonin syndrome is associated with fentanyl use. Existing studies advise medical practitioners to exercise caution when combining selective serotonin reuptake inhibitor (SSRI) drugs with fentanyl. The duration of action of fentanyl has sometimes been underestimated, leading to harm in a medical context. In 2006, the United States Food and Drug Administration (FDA) began investigating several respiratory deaths, but doctors in the United Kingdom were not warned of the risks with fentanyl until September 2008. The FDA reported in April 2012 that twelve young children had died and twelve more had become seriously ill from separate accidental exposures to fentanyl skin patches.
Respiratory depression
The most dangerous adverse effect of fentanyl is respiratory depression: decreased sensitivity to carbon dioxide leading to a reduced rate of breathing, which can cause anoxic brain injury or death. This risk is decreased when the airway is secured with an endotracheal tube (as during anesthesia). The risk is higher in specific groups, such as those with obstructive sleep apnea. Other factors that increase the risk of respiratory depression are:
High fentanyl doses
Simultaneous use of methadone
Sleep
Older age
Simultaneous use of CNS depressants such as benzodiazepines (e.g., alprazolam, diazepam, clonazepam), barbiturates, alcohol, and inhaled anesthetics
Hyperventilation
Decreased CO2 levels in the serum
Respiratory acidosis
Decreased fentanyl clearance from the body
Decreased blood flow to the liver
Renal insufficiency
Sustained-release fentanyl preparations, such as patches, may also produce unexpected delayed respiratory depression. The precise reason for sudden respiratory depression is unclear, but there are several hypotheses:
Saturation of the body fat compartment in people with rapid and profound body fat loss (people with cancer, cardiac or infection-induced cachexia can lose 80% of their body fat).
Early carbon dioxide retention causing cutaneous vasodilation (releasing more fentanyl), together with acidosis, which reduces the protein binding of fentanyl, releasing yet more fentanyl.
Reduced sedation, losing a useful early warning sign of opioid toxicity and resulting in levels closer to respiratory-depressant levels.
Another related complication of fentanyl overdose is the so-called wooden chest syndrome, which quickly induces complete respiratory failure by paralyzing the thoracic muscles; it is explained in more detail in the Muscle rigidity section below.
Heart and blood vessels
Bradycardia: Fentanyl decreases the heart rate by increasing vagal nerve tone in the brainstem, which increases the parasympathetic drive. Vasodilation: It also dilates arterial and venous blood vessels through a central mechanism, primarily by slowing vasomotor centers in the brainstem. To a lesser extent, it does this by directly affecting blood vessels. This effect is much more profound in patients who already have an increased sympathetic drive, such as those with high blood pressure or congestive heart failure.
It does not affect the contractility of the heart when regular doses are administered.
Muscle rigidity
If large boluses of fentanyl are administered quickly, muscle rigidity of the vocal cords can make bag-mask ventilation very difficult. The exact mechanism of this effect is unknown, but it can be prevented and treated using neuromuscular blockers.
Wooden chest syndrome
A prominent idiosyncratic adverse effect of fentanyl is a sudden onset of rigidity of the abdominal muscles and the diaphragm, which induces respiratory failure; this is seen with high doses and is known as wooden chest syndrome. The syndrome is believed to be the main cause of death from fentanyl overdoses. Wooden chest syndrome is reversed by naloxone and is believed to be caused by a release of noradrenaline, which activates α-adrenergic receptors, and possibly also by activation of cholinergic receptors. Wooden chest syndrome is unique to the most powerful opioids, which today comprise fentanyl and its analogs, while less powerful opioids like heroin produce rigidity of the respiratory muscles to a much lesser degree.
"Fentanyl fold" posture
There are many reports of fentanyl users adopting a "folded" posture. Daniel Ciccarone of UCSF said what he calls the "nod" is a common side effect of opioid use, and later notes that "nods have always happened to varying degrees with other opioids, particularly heroin. The nods with fentanyl, however, seem to be more extreme. And it's often a sign that a person has taken too strong a dose". He also said "the fentanyl fold falls into the umbrella of a severe spinal deformity that can cause functional disability and can drive mental anguish", a factor compounded by the socioeconomic status and often more fragile mental health of drug users compared with non-users.
Overdose
Fentanyl poses an exceptionally high overdose risk in humans since the amount required to cause toxicity is unpredictable. In its pharmaceutical form, most overdose deaths attributed solely to fentanyl occur at serum concentrations with a mean of 0.025 μg/mL and a range of 0.005–0.027 μg/mL. In contexts of poly-substance use, blood fentanyl concentrations of approximately 7 ng/mL or greater have been associated with fatalities. Over 85% of overdoses involved at least one other drug, and there was no clear correlation showing at which level the mixtures were fatal. The dosages of fatal mixtures varied by over three orders of magnitude in some cases. This extreme unpredictability in combination with other drugs makes it especially difficult to avoid fatalities. Naloxone (sold under the brand name Narcan) can completely or partially reverse an opioid overdose. In July 2014, the Medicines and Healthcare products Regulatory Agency (MHRA) of the UK issued a warning about the potential for life-threatening harm from accidental exposure to transdermal fentanyl patches, particularly in children, and advised that they should be folded, with the adhesive side in, before being discarded. The patches should be kept away from children, who are most at risk from fentanyl overdose. In the US, fentanyl and fentanyl analogs caused over 29,000 deaths in 2017, a large increase over the previous four years. Some increases in fentanyl deaths do not involve prescription fentanyl but are related to illicitly made fentanyl being mixed with or sold as heroin. Death from fentanyl overdose has been a public health issue of national concern in Canada since September 2015.
In 2016, deaths from fentanyl overdoses in the province of British Columbia averaged two persons per day. In 2017, the death rate increased by more than 100%, with 368 overdose-related deaths in British Columbia between January and April 2017. Fentanyl has made its way into heroin as well as illicitly manufactured opioids and benzodiazepines. Fentanyl contamination of cocaine, methamphetamine, ketamine, MDMA, and other drugs is common. A kilogram of heroin laced with fentanyl may sell for more than US$100,000, but the fentanyl itself may be produced far more cheaply, for about US$6,000 per kilogram. While Mexico and China are the primary source countries for fentanyl and fentanyl-related substances trafficked directly into the United States, India is emerging as a source for finished fentanyl powder and fentanyl precursor chemicals. The United Kingdom illicit drug market is no longer reliant on China, as domestic fentanyl production is replacing imports. The intravenous dose causing 50% of opioid-naive experimental subjects to die (the LD50) is "3 mg/kg in rats, 1 mg/kg in cats, 14 mg/kg in dogs, and 0.03 mg/kg in monkeys." The LD50 in mice has been given as 6.9 mg/kg by intravenous administration, 17.5 mg/kg intraperitoneally, and 27.8 mg/kg by oral administration. The LD50 in humans is unknown. In 2023, overdose deaths in the U.S. and Canada again reached record numbers. While overdose deaths involving fentanyl in the United States decreased in 2024, the overall percentage of overdoses involving fentanyl remained stable, between 70% and 80%, from 2021 to 2024. According to a 2023 report from the United Nations Office on Drugs and Crime (UNODC), the increased numbers of deaths are not related to an increased number of users but to the lethal effects of fentanyl itself; the report argues that fentanyl warrants a special status, as it is considerably more toxic than other widely abused opioids and opiates. Overdose deaths in pediatric cases are also concerning. In a report published in JAMA Pediatrics, 37.5% of all fatal pediatric cases between 1999 and 2021 were related to fentanyl; most of the deaths were among adolescents (89.6%) and children aged 0 to 4 years (6.6%). According to the UNODC, "the opioid crisis in North America is unabated, fueled by an unprecedented number of overdose deaths."
False reports by police of poisonings through secondary exposure
In the late 2010s, some media outlets began to report stories of police officers being hospitalized after touching powdered fentanyl or after brushing it from their clothing. Topical (transdermal, via the skin) and inhalational exposure to fentanyl is extremely unlikely to cause intoxication or overdose (except in cases of prolonged exposure to very large quantities of fentanyl), and first responders such as paramedics and police officers are at minimal risk of fentanyl poisoning through accidental contact with intact skin. A 2020 article from the Journal of Medical Toxicology stated that "the consensus of the scientific community remains that illness from unintentional exposures is extremely unlikely, because opioids are not efficiently absorbed through the skin and are unlikely to be carried in the air." The American College of Medical Toxicology and the American Academy of Clinical Toxicology issued a joint report in 2017 asserting that the risk of fentanyl overdose via incidental transdermal exposure is very low, and that it would take 200 minutes of breathing fentanyl at the highest airborne concentrations to yield a therapeutic dose, but not a potentially fatal one.
The effects being reported in these cases, including rapid heartbeat, hyperventilation, and chills, were not symptoms of a fentanyl overdose and were more commonly associated with a panic attack. A 2021 paper expressed concern that these physical fears over fentanyl may inhibit effective emergency response to overdoses by causing responding officers to spend additional time on unnecessary precautions, and that the media coverage could also perpetuate a wider social stigma that people who use drugs are dangerous to be around. A 2020 survey of first responders in New York found that 80% believed "briefly touching fentanyl could be deadly." Many experts in toxicology are skeptical of police truly overdosing through mere touch. "This has never happened," said Dr. Ryan Marino, an emergency and addiction medicine physician at Case Western Reserve University. "There has never been an overdose through skin contact or accidentally inhaling fentanyl."
Prevention
Public health advisories to prevent fentanyl misuse and fatal overdose have been issued by the U.S. Centers for Disease Control and Prevention (CDC). An initial HAN (Health Alert Network) Advisory, which "provides vital, time-sensitive information for a specific incident or situation; warrants immediate action or attention by health officials, laboratorians, clinicians, and members of the public; and conveys the highest level of importance", was issued during October 2015. A subsequent HAN Alert was issued in July 2018, warning of rising numbers of deaths due to fentanyl abuse and mixing with non-opioids. A December 2020 HAN Advisory warned of: substantial increases in drug overdose deaths across the United States, primarily driven by rapid increases in overdose deaths involving ... illicitly manufactured fentanyl; a concerning acceleration of the increase in drug overdose deaths, with the largest increase recorded from March 2020 to May 2020, coinciding with the implementation of widespread mitigation measures for the COVID-19 pandemic; and significant increases in overdose deaths involving methamphetamine. 81,230 drug overdose deaths occurred during the 12 months from May 2019 to May 2020, the largest number of drug overdoses for a 12-month interval ever recorded for the U.S. The CDC recommended the following four actions to counter this rise:
Expand the distribution and use of naloxone and overdose prevention education;
Expand awareness of, access to, and availability of treatment for substance use disorders;
Intervene early with individuals at the highest risk for overdose; and
Improve detection of overdose outbreaks to facilitate a more effective response.
Another initiative is a social media campaign from the United States Drug Enforcement Administration (DEA) called "One Pill Can Kill". The campaign's goal is to spread awareness of the prevalence of counterfeit pills being sold in America, which are driving the large overdose epidemic there. The campaign also shows the difference between counterfeit pills and real pills.
Pharmacology
Classification
Fentanyl is a synthetic opioid in the phenylpiperidine family, which includes sufentanil, alfentanil, remifentanil, and carfentanil. Some fentanyl analogues, such as carfentanil, are up to 10,000 times stronger than morphine.
Structure-activity
The structures of opioids share many similarities.
Whereas opioids like codeine, hydrocodone, oxycodone, and hydromorphone are synthesized by simple modifications of morphine, fentanyl and its relatives are synthesized by modifications of meperidine. Meperidine is a fully synthetic opioid, and other members of the phenylpiperidine family, like alfentanil and sufentanil, are complex versions of this structure. Like other opioids, fentanyl is a weak base that is highly lipid-soluble, protein-bound, and protonated at physiological pH. All of these factors allow it to rapidly cross cellular membranes, contributing to its quick effect in the body and the central nervous system.
Fentanyl analogs
Fentanyl analogs are types of fentanyl with various chemical modifications at any number of positions on the molecule that still maintain, or even exceed, its pharmacological effects. Many fentanyl analogs are termed "designer drugs" because they are synthesized solely to be used illicitly. Carfentanil, a fentanyl analog, has an additional methoxycarbonyl (methyl ester) group attached to the 4 position. Carfentanil is roughly 100 times as potent as fentanyl and is common in the illicit drug supply. The drug is commonly used to tranquilize elephants and other large animals.
Mechanism of action
Fentanyl, like other opioids, acts on opioid receptors. These receptors are G-protein-coupled receptors, which contain seven transmembrane portions, intracellular loops, extracellular loops, an intracellular C-terminus, and an extracellular N-terminus. The extracellular N-terminus is important in differentiating different types of binding substrates. When fentanyl binds, downstream signaling leads to inhibitory effects, such as decreased cAMP production, decreased calcium ion influx, and increased potassium efflux. This inhibits the ascending pathways in the central nervous system and increases the pain threshold by changing the perception of pain; this is mediated by decreased propagation of nociceptive signals, resulting in analgesic effects. As a μ-receptor agonist, fentanyl binds 50 to 100 times more potently than morphine. It can also bind to the delta and kappa opioid receptors, but with a lower affinity. It has high lipid solubility, allowing it to more easily penetrate the central nervous system. It attenuates "second pain", with primary effects on slow-conducting, unmyelinated C-fibers, and is less effective against neuropathic pain and "first pain" signals carried by small, myelinated A-fibers. Through μ-receptor agonism, fentanyl can strongly produce the following clinical effects:
Supraspinal analgesia (μ1)
Respiratory depression (μ2)
Physical dependence
Muscle rigidity
It also produces sedation and spinal analgesia through κ-receptor agonism.
Therapeutic effects
Pain relief: Primarily, fentanyl provides relief of pain by acting on the brain and spinal μ-receptors.
Sedation: Fentanyl produces sleep and drowsiness as the dosage is increased and can produce the δ-waves often seen in natural sleep on an electroencephalogram.
Suppression of the cough reflex: Fentanyl can decrease the struggle against an endotracheal tube and excessive coughing by decreasing the cough reflex, which is useful when intubating people who are awake and have compromised airways. After receiving a bolus dose of fentanyl, people can also experience paradoxical coughing, a phenomenon that is not well understood.
Detection in biological fluids
Fentanyl may be measured in blood or urine to monitor for abuse, confirm a diagnosis of poisoning, or assist in a medicolegal death investigation.
Commercially available immunoassays are often used as initial screening tests, but chromatographic techniques are generally used for confirmation and quantitation. The Marquis colour test may also be used to detect the presence of fentanyl; using formaldehyde and sulfuric acid, the solution turns purple when introduced to opium drugs. Blood or plasma fentanyl concentrations are expected to be in a range of 0.3–3.0 μg/L in persons using the medication therapeutically, 1–10 μg/L in intoxicated people, and 3–300 μg/L in victims of acute overdosage. Paper spray mass spectrometry (PS-MS) may be useful for initial testing of samples.
Detection for harm reduction purposes
Fentanyl and fentanyl analogues can be qualitatively detected in drug samples using commercially available fentanyl testing strips or spot reagents. Following the principles of harm reduction, these tests are used directly on drug samples rather than on urine. To prepare a sample for testing, approximately 10 mg of the drug, about the size of the hair on Abraham Lincoln's head on a penny, should be diluted into 1 teaspoon, or 5 mL, of water. Research in Dr. Lieberman's lab at the University of Notre Dame has reported false positive results on BTNX fentanyl testing strips with methamphetamine, MDMA, and diphenhydramine. The sensitivity and specificity of fentanyl test strips vary depending on the concentration of fentanyl tested, particularly in the range of 10 to 250 ng/mL.
Synthesis
Fentanyl is a synthetic opioid of the 4-anilidopiperidine class. The synthesis of fentanyl is accomplished by one of four main methods reported in the scientific literature: the Janssen, Siegfried, Gupta, or Suh method.
Janssen
The original synthesis, patented in 1964 by Paul Janssen, involves the synthesis of benzylfentanyl from N-benzyl-4-piperidone. The resulting benzylfentanyl is used as feedstock for norfentanyl, and norfentanyl forms fentanyl upon reaction with phenethyl chloride.
Siegfried
The Siegfried method involves the initial synthesis of N-phenethyl-4-piperidone (NPP). This intermediate is reductively aminated to 4-anilino-N-phenethylpiperidine (4-ANPP). Fentanyl is produced by the reaction of 4-ANPP with an acyl chloride. The Siegfried method was used in the early 2000s to manufacture fentanyl in both domestic and foreign clandestine laboratories.
Gupta
The Gupta (or 'one-pot') method starts from 4-piperidone and skips the direct use of 4-ANPP and NPP; rather, these compounds are formed only as impurities or transient intermediates. For the first half of 2021, the U.S. Drug Enforcement Administration found the Gupta method to be the predominant synthesis route in its samples of seized fentanyl. In 2022, Braga and coworkers described a continuous-flow synthesis of fentanyl that uses reagents similar to the ones described for the Gupta procedure.
Suh
The Suh (or 'total synthesis') method skips the direct use of piperidine precursors in favor of creating the ring system in situ.
History
Fentanyl was first synthesized in Belgium by Paul Janssen under the label of his relatively newly formed Janssen Pharmaceutica in 1959. It was developed by screening chemicals similar to pethidine (meperidine) for opioid activity. The widespread use of fentanyl triggered the production of fentanyl citrate (the salt formed by combining fentanyl and citric acid in a 1:1 stoichiometric ratio). Fentanyl citrate entered medical use as a general anaesthetic in 1968, manufactured by McNeil Laboratories under the brand name Sublimaze.
In the mid-1990s, Janssen Pharmaceutica developed and introduced into clinical trials the Duragesic patch, a formulation of an inert alcohol gel infused with select fentanyl doses, which is worn to provide constant administration of the opioid over 48 to 72 hours. After a set of successful clinical trials, Duragesic fentanyl patches were introduced into medical practice. Following the patch, a flavored lollipop of fentanyl citrate mixed with inert fillers was introduced in 1998 under the brand name Actiq, becoming the first quick-acting formulation of fentanyl for use with chronic breakthrough pain. In 2009, the US Food and Drug Administration (FDA) approved Onsolis (fentanyl buccal soluble film), a fentanyl drug in a new dosage form for cancer pain management in opioid-tolerant subjects. It uses a medication delivery technology called BEMA (BioErodible MucoAdhesive), a small dissolvable polymer film containing various fentanyl doses that is applied to the inner lining of the cheek. Fentanyl has a US Drug Enforcement Administration (DEA) Administrative Controlled Substances Code Number (ACSCN) of 9801. Its annual aggregate manufacturing quota has been significantly reduced in recent years, from 2,300,000 g in 2015 and 2016 to only 731,452 g in 2021, a nearly 68.2% decrease.
Society and culture
Legal status
In the UK, fentanyl is classified as a controlled Class A drug under the Misuse of Drugs Act 1971. In the Netherlands, fentanyl is a List I substance of the Opium Law. In the U.S., fentanyl is a Schedule II controlled substance under the Controlled Substances Act. Distributors of Abstral are required to implement an FDA-approved risk evaluation and mitigation strategy (REMS) program. In order to curb misuse, many health insurers have begun to require precertification and/or quantity limits for Actiq prescriptions. In Canada, fentanyl is a Schedule I drug as listed in Canada's Controlled Drugs and Substances Act. Estonia is known to have been home to the world's longest documented fentanyl epidemic, which took hold especially following the Taliban ban on opium poppy cultivation in Afghanistan. A 2018 report by The Guardian indicated that many major drug suppliers on the dark web have voluntarily banned the trafficking of fentanyl. The fentanyl epidemic has erupted into a highly acrimonious dispute between the U.S. and Mexican governments. While U.S. officials blame the flood of fentanyl crossing the border primarily on Mexican crime groups, then-President Andrés Manuel López Obrador insisted that the main source of this synthetic drug is Asia, and stated that a crisis of family values in the United States drives people to use the drug.
Recreational use
Illicit use of pharmaceutical fentanyl and its analogues first appeared in the mid-1970s in the medical community and continues to the present. More than 12 different analogues of fentanyl, all unapproved and clandestinely produced, have been identified in the U.S. drug supply. In February 2018, the U.S. Drug Enforcement Administration indicated that illicit fentanyl analogs have no medically valid use, and thus applied a "Schedule I" classification to them. Fentanyl analogues may be hundreds of times more potent than heroin. Fentanyl is used orally, smoked, snorted, or injected. Fentanyl is sometimes sold as heroin or oxycodone, which can lead to overdose. Many fentanyl overdoses are initially classified as heroin overdoses.
Recreational use is not particularly widespread in the EU, with the exception of Tallinn, Estonia, where it has largely replaced heroin. Estonia has the highest rate of 3-methylfentanyl overdose deaths in the EU, due to its high rate of recreational use. Fentanyl is sometimes sold on the black market in the form of transdermal fentanyl patches such as Duragesic, diverted from legitimate medical supplies. The gel from inside the patches is sometimes ingested or injected. Another form of fentanyl that has appeared on the streets is the Actiq lollipop formulation. The pharmacy retail price ranges from US$15 to US$50 per unit based on the strength of the lozenge, with the black market cost ranging from US$5 to US$25, depending on the dose. The attorneys general of Connecticut and Pennsylvania have launched investigations into its diversion from the legitimate pharmaceutical market, including Cephalon's "sales and promotional practices for Provigil, Actiq and Gabitril." Non-medical use of fentanyl by individuals without opioid tolerance can be very dangerous and has resulted in numerous deaths. Even those with opiate tolerance are at high risk for overdose. Like all opioids, the effects of fentanyl can be reversed with naloxone or other opiate antagonists, and naloxone is increasingly available to the public. Overdoses involving long-acting or sustained-release opioids may require repeated doses of naloxone. Illicitly synthesized fentanyl powder has also appeared on the United States market. Because of the extremely high strength of pure fentanyl powder, it is very difficult to dilute appropriately, and often the resulting mixture may be far too strong and, therefore, very dangerous. Some heroin dealers mix fentanyl powder with heroin to increase potency or compensate for low-quality heroin. In 2006, illegally manufactured, non-pharmaceutical fentanyl, often mixed with cocaine or heroin, caused an outbreak of overdose deaths in the United States and Canada, heavily concentrated in the cities of Dayton, Ohio; Chicago, Illinois; Detroit, Michigan; and Philadelphia, Pennsylvania.
Enforcement
Large quantities of illicitly produced fentanyl have been seized by U.S. law enforcement agencies on several occasions. In November 2016, the DEA uncovered an operation making counterfeit oxycodone and Xanax from a home in Cottonwood Heights, Utah. They found about 70,000 pills in the appearance of oxycodone and more than 25,000 in the appearance of Xanax. The DEA reported that millions of pills could have been distributed from this location over the course of time. The accused owned a tablet press and ordered fentanyl in powder form from China. A record seizure of fentanyl was made on 2 February 2019 by U.S. Customs and Border Protection in Nogales, Arizona. The fentanyl, which was estimated to be worth US$3.5 million, was concealed in a compartment under a false floor of a truck transporting cucumbers. The "China White" form of fentanyl refers to any of a number of clandestinely produced analogues, especially α-methylfentanyl (AMF). One US Department of Justice publication lists "China White" as a synonym for a number of fentanyl analogues, including 3-methylfentanyl and α-methylfentanyl, which today are classified as Schedule I drugs in the United States. Part of the motivation for AMF is that, despite the extra difficulty from a synthetic standpoint, the resultant drug is more resistant to metabolic degradation, giving it a longer duration of action.
In June 2013, the United States Centers for Disease Control and Prevention (CDC) issued a health advisory to emergency departments alerting them to 14 overdose deaths among intravenous drug users in Rhode Island associated with acetylfentanyl, a synthetic opioid analog of fentanyl that has never been licensed for medical use. In a separate study conducted by the CDC, 82% of fentanyl overdose deaths involved illegally manufactured fentanyl, while only 4% were suspected to originate from a prescription. Beginning in 2015, Canada has seen a large number of fentanyl overdoses. Authorities suspected that the drug was being imported from Asia to the western coast by organized crime groups in powder form and pressed into pseudo-OxyContin tablets. Traces of the drug have also been found in other recreational drugs, including cocaine, MDMA, and heroin. The drug has been implicated in the deaths of people from all walks of life—from homeless individuals to professionals—including teens and young parents. Because of the rising deaths across the country, especially in British Columbia, where 1,716 deaths were reported in 2020 and 1,782 from January to October 2021, Health Canada is putting a rush on a review of the prescription-only status of naloxone in an effort to combat overdoses of the drug. In 2018, Global News reported allegations that diplomatic tensions between Canada and China hindered cooperation to seize imports, with Beijing being accused of inaction. Fentanyl was discovered for sale in illicit markets in Australia in 2017 and in New Zealand in 2018. In response, New Zealand experts called for wider availability of naloxone. In May 2019, China regulated the entire class of fentanyl-type drugs and two fentanyl precursors. Nevertheless, it remains the principal origin of fentanyl in the United States: Mexican cartels source fentanyl precursors from Chinese suppliers such as Yuancheng Group, which are finished in Mexico and smuggled to the United States. Following the 2022 visit by Nancy Pelosi to Taiwan, China halted cooperation with the United States on combatting drug trafficking. India has also emerged as a source of fentanyl and fentanyl precursors, as Mexican cartels have already developed networks there for the import of synthetic drugs. It is possible that fentanyl and precursor production may disperse to other countries, such as Nigeria, South Africa, Indonesia, Myanmar, and the Netherlands. In 2020, the Myanmar military and police confiscated 990 gallons of "methyl fentanyl", as well as precursors for the illicit synthesis of the drug. According to the United Nations Office on Drugs and Crime, the Shan State of Myanmar has been identified as a major source of fentanyl derivatives. In 2021, the agency reported a further drop in opium poppy cultivation in Burma, as the region's synthetic drug market continues to expand and diversify. In 2023, a California police union director was charged with importing synthetic opioids, including fentanyl and tapentadol disguised as chocolate. According to the Washington Post, U.S. law enforcement had been slow in its response to the fentanyl crisis, and the federal government's response had also faltered. Overdose deaths from fentanyl and other illegally imported opioids have been surging since 2019 and are presently a major cause of death in all U.S. states. According to the national archives and the DEA, direct fentanyl shipments from China have stopped since 2022.
The majority of illicit fentanyl and analogues now entering the U.S. from Mexico are final products, in the form of tablets or fentanyl-adulterated heroin made from previously synthesized fentanyl. Given the sophistication required for full fentanyl synthesis and the drug's acute toxicity in laboratory environments, 'clandestine' labs in Mexico are generally engaged in making illicit dosage forms from available fentanyl rather than in the synthesis itself. Based on further research by investigators, fentanyl and its analogues are likely synthesized in labs that have the appearance of a legal entity, or are diverted from pharmaceutical laboratories. Recent investigations and convictions of members of the Sinaloa drug cartel by federal agencies made a clear connection between illegal arms trafficking from the U.S. to Mexico and the smuggling of fentanyl into the U.S. Mexico has repeatedly made official complaints that illegal guns are easily purchased—for example in Arizona and as far north as Wisconsin and even Alaska, according to U.S. intelligence sources—and transported onto Mexican territory through a chain of American brokers and couriers often financed by the same drug cartels that also engage in money laundering. Therefore, the lack of arms controls in the U.S. has directly contributed to the U.S. opioid overdose crisis.
Brand names
Brand names include Sublimaze, Actiq, Durogesic, Duragesic, Fentora, Matrifen, Haldid, Onsolis, Instanyl, Abstral, Lazanda and others.
Economics
In the United States, the 800 mcg tablet was, as of 2020, 6.75 times more expensive than the lozenge. As of 2023, the average cost for an injectable fentanyl solution (50 mcg/mL) is around US$17 for a supply of 20 milliliters, depending on the pharmacy. In a 2020 report by the Australian Institute of Criminology, a 100-microgram transdermal patch was valued at between AU$75 and AU$450 on illicit markets. Furthermore, in another 2020 study, the average price per gram of non-pharmaceutical fentanyl on various cryptomarkets was US$1,470.40 for offerings of less than five grams; the average for offers over five grams was US$139.50. In addition, on DreamMarket, furanylfentanyl (Fu-F), the most common analog on that market, averaged US$243.10 per gram for retail listings and US$26.50 per gram for wholesale listings.
Storage and disposal
The fentanyl patch is one of a small number of medications that may be especially harmful, and in some cases fatal, with just one dose, if misused by a child. Experts have advised that any unused fentanyl patches be kept in a secure location out of children's sight and reach, such as a locked cabinet. In British Columbia, Canada, where there are environmental concerns about toilet flushing or garbage disposal, pharmacists recommend that unused patches be sealed in a child-proof container that is then returned to a pharmacy. In the United States, where patches cannot always be returned through a medication take-back program, flushing is recommended for fentanyl patches because it is the fastest and surest way to remove them from the home and prevent ingestion by children, pets, or others not intended to use them.
Notable deaths
On 25 September 2003, American professional wrestler Anthony Durante, also known as "Pitbull #2", died from a fentanyl-induced overdose. On 24 May 2009, Wilco guitarist Jay Bennett died from an accidental overdose of fentanyl. On 24 May 2010, Slipknot bassist Paul Gray died from an overdose of morphine and fentanyl.
On 21 April 2016, musician Prince died, and medical examiners concluded he had accidentally overdosed on fentanyl. Fentanyl was among many substances identified in counterfeit pills recovered from his home, including some that were mislabeled as Watson 385, a combination of hydrocodone and paracetamol. On 21 April 2016, American author and journalist Michelle McNamara died from an accidental overdose; medical examiners determined fentanyl was a contributing factor. On 11 November 2016, Canadian video game composer Saki Kaskas died of a fentanyl overdose; he had been battling heroin addiction for over a decade. On 15 November 2017, American rapper Lil Peep died of an accidental fentanyl overdose. On 19 January 2018, the Los Angeles County Department of Medical Examiner said musician Tom Petty died from an accidental drug overdose as a result of mixing medications that included fentanyl, acetyl fentanyl, and despropionyl fentanyl (among others). He was reportedly treating "many serious ailments" that included a broken hip. On 7 September 2018, American rapper Mac Miller died from an accidental overdose of fentanyl, cocaine, and alcohol. On 16 December 2018, American tech entrepreneur Colin Kroll, founder of social media video-sharing app Vine and quiz app HQ Trivia, died from an overdose of fentanyl, heroin, and cocaine. On 1 July 2019, American baseball player Tyler Skaggs died from pulmonary aspiration while under the influence of fentanyl, oxycodone, and alcohol. On 1 January 2020, American rapper, singer, and songwriter Lexii Alijai died from accidental toxicity resulting from the combination of alcohol and fentanyl. On 20 August 2020, American singer, songwriter, and musician Justin Townes Earle died from an accidental overdose caused by cocaine laced with fentanyl. On 24 August 2020, Riley Gale, frontman for the Texas metal band Power Trip, died as a result of the toxic effects of fentanyl in a manner that was ruled accidental. On 2 March 2021, American musician Mark Goffeney, also known as "Big Toe" (born without arms, he played guitar with his feet), died from an overdose of fentanyl. On 22 April 2021, Digital Underground frontman, rapper, and musician Shock G died from an accidental overdose of fentanyl, meth, and alcohol. On 6 September 2021, actor Michael K. Williams, who performed as Omar Little on the HBO drama series The Wire, died from an overdose of fentanyl, parafluorofentanyl, heroin, and cocaine. On 28 September 2022, rapper Coolio (Artis Leon Ivey, Jr.) died from an accidental overdose of fentanyl, heroin, and methamphetamine. On 31 July 2023, Angus Cloud, best known for his portrayal of Fezco on the HBO drama series Euphoria, died from an accidental overdose of methamphetamine, cocaine, fentanyl, and benzodiazepines. On 15 September 2023, an infant died at a daycare center in The Bronx, New York City, due to fentanyl contamination, which is also believed to have caused sickness in other children. On 22 March 2024, Luke D'Wit was sentenced to a minimum of 37 years' imprisonment for murdering Stephen and Carol Baxter in Essex, England, on 9 April 2023 by giving them drinks laced with fentanyl.
Governmental usage
In August 2018, Nebraska became the first American state to use fentanyl to execute a prisoner. Carey Dean Moore, at the time one of the longest-serving death row inmates in the United States, was executed at the Nebraska State Penitentiary.
Moore received a lethal injection, administered as an intravenous series of four drugs that included fentanyl citrate to inhibit breathing and render the subject unconscious. The other drugs included diazepam as a tranquilizer, cisatracurium besylate as a muscle relaxant, and potassium chloride to stop the heart. The use of fentanyl in an execution caused concern among death penalty experts because it was part of a previously untested drug cocktail. The execution was also protested by anti-death penalty advocates at the prison during the execution and later at the Nebraska State Capitol. Russian Spetsnaz security forces are suspected to have used a fentanyl analogue or derivative (suspected to be carfentanil and remifentanil) to rapidly incapacitate people in the Moscow theater hostage crisis in 2002. The siege was ended, but many hostages died from the gas after their health had been severely taxed during the days-long siege. The Russian Health Minister later stated that the gas was based on fentanyl, but the exact chemical agent has not been clearly identified.
Recalls
In February 2004, a leading fentanyl supplier, Janssen Pharmaceutica Products, recalled one lot, and later additional lots, of fentanyl (brand name: Duragesic) patches because of seal breaches that might have allowed the medication to leak from the patch. A series of class II recalls was initiated in March 2004, and in February 2008 the ALZA Corporation recalled its 25 μg/h Duragesic patches due to a concern that small cuts in the gel reservoir could result in accidental exposure of patients or health care providers to the fentanyl gel. In April 2023, Teva Pharmaceuticals USA recalled 13 lots of its Fentanyl Buccal Tablets CII due to missing safety information sheets on how to properly administer the product. The corporation issued a consumer recall report and stressed the importance of safety in the use and administration of opioid therapeutics.
Veterinary use
Fentanyl is commonly used for analgesia and as a component of balanced sedation and general anesthesia in small animal patients. In addition, when given as a continuous infusion or as a transdermal patch, it compares favorably with many other pure-opiate and synthetic pure-opioid agonists with regard to vomiting, depth of sedation, and cardiovascular effects. As with other pure-opioid agonists, fentanyl has been associated with dysphoria in dogs. Furthermore, fentanyl's potency and short duration of action make it popular as an intra-operative and post-operative analgesic in cats and dogs. This is usually done with off-label fentanyl patches manufactured for humans with chronic pain. In 2012, a highly concentrated (50 mg/mL) transdermal solution, brand name Recuvyra, became commercially available for dogs only. It is approved by the Food and Drug Administration to provide four days of analgesia after a single application before surgery. It is not approved for multiple doses or other species. The drug is also approved in Europe.
Biology and health sciences
Pain treatments
Health
141922
https://en.wikipedia.org/wiki/Steroid
Steroid
A steroid is an organic compound with four fused rings (designated A, B, C, and D) arranged in a specific molecular configuration. Steroids have two principal biological functions: as important components of cell membranes that alter membrane fluidity, and as signaling molecules. Examples include the lipid cholesterol, the sex hormones estradiol and testosterone, anabolic steroids, and the anti-inflammatory corticosteroid drug dexamethasone. Hundreds of steroids are found in fungi, plants, and animals. All steroids are manufactured in cells from the sterols lanosterol (in opisthokonts) or cycloartenol (in plants). Lanosterol and cycloartenol are derived from the cyclization of the triterpene squalene. Steroids are named after the steroid cholesterol, which was first described in gallstones; the name derives from Ancient Greek chole- 'bile' and stereos 'solid'. The steroid nucleus (core structure) is called gonane (cyclopentanoperhydrophenanthrene). It is composed of seventeen carbon atoms bonded in four fused rings: three six-membered cyclohexane rings (rings A, B, and C in the first illustration) and one five-membered cyclopentane ring (the D ring). Steroids vary by the functional groups attached to this four-ring core and by the oxidation state of the rings. Sterols are forms of steroids with a hydroxy group at position three and a skeleton derived from cholestane. Steroids can also be more radically modified, such as by changes to the ring structure, for example, cutting one of the rings. Cutting ring B produces secosteroids, one of which is vitamin D3.
Nomenclature
Rings and functional groups
Gonane, also known as sterane or cyclopentanoperhydrophenanthrene, the simplest steroid and the nucleus of all steroids and sterols, is composed of seventeen carbon atoms in carbon-carbon bonds forming four fused rings in a three-dimensional shape. The three cyclohexane rings (A, B, and C in the first illustration) form the skeleton of a perhydro derivative of phenanthrene. The D ring has a cyclopentane structure. When the two methyl groups and the eight-carbon side chain (at C-17, as shown for cholesterol) are present, the steroid is said to have a cholestane framework. The two common 5α and 5β stereoisomeric forms of steroids exist because of differences in the side of the largely planar ring system to which the hydrogen (H) atom at carbon-5 is attached, which results in a change in steroid A-ring conformation. Isomerisation at the C-21 side chain produces a parallel series of compounds, referred to as isosteroids. In addition to ring scissions (cleavages), expansions, and contractions (cleavage and reclosing to larger or smaller rings)—all variations in the carbon-carbon bond framework—steroids can also vary: in the bond orders within the rings, in the number of methyl groups attached to the ring (and, when present, on the prominent side chain at C-17), in the functional groups attached to the rings and side chain, and in the configuration of groups attached to the rings and chain. For instance, sterols such as cholesterol and lanosterol have a hydroxyl group attached at position C-3, while testosterone and progesterone have a carbonyl (oxo substituent) at C-3. Among these compounds, only lanosterol has two methyl groups at C-4. Cholesterol, which has a C-5 to C-6 double bond, differs from testosterone and progesterone, which have a C-4 to C-5 double bond.
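The four fused rings can be checked against a molecular formula using the standard degree-of-unsaturation (ring plus double-bond equivalent) calculation; the sketch below applies it to gonane (C17H28, the saturated nucleus) and to cholesterol (C27H46O). This is only a worked illustration using textbook formulas, not part of the nomenclature rules themselves.

```python
# Degree of unsaturation (rings + pi bonds) for C/H/O compounds:
#   DBE = C - H/2 + 1   (oxygen does not change the count)
def degrees_of_unsaturation(c: int, h: int, o: int = 0) -> float:
    return c - h / 2 + 1

# Gonane, the saturated steroid nucleus (C17H28): expect 4, i.e. the four fused rings.
print("gonane      C17H28 :", degrees_of_unsaturation(17, 28))    # -> 4.0
# Cholesterol (C27H46O): expect 5 = four rings + the C-5 to C-6 double bond.
print("cholesterol C27H46O:", degrees_of_unsaturation(27, 46, 1))  # -> 5.0
```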
Naming convention
Almost all biologically relevant steroids can be presented as derivatives of a parent cholesterol-like hydrocarbon structure that serves as a skeleton. These parent structures have specific names, such as pregnane, androstane, etc. The derivatives carry various functional groups, indicated by suffixes or prefixes together with numbers giving their position in the steroid nucleus. There are widely used trivial names of natural origin for steroids with significant biologic activity, such as progesterone, testosterone or cortisol. Some of these names are defined in the Nomenclature of Steroids. These trivial names can also be used as a base to derive new names, however, by adding prefixes only rather than suffixes; e.g., the steroid 17α-hydroxyprogesterone has a hydroxy group (-OH) at position 17 of the steroid nucleus compared to progesterone. The letters α and β denote absolute stereochemistry at chiral centers—a nomenclature specific to steroids, distinct from the R/S convention of organic chemistry (the Cahn–Ingold–Prelog priority rules) used to denote the absolute configuration of functional groups. The R/S convention assigns priorities to substituents on a chiral center based on atomic number: the highest priority is assigned to the atom with the highest atomic number and the lowest priority to the atom with the lowest atomic number. The molecule is then oriented so that the lowest priority group points away from the viewer, and the remaining three groups are arranged in order of decreasing priority around the chiral center. If this arrangement is clockwise, it is assigned an R configuration; if it is counterclockwise, it is assigned an S configuration. In contrast, steroid nomenclature uses α and β to denote stereochemistry at chiral centers relative to the ring system: α refers to a substituent oriented on the lower face of the largely planar ring system (the side opposite the angular methyl groups), while β refers to a substituent oriented on the upper face (the same side as the angular methyl groups). In steroids drawn in the standard perspective, α-bonds are depicted on figures as dashed wedges and β-bonds as solid wedges. The name "11-deoxycortisol" is an example of a derived name that uses cortisol as a parent structure without an oxygen atom (hence "deoxy") attached to position 11 (as a part of a hydroxy group). The numbering of positions of carbon atoms in the steroid nucleus is set in a template found in the Nomenclature of Steroids that is used regardless of whether an atom is present in the steroid in question. Unsaturated carbons (generally, ones that are part of a double bond) in the steroid nucleus are indicated by changing -ane to -ene. This change was traditionally done in the parent name, adding a prefix to denote the position, with or without Δ (Greek capital delta), which designates unsaturation; for example, 4-pregnene-11β,17α-diol-3,20-dione (also Δ4-pregnene-11β,17α-diol-3,20-dione) or 4-androstene-3,11,17-trione (also Δ4-androstene-3,11,17-trione). However, the Nomenclature of Steroids recommends that the locant of a double bond always be adjacent to the syllable designating the unsaturation, therefore placed as a suffix rather than a prefix, and without the use of the Δ character, i.e. pregn-4-ene-11β,17α-diol-3,20-dione or androst-4-ene-3,11,17-trione. The double bond is designated by the lower-numbered carbon atom, i.e.
"Δ4-" or "4-ene" means the double bond between positions 4 and 5. The saturation of carbons of a parent steroid can be done by adding "dihydro-" prefix, i.e., a saturation of carbons 4 and 5 of testosterone with two hydrogen atoms is 4,5α-dihydrotestosterone or 4,5β-dihydrotestosterone. Generally, when there is no ambiguity, one number of a hydrogen position from a steroid with a saturated bond may be omitted, leaving only the position of the second hydrogen atom, e.g., 5α-dihydrotestosterone or 5β-dihydrotestosterone. The Δ5-steroids are those with a double bond between carbons 5 and 6 and the Δ4 steroids are those with a double bond between carbons 4 and 5. The abbreviations like "P4" for progesterone and "A4" for androstenedione for refer to Δ4-steroids, while "P5" for pregnenolone and "A5" for androstenediol refer to Δ5-steroids. The suffix -ol denotes a hydroxy group, while the suffix -one denotes an oxo group. When two or three identical groups are attached to the base structure at different positions, the suffix is indicated as -diol or -triol for hydroxy, and -dione or -trione for oxo groups, respectively. For example, 5α-pregnane-3α,17α-diol-20-one has a hydrogen atom at the 5α position (hence the "5α-" prefix), two hydroxy groups (-OH) at the 3α and 17α positions (hence "3α,17α-diol" suffix) and an oxo group (=O) at the position 20 (hence the "20-one" suffix). However, erroneous use of suffixes can be found, e.g., "5α-pregnan-17α-diol-3,11,20-trione" [sic] — since it has just one hydroxy group (at 17α) rather than two, then the suffix should be -ol, rather than -diol, so that the correct name to be "5α-pregnan-17α-ol-3,11,20-trione". According to the rule set in the Nomenclature of Steroids, the terminal "e" in the parent structure name should be elided before the vowel (the presence or absence of a number does not affect such elision). This means, for instance, that if the suffix immediately appended to the parent structure name begins with a vowel, the trailing "e" is removed from that name. An example of such removal is "5α-pregnan-17α-ol-3,20-dione", where the last "e" of "pregnane" is dropped due to the vowel ("o") at the beginning of the suffix -ol. Some authors incorrectly use this rule, eliding the terminal "e" where it should be kept, or vice versa. The term "11-oxygenated" refers to the presence of an oxygen atom as an oxo (=O) or hydroxy (-OH) substituent at carbon 11. "Oxygenated" is consistently used within the chemistry of the steroids since the 1950s. Some studies use the term "11-oxyandrogens" as an abbreviation for 11-oxygenated androgens, to emphasize that they all have an oxygen atom attached to carbon at position 11. However, in chemical nomenclature, the prefix "oxy" is associated with ether functional groups, i.e., a compound with an oxygen atom connected to two alkyl or aryl groups (R-O-R), therefore, using "oxy" within the name of a steroid class may be misleading. One can find clear examples of "oxygenated" to refer to a broad class of organic molecules containing a variety of oxygen containing functional groups in other domains of organic chemistry, and it is appropriate to use this convention. 
Even though "keto" is a standard prefix in organic chemistry, the 1989 recommendations of the Joint Commission on Biochemical Nomenclature discourage the application of the prefix "keto" for steroid names, and favor the prefix "oxo" (e.g., 11-oxo steroids rather than 11-keto steroids), because "keto" includes the carbon that is part of the steroid nucleus and the same carbon atom should not be specified twice. Species distribution Steroids are present across all domains of life, including bacteria, archaea, and eukaryotes. In eukaryotes, steroids are particularly abundant in fungi, plants, and animals. Eukaryotic Eukaryotic cells, encompassing animals, plants, fungi, and protists, are characterized by their complex cellular structures, including a true nucleus and membrane-bound organelles. Sterols, a subgroup of steroids, play crucial roles in maintaining membrane fluidity, supporting cell signaling, and enhancing stress tolerance. These compounds are integral to eukaryotic membranes, where they contribute to membrane integrity and functionality. During eukaryogenesis—the evolutionary process that gave rise to modern eukaryotic cells—steroids likely facilitated the endosymbiotic acquisition of mitochondria. Prokaryotic Although sterol biosynthesis is rare in prokaryotes, certain bacteria, including Methylococcus capsulatus, specific methanotrophs, myxobacteria, and the planctomycete Gemmata obscuriglobus, are capable of producing sterols. In G. obscuriglobus, sterols are essential for cell viability, but their roles in other bacteria remain poorly understood. Prokaryotic sterol synthesis involves the tetracyclic steroid framework, as found in myxobacteria, as well as hopanoids, pentacyclic lipids that regulate bacterial membrane functions. These sterol biosynthetic pathways may have originated in bacteria or been transferred from eukaryotes. Sterol synthesis depends on two key enzymes: squalene monooxygenase and oxidosqualene cyclase. Phylogenetic analyses of oxidosqualene cyclase (Osc) suggest that some bacterial Osc genes may have been acquired via horizontal gene transfer from eukaryotes, as certain bacterial Osc proteins closely resemble their eukaryotic homologs. Fungal Fungal steroids include the ergosterols, which are involved in maintaining the integrity of the fungal cellular membrane. Various antifungal drugs, such as amphotericin B and azole antifungals, utilize this information to kill pathogenic fungi. Fungi can alter their ergosterol content (e.g. through loss of function mutations in the enzymes ERG3 or ERG6, inducing depletion of ergosterol, or mutations that decrease the ergosterol content) to develop resistance to drugs that target ergosterol. Ergosterol is analogous to the cholesterol found in the cellular membranes of animals (including humans), or the phytosterols found in the cellular membranes of plants. All mushrooms contain large quantities of ergosterol, in the range of tens to hundreds of milligrams per 100 grams of dry weight. Oxygen is necessary for the synthesis of ergosterol in fungi. Ergosterol is responsible for the vitamin D content found in mushrooms; ergosterol is chemically converted into provitamin D2 by exposure to ultraviolet light. Provitamin D2 spontaneously forms vitamin D2. However, not all fungi utilize ergosterol in their cellular membranes; for example, the pathogenic fungal species Pneumocystis jirovecii does not, which has important clinical implications (given the mechanism of action of many antifungal drugs). 
Using the fungus Saccharomyces cerevisiae as an example, other major steroids include ergosta‐5,7,22,24(28)‐tetraen‐3β‐ol, zymosterol, and lanosterol. S. cerevisiae utilizes 5,6‐dihydroergosterol in place of ergosterol in its cell membrane. Plant Plant steroids include steroidal alkaloids found in Solanaceae and Melanthiaceae (especially the genus Veratrum), cardiac glycosides, the phytosterols and the brassinosteroids (which include several plant hormones). Animal Animal steroids include compounds of vertebrate and insect origin, the latter including ecdysteroids such as ecdysterone (controlling molting in some species). Vertebrate examples include the steroid hormones and cholesterol; the latter is a structural component of cell membranes that helps determine the fluidity of cell membranes and is a principal constituent of plaque (implicated in atherosclerosis). Steroid hormones include: Sex hormones, which influence sex differences and support reproduction. These include androgens, estrogens, and progestogens. Corticosteroids, including most synthetic steroid drugs, whose natural product classes are the glucocorticoids (which regulate many aspects of metabolism and immune function) and the mineralocorticoids (which help maintain blood volume and control renal excretion of electrolytes) Anabolic steroids, natural and synthetic, which interact with androgen receptors to increase muscle and bone synthesis. In popular use, the term "steroids" often refers to anabolic steroids. Types By function The major classes of steroid hormones, with prominent members and examples of related functions, are: Corticosteroids: Glucocorticoids: Cortisol, a glucocorticoid whose functions include immunosuppression Mineralocorticoids: Aldosterone, a mineralocorticoid that helps regulate blood pressure through water and electrolyte balance Sex steroids: Progestogens: Progesterone, which regulates cyclical changes in the endometrium of the uterus and maintains a pregnancy Androgens: Testosterone, which contributes to the development and maintenance of male secondary sex characteristics Estrogens: Estradiol, which contributes to the development and maintenance of female secondary sex characteristics Additional classes of steroids include: Neurosteroids such as allopregnanolone Bile acids such as taurocholic acid Aminosteroid neuromuscular blocking agents (mainly synthetic) such as pancuronium bromide Steroidal antiandrogens (mainly synthetic) such as cyproterone acetate Steroidogenesis inhibitors (mainly exogenous) such as alfatradiol Membrane sterols such as cholesterol, ergosterol, and various phytosterols Toxins such as steroidal saponins and cardenolides/cardiac glycosides As well as the following class of secosteroids (open-ring steroids): Vitamin D forms such as ergocalciferol, cholecalciferol, and calcitriol By structure Intact ring system Steroids can be classified based on their chemical composition. One example of how MeSH performs this classification is available at the Wikipedia MeSH catalog. Examples of this classification include: In biology, it is common to name the above steroid classes by the number of carbon atoms present when referring to hormones: C18-steroids for the estranes (mostly estrogens), C19-steroids for the androstanes (mostly androgens), and C21-steroids for the pregnanes (mostly corticosteroids). The classification "17-ketosteroid" is also important in medicine. The gonane (steroid nucleus) is the parent 17-carbon tetracyclic hydrocarbon molecule with no alkyl sidechains. 
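As a small illustration of the carbon-count convention just mentioned, the following Python lookup (the function name is invented for this example, and the mapping only restates the convention above) returns the parent class for a given carbon count.

def parent_class_by_carbons(n_carbons):
    # C18 estranes (mostly estrogens), C19 androstanes (mostly androgens),
    # C21 pregnanes (mostly corticosteroids), per the convention described above.
    classes = {
        18: "estrane (mostly estrogens)",
        19: "androstane (mostly androgens)",
        21: "pregnane (mostly corticosteroids)",
    }
    return classes.get(n_carbons, "not covered by the C18/C19/C21 convention")

print(parent_class_by_carbons(21))  # pregnane (mostly corticosteroids)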
Cleaved, contracted, and expanded rings Secosteroids (Latin seco, "to cut") are a subclass of steroidal compounds resulting, biosynthetically or conceptually, from scission (cleavage) of parent steroid rings (generally one of the four). Major secosteroid subclasses are defined by the steroid carbon atoms where this scission has taken place. For instance, the prototypical secosteroid cholecalciferol, vitamin D3, is in the 9,10-secosteroid subclass and derives from the cleavage of carbon atoms C-9 and C-10 of the steroid B-ring; 5,6-secosteroids and 13,14-secosteroids are similar. Norsteroids (nor-, L. norma; "normal" in chemistry, indicating carbon removal) and homosteroids (homo-, Greek homos; "same", indicating carbon addition) are structural subclasses of steroids formed from biosynthetic steps. The former involves enzymic ring expansion-contraction reactions, and the latter is accomplished (biomimetically) or (more frequently) through ring closures of acyclic precursors with more (or fewer) ring atoms than the parent steroid framework. Combinations of these ring alterations are known in nature. For instance, ewes that graze on corn lily ingest cyclopamine and veratramine, two of a sub-family of steroids where the C- and D-rings are contracted and expanded respectively via a biosynthetic migration of the original C-13 atom. Ingestion of these C-nor-D-homosteroids results in birth defects in lambs: cyclopia from cyclopamine and leg deformity from veratramine. A further C-nor-D-homosteroid (nakiterpiosin) is excreted by Okinawan cyanobacteriosponges, e.g., Terpios hoshinota, leading to coral mortality from black coral disease. Nakiterpiosin-type steroids are active against the signaling pathway involving the smoothened and hedgehog proteins, a pathway which is hyperactive in a number of cancers. Biological significance Steroids and their metabolites often function as signalling molecules (the most notable examples are steroid hormones), and steroids and phospholipids are components of cell membranes. Steroids such as cholesterol decrease membrane fluidity. Similar to lipids, steroids are highly concentrated energy stores. However, they are not typically sources of energy; in mammals, they are normally metabolized and excreted. Steroids play critical roles in a number of disorders, including malignancies like prostate cancer, where steroid production inside and outside the tumour promotes cancer cell aggressiveness. Biosynthesis and metabolism The hundreds of steroids found in animals, fungi, and plants are made from lanosterol (in animals and fungi; see examples above) or cycloartenol (in other eukaryotes). Both lanosterol and cycloartenol derive from cyclization of the triterpenoid squalene. Lanosterol and cycloartenol are sometimes called protosterols because they serve as the starting compounds for all other steroids. Steroid biosynthesis is an anabolic pathway which produces steroids from simple precursors. A unique biosynthetic pathway is followed in animals (compared to many other organisms), making the pathway a common target for antibiotics and other anti-infection drugs. Steroid metabolism in humans is also the target of cholesterol-lowering drugs, such as statins. In humans and other animals the biosynthesis of steroids follows the mevalonate pathway, which uses acetyl-CoA as building blocks for dimethylallyl diphosphate (DMAPP) and isopentenyl diphosphate (IPP). 
In subsequent steps, DMAPP and IPP are joined to form farnesyl diphosphate (FPP); two molecules of FPP then condense to form the linear triterpenoid squalene. Squalene biosynthesis is catalyzed by squalene synthase, which belongs to the squalene/phytoene synthase family. Subsequent epoxidation and cyclization of squalene generate lanosterol, which is the starting point for additional modifications into other steroids (steroidogenesis). In other eukaryotes, the cyclization product of epoxidized squalene (oxidosqualene) is cycloartenol. Mevalonate pathway The mevalonate pathway (also called the HMG-CoA reductase pathway) begins with acetyl-CoA and ends with dimethylallyl diphosphate (DMAPP) and isopentenyl diphosphate (IPP). DMAPP and IPP donate isoprene units, which are assembled and modified to form terpenes and isoprenoids (a large class of lipids, which include the carotenoids and form the largest class of plant natural products). Here, the activated isoprene units are joined to make squalene and folded into a set of rings to make lanosterol. Lanosterol can then be converted into other steroids, such as cholesterol and ergosterol. Two classes of drugs target the mevalonate pathway: statins (like rosuvastatin), which are used to reduce elevated cholesterol levels, and bisphosphonates (like zoledronate), which are used to treat a number of bone-degenerative diseases. Steroidogenesis Steroidogenesis is the biological process by which steroids are generated from cholesterol and changed into other steroids. The pathways of steroidogenesis differ among species. The major classes of steroid hormones, as noted above (with their prominent members and functions), are the progestogens, corticosteroids (corticoids), androgens, and estrogens. Human steroidogenesis of these classes occurs in a number of locations: Progestogens are the precursors of all other human steroids, and all human tissues which produce steroids must first convert cholesterol to pregnenolone. This conversion is the rate-limiting step of steroid synthesis, which occurs inside the mitochondrion of the respective tissue. It is catalyzed by the mitochondrial P450scc system. Cortisol, corticosterone, and aldosterone are produced in the adrenal cortex. Estradiol, estrone and progesterone are made primarily in the ovary, estriol in the placenta during pregnancy, and testosterone primarily in the testes (some testosterone may also be produced in the adrenal cortex). Estradiol is converted from testosterone directly (in males), or via the primary pathway DHEA – androstenedione – estrone and secondarily via testosterone (in females). Stromal cells have been shown to produce steroids in response to signaling produced by androgen-starved prostate cancer cells. Some neurons and glia in the central nervous system (CNS) express the enzymes required for the local synthesis of pregnenolone, progesterone, DHEA and DHEAS, de novo or from peripheral sources. Alternative pathways In plants and bacteria, the non-mevalonate pathway (MEP pathway) uses pyruvate and glyceraldehyde 3-phosphate as substrates to produce IPP and DMAPP. During disease, pathways that are otherwise not significant in healthy humans can become utilized. 
For example, in one form of congenital adrenal hyperplasia, a deficiency in the 21-hydroxylase enzymatic pathway leads to an excess of 17α-hydroxyprogesterone (17-OHP) – this pathological excess of 17-OHP in turn may be converted to dihydrotestosterone (DHT, a potent androgen) through, among others, 17,20-lyase (a member of the cytochrome P450 family of enzymes), 5α-reductase, and 3α-hydroxysteroid dehydrogenase. Catabolism and excretion Steroids are primarily oxidized by cytochrome P450 oxidase enzymes, such as CYP3A4. These reactions introduce oxygen into the steroid ring, allowing the cholesterol to be broken up by other enzymes into bile acids. These acids can then be eliminated by secretion from the liver in bile. The expression of the oxidase gene can be upregulated by the steroid sensor PXR when there is a high blood concentration of steroids. Steroid hormones, lacking the side chain of cholesterol and bile acids, are typically hydroxylated at various ring positions or oxidized at the 17 position, conjugated with sulfate or glucuronic acid and excreted in the urine. Isolation, structure determination, and methods of analysis Steroid isolation, depending on context, is the isolation of chemical matter required for chemical structure elucidation, derivatization or degradation chemistry, biological testing, and other research needs (generally milligrams to grams, but often more), or the isolation of "analytical quantities" of the substance of interest, where the focus is on identifying and quantifying the substance (for example, in biological tissue or fluid). The amount isolated depends on the analytical method, but is generally less than one microgram. The methods of isolation to achieve the two scales of product are distinct, but include extraction, precipitation, adsorption, chromatography, and crystallization. In both cases, the isolated substance is purified to chemical homogeneity; combined separation and analytical methods, such as LC-MS, are chosen to be "orthogonal"—achieving their separations based on distinct modes of interaction between substance and isolating matrix—to detect a single species in the pure sample. Structure determination refers to the methods to determine the chemical structure of an isolated pure steroid, using an evolving array of chemical and physical methods which have included NMR and small-molecule crystallography. Methods of analysis overlap both of the above areas, emphasizing analytical methods for determining whether a steroid is present in a mixture and determining its quantity. Chemical synthesis Microbial catabolism of phytosterol side chains yields C-19 steroids, C-22 steroids, and 17-ketosteroids (i.e. precursors to adrenocortical hormones and contraceptives). The addition and modification of functional groups is key when producing the wide variety of medications available within this chemical classification. These modifications are performed using conventional organic synthesis and/or biotransformation techniques. Precursors Semisynthesis The semisynthesis of steroids often begins from precursors such as cholesterol, phytosterols, or sapogenins. Syntex, a company involved in the Mexican barbasco trade, used Dioscorea mexicana to produce the sapogenin diosgenin in the early days of the synthetic steroid pharmaceutical industry. Total synthesis Some steroidal hormones are economically obtained only by total synthesis from petrochemicals (e.g. 13-alkyl steroids). 
For example, the pharmaceutical Norgestrel begins from methoxy-1-tetralone, a petrochemical derived from phenol. Research awards A number of Nobel Prizes have been awarded for steroid research, including: 1927 (Chemistry) Heinrich Otto Wieland — Constitution of bile acids and sterols and their connection to vitamins 1928 (Chemistry) Adolf Otto Reinhold Windaus — Constitution of sterols and their connection to vitamins 1939 (Chemistry) Adolf Butenandt and Leopold Ružička — Isolation and structural studies of steroid sex hormones, and related studies on higher terpenes 1950 (Physiology or Medicine) Edward Calvin Kendall, Tadeus Reichstein, and Philip Hench — Structure and biological effects of adrenal hormones 1965 (Chemistry) Robert Burns Woodward — In part, for the synthesis of cholesterol, cortisone, and lanosterol 1969 (Chemistry) Derek Barton and Odd Hassel — Development of the concept of conformation in chemistry, emphasizing the steroid nucleus 1975 (Chemistry) Vladimir Prelog — In part, for developing methods to determine the stereochemical course of cholesterol biosynthesis from mevalonic acid via squalene
Biology and health sciences
Biochemistry and molecular biology
null
142100
https://en.wikipedia.org/wiki/Hydride
Hydride
In chemistry, a hydride is formally the anion of hydrogen (H−), a hydrogen ion with two electrons. In modern usage, this is typically only used for ionic bonds, but the term has sometimes (and more frequently in the past) been applied to all compounds containing covalently bound H atoms. In this broad and potentially archaic sense, water (H2O) is a hydride of oxygen, ammonia is a hydride of nitrogen, etc. In covalent compounds, it implies hydrogen is attached to a less electronegative element. In such cases, the H centre has nucleophilic character, which contrasts with the protic character of acids. The hydride anion is very rarely observed. Almost all of the elements form binary compounds with hydrogen, the exceptions being He, Ne, Ar, Kr, Pm, Os, Ir, Rn, Fr, and Ra. Exotic molecules such as positronium hydride have also been made. Bonds Bonds between hydrogen and the other elements range from being highly ionic to somewhat covalent. Some hydrides, e.g. boron hydrides, do not conform to classical electron counting rules and the bonding is described in terms of multi-centered bonds, whereas the interstitial hydrides often involve metallic bonding. Hydrides can be discrete molecules, oligomers or polymers, ionic solids, chemisorbed monolayers, bulk metals (interstitial), or other materials. While hydrides traditionally react as Lewis bases or reducing agents, some metal hydrides behave as hydrogen-atom donors and act as acids. Applications Hydrides such as sodium borohydride, lithium aluminium hydride, diisobutylaluminium hydride (DIBAL) and super hydride are commonly used as reducing agents in chemical synthesis. The hydride adds to an electrophilic center, typically unsaturated carbon. Hydrides such as sodium hydride and potassium hydride are used as strong bases in organic synthesis. The hydride reacts with the weak Brønsted acid, releasing H2. Hydrides such as calcium hydride are used as desiccants, i.e. drying agents, to remove trace water from organic solvents. The hydride reacts with the water, forming hydrogen and a hydroxide salt. The dry solvent can then be distilled or vacuum transferred from the "solvent pot". Hydrides are important in storage battery technologies such as the nickel–metal hydride battery. Various metal hydrides have been examined for use as a means of hydrogen storage for fuel cell-powered electric cars and other proposed aspects of a hydrogen economy. Hydride complexes are catalysts and catalytic intermediates in a variety of homogeneous and heterogeneous catalytic cycles. Important examples include hydrogenation, hydroformylation, hydrosilylation, and hydrodesulfurization catalysts. Even certain enzymes, the hydrogenases, operate via hydride intermediates. The energy carrier nicotinamide adenine dinucleotide reacts as a hydride donor or hydride equivalent. Hydride ion Free hydride anions exist only under extreme conditions and are not invoked in homogeneous solution chemistry. Instead, many compounds have hydrogen centres with hydridic character. Aside from electride, the hydride ion is the simplest possible anion, consisting of two electrons and a proton. Hydrogen has a relatively low electron affinity, 72.77 kJ/mol, and reacts exothermically with protons as a powerful Lewis base: H− + H+ → H2. The low electron affinity of hydrogen and the strength of the H–H bond mean that the hydride ion would also be a strong reducing agent: H2 + 2e− ⇌ 2H−. Types of hydrides According to the general definition, every element of the periodic table (except some noble gases) forms one or more hydrides. 
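For a sense of scale, the electron affinity quoted in the Hydride ion section above (72.77 kJ/mol) can be expressed per atom. The Python lines below are only a unit conversion using standard physical constants; nothing beyond the 72.77 kJ/mol figure is taken from this article.

AVOGADRO = 6.02214076e23         # mol^-1
JOULES_PER_EV = 1.602176634e-19  # J per electronvolt

ea_kj_per_mol = 72.77
ea_ev_per_atom = ea_kj_per_mol * 1e3 / AVOGADRO / JOULES_PER_EV
print(f"{ea_ev_per_atom:.3f} eV per atom")  # roughly 0.754 eV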
Hydrides have been classified into three main types according to the nature of their bonding: Ionic hydrides, which have significant ionic bonding character. Covalent hydrides, which include the hydrocarbons and many other compounds which covalently bond to hydrogen atoms. Interstitial hydrides, which may be described as having metallic bonding. While these divisions have not been used universally, they are still useful to understand differences in hydrides. Ionic hydrides These are stoichiometric compounds of hydrogen. Ionic or saline hydrides are composed of hydride bound to an electropositive metal, generally an alkali metal or alkaline earth metal. The divalent lanthanides such as europium and ytterbium form compounds similar to those of heavier alkaline earth metals. In these materials the hydride is viewed as a pseudohalide. Saline hydrides are insoluble in conventional solvents, reflecting their non-molecular structures. Ionic hydrides are used as bases and, occasionally, as reducing reagents in organic synthesis. C6H5C(O)CH3 + KH → C6H5C(O)CH2K + H2 Typical solvents for such reactions are ethers. Water and other protic solvents cannot serve as a medium for ionic hydrides because the hydride ion is a stronger base than hydroxide and most hydroxyl anions. Hydrogen gas is liberated in a typical acid-base reaction. NaH + H2O → H2 (g) + NaOH (ΔH = −83.6 kJ/mol, ΔG = −109.0 kJ/mol) Often alkali metal hydrides react with metal halides. Lithium aluminium hydride (often abbreviated as LAH) arises from reactions of lithium hydride with aluminium chloride. 4 LiH + AlCl3 → LiAlH4 + 3 LiCl Covalent hydrides According to some definitions, covalent hydrides cover all other compounds containing hydrogen. Some definitions limit hydrides to hydrogen centres that formally react as hydrides, i.e. are nucleophilic, and hydrogen atoms bound to metal centers. These hydrides are formed by all the true non-metals (except zero group elements) and the elements like Al, Ga, Sn, Pb, Bi, Po, etc., which are normally metallic in nature, i.e., this class includes the hydrides of p-block elements. In these substances the hydride bond is formally a covalent bond much like the bond made by a proton in a weak acid. This category includes hydrides that exist as discrete molecules, polymers or oligomers, and hydrogen that has been chemisorbed to a surface. A particularly important segment of covalent hydrides are complex metal hydrides, powerful soluble hydrides commonly used in synthetic procedures. Molecular hydrides often involve additional ligands; for example, diisobutylaluminium hydride (DIBAL) consists of two aluminium centers bridged by hydride ligands. Hydrides that are soluble in common solvents are widely used in organic synthesis. Particularly common are sodium borohydride (NaBH4) and lithium aluminium hydride, as well as hindered reagents such as DIBAL. Interstitial hydrides or metallic hydrides Interstitial hydrides most commonly exist within metals or alloys. They are traditionally termed "compounds" even though they do not strictly conform to the definition of a compound, more closely resembling common alloys such as steel. In such hydrides, hydrogen can exist as either atomic or diatomic entities. Mechanical or thermal processing, such as bending, striking, or annealing, may cause the hydrogen to precipitate out of solution by degassing. Their bonding is generally considered metallic. Many bulk transition metals form interstitial binary hydrides when exposed to hydrogen. 
These systems are usually non-stoichiometric, with variable amounts of hydrogen atoms in the lattice. In materials engineering, the phenomenon of hydrogen embrittlement results from the formation of interstitial hydrides. Hydrides of this type form according to either one of two main mechanisms. The first mechanism involves the adsorption of dihydrogen, succeeded by the cleaving of the H-H bond, the delocalisation of the hydrogen's electrons, and finally the diffusion of the protons into the metal lattice. The other main mechanism involves the electrolytic reduction of ionised hydrogen on the surface of the metal lattice, also followed by the diffusion of the protons into the lattice. The second mechanism is responsible for the observed temporary volume expansion of certain electrodes used in electrolytic experiments. Palladium absorbs up to 900 times its own volume of hydrogen at room temperature, forming palladium hydride. This material has been discussed as a means to carry hydrogen for vehicular fuel cells. Interstitial hydrides show some promise as a means of safe hydrogen storage. Neutron diffraction studies have shown that hydrogen atoms randomly occupy the octahedral interstices in the metal lattice (in an fcc lattice there is one octahedral hole per metal atom). The limit of absorption at normal pressures is PdH0.7, indicating that approximately 70% of the octahedral holes are occupied (a short mass-fraction estimate based on this figure appears at the end of this article). Many interstitial hydrides have been developed that readily absorb and discharge hydrogen at room temperature and atmospheric pressure. They are usually based on intermetallic compounds and solid-solution alloys. However, their application is still limited, as they are capable of storing only about 2 weight percent of hydrogen, insufficient for automotive applications. Transition metal hydride complexes Transition metal hydrides include compounds that can be classified as covalent hydrides. Some are even classified as interstitial hydrides, and others as bridging hydrides. Classical transition metal hydrides feature a single bond between the hydrogen centre and the transition metal. Some transition metal hydrides are acidic. Anions such as nonahydridorhenate (as in potassium nonahydridorhenate) are examples from the growing collection of known molecular homoleptic metal hydrides. As pseudohalides, hydride ligands are capable of bonding with positively polarized hydrogen centres. This interaction, called dihydrogen bonding, is similar to hydrogen bonding, which exists between positively polarized protons and electronegative atoms with open lone pairs. Protides Hydrides containing protium are known as protides. Deuterides Hydrides containing deuterium are known as deuterides. Some deuterides, such as LiD, are important fusion fuels in thermonuclear weapons and useful moderators in nuclear reactors. Tritides Hydrides containing tritium are known as tritides. Mixed anion compounds Mixed anion compounds exist that contain hydride with other anions. These include boride hydrides, carbohydrides, hydridonitrides, oxyhydrides and others. Appendix on nomenclature Protide, deuteride and tritide are used to describe ions or compounds that contain enriched hydrogen-1, deuterium or tritium, respectively. In the classic meaning, hydride refers to any compound hydrogen forms with other elements, ranging over groups 1–16 (the binary compounds of hydrogen). 
The following is a list of the nomenclature for the hydride derivatives of main group compounds according to this definition: alkali and alkaline earth metals: metal hydride boron: borane, BH3 aluminium: alumane, AlH3 gallium: gallane, GaH3 indium: indigane, InH3 thallium: thallane, TlH3 carbon: alkanes, alkenes, alkynes, and all hydrocarbons silicon: silane germanium: germane tin: stannane lead: plumbane nitrogen: ammonia ("azane" when substituted), hydrazine phosphorus: phosphine (note "phosphane" is the IUPAC recommended name) arsenic: arsine (note "arsane" is the IUPAC recommended name) antimony: stibine (note "stibane" is the IUPAC recommended name) bismuth: bismuthine (note "bismuthane" is the IUPAC recommended name) helium: helium hydride (only exists as an ion) According to the convention above, the following are "hydrogen compounds" and not "hydrides": oxygen: water ("oxidane" when substituted; synonym: hydrogen oxide), hydrogen peroxide sulfur: hydrogen sulfide ("sulfane" when substituted) selenium: hydrogen selenide ("selane" when substituted) tellurium: hydrogen telluride ("tellane" when substituted) polonium: hydrogen polonide ("polane" when substituted) halogens: hydrogen halides Examples: nickel hydride: used in NiMH batteries palladium hydride: electrodes in cold fusion experiments lithium aluminium hydride: a powerful reducing agent used in organic chemistry sodium borohydride: selective specialty reducing agent, hydrogen storage in fuel cells sodium hydride: a powerful base used in organic chemistry diborane: reducing agent, rocket fuel, semiconductor dopant, catalyst, used in organic synthesis; also borane, pentaborane and decaborane arsine: used for doping semiconductors stibine: used in semiconductor industry phosphine: used for fumigation silane: many industrial uses, e.g. manufacture of composite materials and water repellents ammonia: coolant, fuel, fertilizer, many other industrial uses hydrogen sulfide: component of natural gas, important source of sulfur Chemically, even water and hydrocarbons could be considered hydrides. All metalloid hydrides are highly flammable. All solid non-metallic hydrides except ice are highly flammable. But when hydrogen combines with halogens it produces acids rather than hydrides, and they are not flammable. Precedence convention According to IUPAC convention, by precedence (stylized electronegativity), hydrogen falls between group 15 and group 16 elements. Therefore, we have NH3, "nitrogen hydride" (ammonia), versus H2O, "hydrogen oxide" (water). This convention is sometimes broken for polonium, which on the grounds of polonium's metallicity is often referred to as "polonium hydride" instead of the expected "hydrogen polonide".
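As a numerical footnote to the palladium figures quoted earlier in this article, the hydrogen mass fraction at the PdH0.7 absorption limit can be estimated directly. The Python sketch below is only a rough estimate; the molar masses are standard values, not data from this article.

M_PD = 106.42  # g/mol, palladium
M_H = 1.008    # g/mol, hydrogen
x = 0.7        # H atoms per Pd atom at the normal-pressure limit quoted above

wt_percent_h = 100 * x * M_H / (M_PD + x * M_H)
print(f"{wt_percent_h:.2f} wt% H")  # about 0.66 wt%, well below the ~2 wt% cited for intermetallic storage alloys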
Physical sciences
Hydride salts
Chemistry
142425
https://en.wikipedia.org/wiki/Herpetology
Herpetology
Herpetology (from Greek ἑρπετόν herpetón, meaning "reptile" or "creeping animal") is a branch of zoology concerned with the study of amphibians (including frogs, salamanders, and caecilians (Gymnophiona)) and reptiles (including snakes, lizards, turtles, crocodilians, and tuataras). Birds, which are cladistically included within Reptilia, are traditionally excluded here; the separate scientific study of birds is the subject of ornithology. The precise definition of herpetology is the study of ectothermic (cold-blooded) tetrapods. This definition of "herps" (otherwise called "herptiles" or "herpetofauna") excludes fish; however, it is not uncommon for herpetological and ichthyological scientific societies to collaborate. For instance, groups such as the American Society of Ichthyologists and Herpetologists have co-published journals and hosted conferences to foster the exchange of ideas between the fields. Herpetological societies are formed to promote interest in reptiles and amphibians, both captive and wild. Herpetological studies can offer benefits relevant to other fields by providing research on the role of amphibians and reptiles in global ecology. For example, by monitoring amphibians that are very sensitive to environmental changes, herpetologists record visible warnings that significant climate changes are taking place. Although they can be deadly, some toxins and venoms produced by reptiles and amphibians are useful in human medicine. Currently, some snake venom has been used to create anti-coagulants that work to treat strokes and heart attacks. Naming and etymology The word herpetology is from Greek: ἑρπετόν, herpetón, "creeping animal" and , -logia, "knowledge". "Herp" is a vernacular term for non-avian reptiles and amphibians. It is derived from the archaic term "herpetile", with roots back to Linnaeus's classification of animals, in which he grouped reptiles and amphibians in the same class. There are over 6700 species of amphibians and over 9000 species of reptiles. Despite its modern taxonomic irrelevance, the term has persisted, particularly in the names of herpetology, the scientific study of non-avian reptiles and amphibians, and herpetoculture, the captive care and breeding of reptiles and amphibians. Subfields The field of herpetology can be divided into areas dealing with particular taxonomic groups such as frogs and other amphibians (batrachology), snakes (ophiology or ophidiology), lizards (saurology) and turtles (cheloniology, chelonology, or testudinology). More generally, herpetologists work on functional problems in the ecology, evolution, physiology, behavior, taxonomy, or molecular biology of amphibians and reptiles. Amphibians or reptiles can be used as model organisms for specific questions in these fields, such as the role of frogs in the ecology of a wetland. All of these areas are related through their evolutionary history, an example being the evolution of viviparity (including behavior and reproduction). Careers Career options in the field of herpetology include lab research, field studies and surveys, assistance in veterinary and medical procedures, zoological staff, museum staff, and college teaching. In modern academic science, it is rare for an individual to solely consider themselves to be a herpetologist. Most individuals focus on a particular field such as ecology, evolution, taxonomy, physiology, or molecular biology, and within that field ask questions pertaining to or best answered by examining reptiles and amphibians. 
For example, an evolutionary biologist who is also a herpetologist may choose to work on an issue such as the evolution of warning coloration in coral snakes. Modern herpetological writers include Mark O'Shea and Philip Purser. Modern herpetological showmen include Jeff Corwin, Steve Irwin (popularly known as the "Crocodile Hunter"), and Austin Stevens, popularly known as "Austin Snakeman" in the TV series Austin Stevens: Snakemaster. Herpetology is an established hobby around the world due to the varied biodiversity in many environments. Many amateur herpetologists refer to themselves as "herpers". Study Most colleges or universities do not offer a major in herpetology at the undergraduate or the graduate level. Instead, persons interested in herpetology select a major in the biological sciences. The knowledge learned about all aspects of the biology of animals is then applied to an individual study of herpetology. Journals Herpetology research is published in academic journals including Ichthyology & Herpetology, founded in 1913 (under the name Copeia, in honour of Edward Drinker Cope); Herpetologica, founded in 1936; Reptiles and Amphibians, founded in 1990; and Contemporary Herpetology, founded in 1997, which stopped publishing in 2009.
Biology and health sciences
Basics_2
Biology
142431
https://en.wikipedia.org/wiki/Homology%20%28biology%29
Homology (biology)
In biology, homology is similarity in anatomical structures or genes between organisms of different taxa due to shared ancestry, regardless of current functional differences. Evolutionary biology explains homologous structures as retained heredity from a common ancestor after having been subjected to adaptive modifications for different purposes as the result of natural selection. The term was first applied to biology in a non-evolutionary context by the anatomist Richard Owen in 1843. Homology was later explained by Charles Darwin's theory of evolution in 1859, but had been observed before this from Aristotle's biology onwards, and it was explicitly analysed by Pierre Belon in 1555. A common example of homologous structures is the forelimbs of vertebrates, where the wings of bats and birds, the arms of primates, the front flippers of whales, and the forelegs of four-legged vertebrates like horses and crocodilians are all derived from the same ancestral tetrapod structure. In developmental biology, organs that developed in the embryo in the same manner and from similar origins, such as from matching primordia in successive segments of the same animal, are serially homologous. Examples include the legs of a centipede, the maxillary and labial palps of an insect, and the spinous processes of successive vertebrae in a vertebrate's backbone. Male and female reproductive organs are homologous if they develop from the same embryonic tissue, as do the ovaries and testicles of mammals, including humans. Sequence homology between protein or DNA sequences is similarly defined in terms of shared ancestry. Two segments of DNA can have shared ancestry because of either a speciation event (orthologs) or a duplication event (paralogs). Homology among proteins or DNA is inferred from their sequence similarity. Significant similarity is strong evidence that two sequences are related by divergent evolution from a common ancestor. Alignments of multiple sequences are used to discover the homologous regions. Homology remains controversial in animal behaviour, but there is suggestive evidence that, for example, dominance hierarchies are homologous across the primates. History Homology was noticed by Aristotle (c. 350 BC), and was explicitly analysed by Pierre Belon in his 1555 Book of Birds, where he systematically compared the skeletons of birds and humans. The pattern of similarity was interpreted as part of the static great chain of being through the mediaeval and early modern periods: it was not then seen as implying evolutionary change. In the German Naturphilosophie tradition, homology was of special interest as demonstrating unity in nature. In 1790, Goethe stated his foliar theory in his essay "Metamorphosis of Plants", showing that flower parts are derived from leaves. The serial homology of limbs was described late in the 18th century. The French zoologist Etienne Geoffroy Saint-Hilaire showed in 1818 in his theorie d'analogue ("theory of homologues") that structures were shared between fishes, reptiles, birds, and mammals. When Geoffroy went further and sought homologies between Georges Cuvier's embranchements, such as vertebrates and molluscs, his claims triggered the 1830 Cuvier-Geoffroy debate. Geoffroy stated the principle of connections, namely that what is important is the relative position of different structures and their connections to each other. 
Embryologist Karl Ernst von Baer stated what are now called von Baer's laws in 1828, noting that related animals begin their development as similar embryos and then diverge: thus, animals in the same family are more closely related and diverge later than animals which are only in the same order and have fewer homologies. Von Baer's theory recognises that each taxon (such as a family) has distinctive shared features, and that embryonic development parallels the taxonomic hierarchy: not the same as recapitulation theory. The term "homology" was first used in biology by the anatomist Richard Owen in 1843 when studying the similarities of vertebrate fins and limbs, defining it as the "same organ in different animals under every variety of form and function", and contrasting it with the matching term "analogy" which he used to describe different structures with the same function. Owen codified 3 main criteria for determining if features were homologous: position, development, and composition. In 1859, Charles Darwin explained homologous structures as meaning that the organisms concerned shared a body plan from a common ancestor, and that taxa were branches of a single tree of life. Definition The word homology, coined in about 1656, is derived from the Greek ὁμόλογος from ὁμός 'same' and λόγος 'relation'. Similar biological structures or sequences in different taxa are homologous if they are derived from a common ancestor. Homology thus implies divergent evolution. For example, many insects (such as dragonflies) possess two pairs of flying wings. In beetles, the first pair of wings has evolved into a pair of hard wing covers, while in Dipteran flies the second pair of wings has evolved into small halteres used for balance. Similarly, the forelimbs of ancestral vertebrates have evolved into the front flippers of whales, the wings of birds, the running forelegs of dogs, deer, and horses, the short forelegs of frogs and lizards, and the grasping hands of primates including humans. The same major forearm bones (humerus, radius, and ulna) are found in fossils of lobe-finned fish such as Eusthenopteron. Homology vs. analogy The opposite of homologous organs are analogous organs which do similar jobs in two taxa that were not present in their most recent common ancestor but rather evolved separately. For example, the wings of insects and birds evolved independently in widely separated groups, and converged functionally to support powered flight, so they are analogous. Similarly, the wings of a sycamore maple seed and the wings of a bird are analogous but not homologous, as they develop from quite different structures. A structure can be homologous at one level, but only analogous at another. Pterosaur, bird and bat wings are analogous as wings, but homologous as forelimbs because the organ served as a forearm (not a wing) in the last common ancestor of tetrapods, and evolved in different ways in the three groups. Thus, in the pterosaurs, the "wing" involves both the forelimb and the hindlimb. Analogy is called homoplasy in cladistics, and convergent or parallel evolution in evolutionary biology. In cladistics Specialised terms are used in taxonomic research. Primary homology is a researcher's initial hypothesis based on similar structure or anatomical connections, suggesting that a character state in two or more taxa share is shared due to common ancestry. 
Primary homology may be conceptually broken down further: we may consider all of the states of the same character as "homologous" parts of a single, unspecified, transformation series. This has been referred to as topographical correspondence. For example, in an aligned DNA sequence matrix, all of the A, G, C, T or implied gaps at a given nucleotide site are homologous in this way. Character state identity is the hypothesis that the particular condition in two or more taxa is "the same" as far as our character coding scheme is concerned. Thus, two Adenines at the same aligned nucleotide site are hypothesized to be homologous unless that hypothesis is subsequently contradicted by other evidence. Secondary homology is implied by parsimony analysis, where a character state that arises only once on a tree is taken to be homologous. As implied in this definition, many cladists consider secondary homology to be synonymous with synapomorphy, a shared derived character or trait state that distinguishes a clade from other organisms. Shared ancestral character states, symplesiomorphies, represent either synapomorphies of a more inclusive group, or complementary states (often absences) that unite no natural group of organisms. For example, the presence of wings is a synapomorphy for pterygote insects, but a symplesiomorphy for holometabolous insects. Absence of wings in non-pterygote insects and other organisms is a complementary symplesiomorphy that unites no group (for example, absence of wings provides no evidence of common ancestry of silverfish, spiders and annelid worms). On the other hand, absence (or secondary loss) of wings is a synapomorphy for fleas. Patterns such as these lead many cladists to consider the concept of homology and the concept of synapomorphy to be equivalent. Some cladists follow the pre-cladistic definition of homology of Haas and Simpson, and view both synapomorphies and symplesiomorphies as homologous character states. In different taxa Homologies provide the fundamental basis for all biological classification, although some may be highly counter-intuitive. For example, deep homologies like the pax6 genes that control the development of the eyes of vertebrates and arthropods were unexpected, as the organs are anatomically dissimilar and appeared to have evolved entirely independently. In arthropods The embryonic body segments (somites) of different arthropod taxa have diverged from a simple body plan with many similar appendages which are serially homologous, into a variety of body plans with fewer segments equipped with specialised appendages. The homologies between these have been discovered by comparing genes in evolutionary developmental biology. Among insects, the stinger of the female honey bee is a modified ovipositor, homologous with ovipositors in other insects such as the Orthoptera, Hemiptera, and those Hymenoptera without stingers. In mammals The three small bones in the middle ear of mammals including humans, the malleus, incus, and stapes, are today used to transmit sound from the eardrum to the inner ear. The malleus and incus develop in the embryo from structures that form jaw bones (the quadrate and the articular) in lizards, and in fossils of lizard-like ancestors of mammals. Both lines of evidence show that these bones are homologous, sharing a common ancestor. Among the many homologies in mammal reproductive systems, ovaries and testicles are homologous. 
Rudimentary organs such as the human tailbone, now much reduced from their functional state, are readily understood as signs of evolution, the explanation being that they were cut down by natural selection from functioning organs when their functions were no longer needed, but make no sense at all if species are considered to be fixed. The tailbone is homologous to the tails of other primates. In plants Leaves, stems, and roots In many plants, defensive or storage structures are made by modifications of the development of primary leaves, stems, and roots. Leaves are variously modified from photosynthetic structures to form the insect-trapping pitchers of pitcher plants, the insect-trapping jaws of the Venus flytrap, and the spines of cactuses, all homologous. Certain compound leaves of flowering plants are partially homologous both to leaves and shoots, because their development has evolved from a genetic mosaic of leaf and shoot development. Flower parts The four types of flower parts, namely carpels, stamens, petals, and sepals, are homologous with and derived from leaves, as Goethe correctly noted in 1790. The development of these parts through a pattern of gene expression in the growing zones (meristems) is described by the ABC model of flower development. Each of the four types of flower parts is serially repeated in concentric whorls, controlled by a small number of genes acting in various combinations. Thus, A genes working alone result in sepal formation; A and B together produce petals; B and C together create stamens; C alone produces carpels. When none of the genes are active, leaves are formed. Two more groups of genes, D to form ovules and E for the floral whorls, complete the model. The genes are evidently ancient, as old as the flowering plants themselves. Developmental biology Developmental biology can identify homologous structures that arose from the same tissue in embryogenesis. For example, adult snakes have no legs, but their early embryos have limb-buds for hind legs, which are soon lost as the embryos develop. The implication that the ancestors of snakes had hind legs is confirmed by fossil evidence: the Cretaceous snake Pachyrhachis problematicus had hind legs complete with hip bones (ilium, pubis, ischium), thigh bone (femur), leg bones (tibia, fibula) and foot bones (calcaneum, astragalus) as in tetrapods with legs today. Sequence homology As with anatomical structures, sequence homology between protein or DNA sequences is defined in terms of shared ancestry. Two segments of DNA can have shared ancestry because of either a speciation event (orthologs) or a duplication event (paralogs). Homology among proteins or DNA is typically inferred from their sequence similarity. Significant similarity is strong evidence that two sequences are related by divergent evolution of a common ancestor. Alignments of multiple sequences are used to indicate which regions of each sequence are homologous. Homologous sequences are orthologous if they are descended from the same ancestral sequence separated by a speciation event: when a species diverges into two separate species, the copies of a single gene in the two resulting species are said to be orthologous. The term "ortholog" was coined in 1970 by the molecular evolutionist Walter Fitch. Homologous sequences are paralogous if they were created by a duplication event within the genome. For gene duplication events, if a gene in an organism is duplicated, the two copies are paralogous. 
They can shape the structure of whole genomes and thus explain genome evolution to a large extent. Examples include the Homeobox (Hox) genes in animals. These genes not only underwent gene duplications within chromosomes but also whole genome duplications. As a result, Hox genes in most vertebrates are spread across multiple chromosomes: the HoxA–D clusters are the best studied. Some sequences are homologous, but they have diverged so much that their sequence similarity is not sufficient to establish homology. However, many proteins have retained very similar structures, and structural alignment can be used to demonstrate their homology. In behaviour It has been suggested that some behaviours might be homologous, based either on sharing across related taxa or on common origins of the behaviour in an individual's development; however, the notion of homologous behavior remains controversial, largely because behavior is more prone to multiple realizability than other biological traits. For example, D. W. Rajecki and Randall C. Flanery, using data on humans and on nonhuman primates, argue that patterns of behaviour in dominance hierarchies are homologous across the primates. As with morphological features or DNA, shared similarity in behavior provides evidence for common ancestry. The hypothesis that a behavioral character is not homologous should be based on an incongruent distribution of that character with respect to other features that are presumed to reflect the true pattern of relationships. This is an application of Willi Hennig's auxiliary principle.
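The statement above that homology among DNA or protein sequences is inferred from sequence similarity can be made concrete with a deliberately simple Python sketch. Percent identity over an existing alignment is only one crude ingredient of such inference (real analyses rely on alignment algorithms, substitution matrices, and statistical significance); the function name and the toy sequences are invented for this example.

def percent_identity(aligned_a, aligned_b):
    # Percentage of aligned columns in which both sequences carry the same
    # residue; columns that are gaps in both sequences are skipped.
    # Assumes the two strings are already aligned and of equal length.
    assert len(aligned_a) == len(aligned_b)
    compared = matches = 0
    for x, y in zip(aligned_a, aligned_b):
        if x == "-" and y == "-":
            continue
        compared += 1
        if x == y:
            matches += 1
    return 100.0 * matches / compared if compared else 0.0

# Two made-up aligned DNA fragments:
print(round(percent_identity("ATGCC-GTA", "ATGCAAGTA"), 1))  # 77.8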
Biology and health sciences
Basics_4
Biology
142432
https://en.wikipedia.org/wiki/Homology%20%28mathematics%29
Homology (mathematics)
In mathematics, the term homology, originally introduced in algebraic topology, has three primary, closely-related usages. The most direct usage of the term is to take the homology of a chain complex, resulting in a sequence of abelian groups called homology groups. This operation, in turn, allows one to associate various named homologies or homology theories to various other types of mathematical objects. Lastly, since there are many homology theories for topological spaces that produce the same answer, one also often speaks of the homology of a topological space. (This latter notion of homology admits more intuitive descriptions for 1- or 2-dimensional topological spaces, and is sometimes referenced in popular mathematics.) There is also a related notion of the cohomology of a cochain complex, giving rise to various cohomology theories, in addition to the notion of the cohomology of a topological space. Homology of chain complexes To take the homology of a chain complex, one starts with a chain complex, which is a sequence of abelian groups C0, C1, C2, ... (whose elements are called chains) and group homomorphisms ∂n : Cn → Cn−1 (called boundary maps) such that the composition of any two consecutive maps is zero: ∂n ∘ ∂n+1 = 0. The nth homology group Hn of this chain complex is then the quotient group of cycles modulo boundaries, Hn = Zn / Bn, where the nth group of cycles is given by the kernel subgroup Zn = ker ∂n, and the nth group of boundaries is given by the image subgroup Bn = im ∂n+1. One can optionally endow chain complexes with additional structure, for example by additionally taking the groups Cn to be modules over a coefficient ring R, and taking the boundary maps ∂n to be R-module homomorphisms, resulting in homology groups that are also quotient modules. Tools from homological algebra can be used to relate homology groups of different chain complexes. Homology theories To associate a homology theory to other types of mathematical objects, one first gives a prescription for associating chain complexes to that object, and then takes the homology of such a chain complex. For the homology theory to be valid, all such chain complexes associated to the same mathematical object must have the same homology. The resulting homology theory is often named according to the type of chain complex prescribed. For example, singular homology, Morse homology, Khovanov homology, and Hochschild homology are respectively obtained from singular chain complexes, Morse complexes, Khovanov complexes, and Hochschild complexes. In other cases, such as for group homology, there are multiple common methods to compute the same homology groups. In the language of category theory, a homology theory is a type of functor from the category of the mathematical object being studied to the category of abelian groups and group homomorphisms, or more generally to the category corresponding to the associated chain complexes. One can also formulate homology theories as derived functors on appropriate abelian categories, measuring the failure of an appropriate functor to be exact. One can describe this latter construction explicitly in terms of resolutions, or more abstractly from the perspective of derived categories or model categories. Regardless of how they are formulated, homology theories help provide information about the structure of the mathematical objects to which they are associated, and can sometimes help distinguish different objects. Homology of a topological space Perhaps the most familiar usage of the term homology is for the homology of a topological space. 
For sufficiently nice topological spaces and compatible choices of coefficient rings, any homology theory satisfying the Eilenberg-Steenrod axioms yields the same homology groups as the singular homology (see below) of that topological space, with the consequence that one often simply refers to the "homology" of that space, instead of specifying which homology theory was used to compute the homology groups in question. For 1-dimensional topological spaces, probably the simplest homology theory to use is graph homology, which could be regarded as a 1-dimensional special case of simplicial homology, the latter of which involves a decomposition of the topological space into simplices. (Simplices are a generalization of triangles to arbitrary dimension; for example, an edge in a graph is homeomorphic to a one-dimensional simplex, and a triangle-based pyramid is a 3-simplex.) Simplicial homology can in turn be generalized to singular homology, which allows more general maps of simplices into the topological space. Replacing simplices with disks of various dimensions results in a related construction called cellular homology. There are also other ways of computing these homology groups, for example via Morse homology, or by taking the output of the Universal Coefficient Theorem when applied to a cohomology theory such as Čech cohomology or (in the case of real coefficients) De Rham cohomology. Inspirations for homology (informal discussion) One of the ideas that led to the development of homology was the observation that certain low-dimensional shapes can be topologically distinguished by examining their "holes." For instance, a figure-eight shape has more holes than a circle , and a 2-torus (a 2-dimensional surface shaped like an inner tube) has different holes from a 2-sphere (a 2-dimensional surface shaped like a basketball). Studying topological features such as these led to the notion of the cycles that represent homology classes (the elements of homology groups). For example, the two embedded circles in a figure-eight shape provide examples of one-dimensional cycles, or 1-cycles, and the 2-torus and 2-sphere represent 2-cycles. Cycles form a group under the operation of formal addition, which refers to adding cycles symbolically rather than combining them geometrically. Any formal sum of cycles is again called a cycle. Cycles and boundaries (informal discussion) Explicit constructions of homology groups are somewhat technical. As mentioned above, an explicit realization of the homology groups of a topological space is defined in terms of the cycles and boundaries of a chain complex associated to , where the type of chain complex depends on the choice of homology theory in use. These cycles and boundaries are elements of abelian groups, and are defined in terms of the boundary homomorphisms of the chain complex, where each is an abelian group, and the are group homomorphisms that satisfy for all . Since such constructions are somewhat technical, informal discussions of homology sometimes focus instead on topological notions that parallel some of the group-theoretic aspects of cycles and boundaries. For example, in the context of chain complexes, a boundary is any element of the image of the boundary homomorphism , for some . In topology, the boundary of a space is technically obtained by taking the space's closure minus its interior, but it is also a notion familiar from examples, e.g., the boundary of the unit disk is the unit circle, or more topologically, the boundary of is . 
Topologically, the boundary of the closed interval is given by the disjoint union , and with respect to suitable orientation conventions, the oriented boundary of is given by the union of a positively-oriented with a negatively oriented The simplicial chain complex analog of this statement is that . (Since is a homomorphism, this implies for any integer .) In the context of chain complexes, a cycle is any element of the kernel, for some . In other words, is a cycle if and only if . The closest topological analog of this idea would be a shape that has "no boundary," in the sense that its boundary is the empty set. For example, since , and have no boundary, one can associate cycles to each of these spaces. However, the chain complex notion of cycles (elements whose boundary is a "zero chain") is more general than the topological notion of a shape with no boundary. It is this topological notion of no boundary that people generally have in mind when they claim that cycles can intuitively be thought of as detecting holes. The idea is that for no-boundary shapes like , , and , it is possible in each case to glue on a larger shape for which the original shape is the boundary. For instance, starting with a circle , one could glue a 2-dimensional disk to that such that the is the boundary of that . Similarly, given a two-sphere , one can glue a ball to that such that the is the boundary of that . This phenomenon is sometimes described as saying that has a -shaped "hole" or that it could be "filled in" with a . More generally, any shape with no boundary can be "filled in" with a cone, since if a given space has no boundary, then the boundary of the cone on is given by , and so if one "filled in" by gluing the cone on onto , then would be the boundary of that cone. (For example, a cone on is homeomorphic to a disk whose boundary is that .) However, it is sometimes desirable to restrict to nicer spaces such as manifolds, and not every cone is homeomorphic to a manifold. Embedded representatives of 1-cycles, 3-cycles, and oriented 2-cycles all admit manifold-shaped holes, but for example the real projective plane and complex projective plane have nontrivial cobordism classes and therefore cannot be "filled in" with manifolds. On the other hand, the boundaries discussed in the homology of a topological space are different from the boundaries of "filled in" holes, because the homology of a topological space has to do with the original space , and not with new shapes built from gluing extra pieces onto . For example, any embedded circle in already bounds some embedded disk in , so such gives rise to a boundary class in the homology of . By contrast, no embedding of into one of the 2 lobes of the figure-eight shape gives a boundary, despite the fact that it is possible to glue a disk onto a figure-eight lobe. Homology groups Given a sufficiently-nice topological space , a choice of appropriate homology theory, and a chain complex associated to that is compatible with that homology theory, the th homology group is then given by the quotient group of -cycles (-dimensional cycles) modulo -dimensional boundaries. In other words, the elements of , called homology classes, are equivalence classes whose representatives are -cycles, and any two cycles are regarded as equal in if and only if they differ by the addition of a boundary. This also implies that the "zero" element of is given by the group of -dimensional boundaries, which also includes formal sums of such boundaries. 
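The notions of cycles and boundaries above, together with the quotient-group description of homology, can be made concrete with a small computation. The following Python/NumPy sketch (the function name betti_numbers is invented for this illustration) computes the ranks of the homology groups, the Betti numbers, of a finite chain complex with rational coefficients via rank–nullity; torsion phenomena, such as the Z/2 factor of the projective plane discussed in the next section, are invisible to this rational computation.

import numpy as np

def betti_numbers(dims, boundary_maps):
    # dims[n] is the rank of C_n; boundary_maps[n] is the matrix of the
    # boundary map C_n -> C_{n-1} (missing maps are treated as zero).
    # Over the rationals, b_n = dim ker(d_n) - rank(d_{n+1})
    #                         = dims[n] - rank(d_n) - rank(d_{n+1}).
    def rank(n):
        mat = boundary_maps.get(n)
        return 0 if mat is None else np.linalg.matrix_rank(np.asarray(mat, dtype=float))
    return [dims[n] - rank(n) - rank(n + 1) for n in range(len(dims))]

# Simplicial chain complex of a hollow triangle, a combinatorial model of the
# circle: three vertices v0, v1, v2 and oriented edges [v0,v1], [v1,v2], [v0,v2].
d1 = [[-1,  0, -1],
      [ 1, -1,  0],
      [ 0,  1,  1]]
print(betti_numbers([3, 3], {1: d1}))  # [1, 1]: one connected component, one 1-dimensional hole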
Informal examples The homology of a topological space X is a set of topological invariants of X represented by its homology groups H_0(X), H_1(X), H_2(X), …, where the k-th homology group H_k(X) describes, informally, the number of holes in X with a k-dimensional boundary. A 0-dimensional-boundary hole is simply a gap between two components. Consequently, H_0(X) describes the path-connected components of X. A one-dimensional sphere S^1 is a circle. It has a single connected component and a one-dimensional-boundary hole, but no higher-dimensional holes. The corresponding homology groups are given as H_0(S^1) = Z, H_1(S^1) = Z, and H_k(S^1) = {0} for k ≥ 2, where Z is the group of integers and {0} is the trivial group. The group H_1(S^1) = Z represents a finitely-generated abelian group, with a single generator representing the one-dimensional hole contained in a circle. A two-dimensional sphere S^2 has a single connected component, no one-dimensional-boundary holes, a two-dimensional-boundary hole, and no higher-dimensional holes. The corresponding homology groups are H_0(S^2) = Z, H_1(S^2) = {0}, H_2(S^2) = Z, and H_k(S^2) = {0} for k ≥ 3. In general, for an n-dimensional sphere S^n the homology groups are H_0(S^n) = Z, H_n(S^n) = Z, and H_k(S^n) = {0} for all other k. A two-dimensional ball B^2 is a solid disc. It has a single path-connected component, but in contrast to the circle, has no higher-dimensional holes. The corresponding homology groups are all trivial except for H_0(B^2) = Z. In general, for an n-dimensional ball B^n the only nontrivial homology group is H_0(B^n) = Z. The torus is defined as a product of two circles, T = S^1 × S^1. The torus has a single path-connected component, two independent one-dimensional holes (indicated by circles in red and blue) and one two-dimensional hole as the interior of the torus. The corresponding homology groups are H_0(T) = Z, H_1(T) = Z × Z, H_2(T) = Z, and H_k(T) = {0} for k ≥ 3. If the n-fold product of a topological space X is written as X^n, then in general, for an n-dimensional torus T^n = (S^1)^n, H_k(T^n) is a free abelian group of rank (n choose k), that is, H_k(T^n) ≅ Z^(n choose k) (see Torus#n-dimensional torus and Betti number#More examples for more details). The two independent 1-dimensional holes form independent generators in a finitely-generated abelian group, expressed as the product group Z × Z. For the projective plane P, a simple computation shows (where Z_2 is the cyclic group of order 2): H_0(P) = Z, H_1(P) = Z_2, and H_k(P) = {0} for k ≥ 2. H_0(P) = Z corresponds, as in the previous examples, to the fact that there is a single connected component. H_1(P) = Z_2 is a new phenomenon: intuitively, it corresponds to the fact that there is a single non-contractible "loop", but if we do the loop twice, it becomes contractible to zero. This phenomenon is called torsion. Construction of homology groups The following text describes a general algorithm for constructing the homology groups. It may be easier for the reader to look at some simple examples first: graph homology and simplicial homology. The general construction begins with an object such as a topological space X, on which one first defines a chain complex C(X) encoding information about X. A chain complex is a sequence of abelian groups or modules C_0, C_1, C_2, … connected by homomorphisms ∂_n : C_n → C_{n−1}, which are called boundary operators. That is, … → C_{n+1} → C_n → C_{n−1} → … → C_1 → C_0 → 0, where 0 denotes the trivial group and C_i = 0 for i < 0. It is also required that the composition of any two consecutive boundary operators be trivial. That is, for all n, ∂_n ∘ ∂_{n+1} = 0, i.e., the constant map sending every element of C_{n+1} to the group identity in C_{n−1}. The statement that the boundary of a boundary is trivial is equivalent to the statement that im(∂_{n+1}) ⊆ ker(∂_n), where im(∂_{n+1}) denotes the image of the boundary operator and ker(∂_n) its kernel. Elements of B_n(X) = im(∂_{n+1}) are called boundaries and elements of Z_n(X) = ker(∂_n) are called cycles. Since each chain group C_n is abelian, all its subgroups are normal. Then because ker(∂_n) is a subgroup of C_n, ker(∂_n) is abelian, and since im(∂_{n+1}) ⊆ ker(∂_n), therefore im(∂_{n+1}) is a normal subgroup of ker(∂_n). Then one can create the quotient group H_n(X) := ker(∂_n) / im(∂_{n+1}) = Z_n(X) / B_n(X), called the nth homology group of X. The elements of H_n(X) are called homology classes.
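For very small simplicial complexes this quotient can be computed directly from the boundary operators. The sketch below is an illustrative addition, assuming Python with NumPy; the hollow-triangle model of the circle and the helper names are choices of the example rather than standard notation. It builds the boundary matrices of a triangle, checks the defining identity ∂_n ∘ ∂_{n+1} = 0, and reads off the ranks of the homology groups from the ranks of those matrices.

    import numpy as np

    # The "hollow triangle" is a simplicial model of the circle: vertices
    # v0, v1, v2 and edges [v0,v1], [v1,v2], [v0,v2].  The boundary of an
    # edge [vi,vj] is the 0-chain vj - vi.
    d1 = np.array([[-1,  0, -1],    # row v0
                   [ 1, -1,  0],    # row v1
                   [ 0,  1,  1]])   # row v2; columns: [v0,v1], [v1,v2], [v0,v2]

    # Filling in the triangle adds the 2-simplex [v0,v1,v2], whose boundary
    # is [v1,v2] - [v0,v2] + [v0,v1].
    d2 = np.array([[1], [1], [-1]])

    assert not (d1 @ d2).any()      # boundary of a boundary: d1 composed with d2 is 0

    def betti(d_out, d_in, n_chains):
        """Rank of ker(d_out) / im(d_in), computed over the rationals."""
        rank_out = np.linalg.matrix_rank(d_out) if d_out.size else 0
        rank_in = np.linalg.matrix_rank(d_in) if d_in.size else 0
        return n_chains - rank_out - rank_in

    d0 = np.zeros((0, 3))           # C_0 maps to the trivial group
    no_faces = np.zeros((3, 0))     # hollow triangle: no 2-simplices
    print(betti(d0, d1, 3), betti(d1, no_faces, 3))   # 1 1: one component, one loop
    print(betti(d0, d1, 3), betti(d1, d2, 3))         # 1 0: filling in kills the loop

Over the integers one would also have to track torsion (as in the projective plane above), which rank computations over the rationals cannot detect; the Smith normal form mentioned later handles that case.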
Each homology class is an equivalence class over cycles and two cycles in the same homology class are said to be homologous. A chain complex is said to be exact if the image of the (n+1)th map is always equal to the kernel of the nth map. The homology groups of X therefore measure "how far" the chain complex associated to X is from being exact. The reduced homology groups of a chain complex C(X) are defined as homologies of the augmented chain complex where the boundary operator is for a combination of points which are the fixed generators of C0. The reduced homology groups coincide with for The extra in the chain complex represents the unique map from the empty simplex to X. Computing the cycle and boundary groups is usually rather difficult since they have a very large number of generators. On the other hand, there are tools which make the task easier. The simplicial homology groups Hn(X) of a simplicial complex X are defined using the simplicial chain complex C(X), with Cn(X) the free abelian group generated by the n-simplices of X. See simplicial homology for details. The singular homology groups Hn(X) are defined for any topological space X, and agree with the simplicial homology groups for a simplicial complex. Cohomology groups are formally similar to homology groups: one starts with a cochain complex, which is the same as a chain complex but whose arrows, now denoted point in the direction of increasing n rather than decreasing n; then the groups of cocycles and of follow from the same description. The nth cohomology group of X is then the quotient group in analogy with the nth homology group. Homology vs. homotopy The nth homotopy group of a topological space is the group of homotopy classes of basepoint-preserving maps from the -sphere to , under the group operation of concatenation. The most fundamental homotopy group is the fundamental group . For connected , the Hurewicz theorem describes a homomorphism called the Hurewicz homomorphism. For , this homomorphism can be complicated, but when , the Hurewicz homomorphism coincides with abelianization. That is, is surjective and its kernel is the commutator subgroup of , with the consequence that is isomorphic to the abelianization of . Higher homotopy groups are sometimes difficult to compute. For instance, the homotopy groups of spheres are poorly understood and are not known in general, in contrast to the straightforward description given above for the homology groups. For an example, suppose is the figure eight. As usual, its first homotopy group, or fundamental group, is the group of homotopy classes of directed loops starting and ending at a predetermined point (e.g. its center). It is isomorphic to the free group of rank 2, , which is not commutative: looping around the lefthand cycle and then around the righthand cycle is different from looping around the righthand cycle and then looping around the lefthand cycle. By contrast, the figure eight's first homology group is abelian. To express this explicitly in terms of homology classes of cycles, one could take the homology class of the lefthand cycle and the homology class of the righthand cycle as basis elements of , allowing us to write . Types of homology The different types of homology theory arise from functors mapping from various categories of mathematical objects to the category of chain complexes. 
In each case the composition of the functor from objects to chain complexes and the functor from chain complexes to homology groups defines the overall homology functor for the theory. Simplicial homology The motivating example comes from algebraic topology: the simplicial homology of a simplicial complex X. Here the chain group Cn is the free abelian group or free module whose generators are the n-dimensional oriented simplexes of X. The orientation is captured by ordering the complex's vertices and expressing an oriented simplex as an n-tuple of its vertices listed in increasing order (i.e. in the complex's vertex ordering, where is the th vertex appearing in the tuple). The mapping from Cn to Cn−1 is called the and sends the simplex to the formal sum which is considered 0 if This behavior on the generators induces a homomorphism on all of Cn as follows. Given an element , write it as the sum of generators where is the set of n-simplexes in X and the mi are coefficients from the ring Cn is defined over (usually integers, unless otherwise specified). Then define The dimension of the n-th homology of X turns out to be the number of "holes" in X at dimension n. It may be computed by putting matrix representations of these boundary mappings in Smith normal form. Singular homology Using simplicial homology example as a model, one can define a singular homology for any topological space X. A chain complex for X is defined by taking Cn to be the free abelian group (or free module) whose generators are all continuous maps from n-dimensional simplices into X. The homomorphisms ∂n arise from the boundary maps of simplices. Group homology In abstract algebra, one uses homology to define derived functors, for example the Tor functors. Here one starts with some covariant additive functor F and some module X. The chain complex for X is defined as follows: first find a free module and a surjective homomorphism Then one finds a free module and a surjective homomorphism Continuing in this fashion, a sequence of free modules and homomorphisms can be defined. By applying the functor F to this sequence, one obtains a chain complex; the homology of this complex depends only on F and X and is, by definition, the n-th derived functor of F, applied to X. A common use of group (co)homology is to classify the possible extension groups E which contain a given G-module M as a normal subgroup and have a given quotient group G, so that Other homology theories Borel–Moore homology Cellular homology Cyclic homology Hochschild homology Floer homology Intersection homology K-homology Khovanov homology Morse homology Persistent homology Steenrod homology Homology functors Chain complexes form a category: A morphism from the chain complex () to the chain complex () is a sequence of homomorphisms such that for all n. The n-th homology Hn can be viewed as a covariant functor from the category of chain complexes to the category of abelian groups (or modules). If the chain complex depends on the object X in a covariant manner (meaning that any morphism induces a morphism from the chain complex of X to the chain complex of Y), then the Hn are covariant functors from the category that X belongs to into the category of abelian groups (or modules). 
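The alternating-sign boundary formula used in the simplicial (and singular) theories above can also be checked mechanically. The short sketch below is an illustration, assuming Python; the dictionary representation of formal sums is an ad hoc choice of the example. It implements the map sending an oriented simplex to the signed sum of its faces and verifies that applying it twice gives the zero chain.

    def boundary(simplex):
        """Signed faces of an oriented simplex given as a tuple of vertices."""
        chain = {}
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]          # omit the i-th vertex
            chain[face] = chain.get(face, 0) + (-1) ** i
        return chain

    def boundary_of_chain(chain):
        """Extend the boundary operator linearly to formal sums of simplices."""
        out = {}
        for simplex, coeff in chain.items():
            for face, sign in boundary(simplex).items():
                out[face] = out.get(face, 0) + coeff * sign
        return {face: c for face, c in out.items() if c != 0}

    tetrahedron = (0, 1, 2, 3)                 # a single 3-simplex
    faces = boundary(tetrahedron)              # four oriented triangles, signs +1 and -1
    print(faces)
    print(boundary_of_chain(faces))            # {}: the boundary of a boundary vanishes

The same cancellation of signs is what makes the composite of any two consecutive boundary operators trivial, which is exactly the identity the chain-complex definition requires.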
The only difference between homology and cohomology is that in cohomology the chain complexes depend in a contravariant manner on X, and that therefore the homology groups (which are called cohomology groups in this context and denoted by Hn) form contravariant functors from the category that X belongs to into the category of abelian groups or modules. Properties If () is a chain complex such that all but finitely many An are zero, and the others are finitely generated abelian groups (or finite-dimensional vector spaces), then we can define the Euler characteristic (using the rank in the case of abelian groups and the Hamel dimension in the case of vector spaces). It turns out that the Euler characteristic can also be computed on the level of homology: and, especially in algebraic topology, this provides two ways to compute the important invariant for the object X which gave rise to the chain complex. Every short exact sequence of chain complexes gives rise to a long exact sequence of homology groups All maps in this long exact sequence are induced by the maps between the chain complexes, except for the maps The latter are called and are provided by the zig-zag lemma. This lemma can be applied to homology in numerous ways that aid in calculating homology groups, such as the theories of relative homology and Mayer-Vietoris sequences. Applications Application in pure mathematics Notable theorems proved using homology include the following: The Brouwer fixed point theorem: If f is any continuous map from the ball Bn to itself, then there is a fixed point with Invariance of domain: If U is an open subset of and is an injective continuous map, then is open and f is a homeomorphism between U and V. The Hairy ball theorem: any continuous vector field on the 2-sphere (or more generally, the 2k-sphere for any ) vanishes at some point. The Borsuk–Ulam theorem: any continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. (Two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.) Invariance of dimension: if non-empty open subsets and are homeomorphic, then Application in science and engineering In topological data analysis, data sets are regarded as a point cloud sampling of a manifold or algebraic variety embedded in Euclidean space. By linking nearest neighbor points in the cloud into a triangulation, a simplicial approximation of the manifold is created and its simplicial homology may be calculated. Finding techniques to robustly calculate homology using various triangulation strategies over multiple length scales is the topic of persistent homology. In sensor networks, sensors may communicate information via an ad-hoc network that dynamically changes in time. To understand the global context of this set of local measurements and communication paths, it is useful to compute the homology of the network topology to evaluate, for instance, holes in coverage. In dynamical systems theory in physics, Poincaré was one of the first to consider the interplay between the invariant manifold of a dynamical system and its topological invariants. Morse theory relates the dynamics of a gradient flow on a manifold to, for example, its homology. Floer homology extended this to infinite-dimensional manifolds. The KAM theorem established that periodic orbits can follow complex trajectories; in particular, they may form braids that can be investigated using Floer homology. 
In one class of finite element methods, boundary-value problems for differential equations involving the Hodge-Laplace operator may need to be solved on topologically nontrivial domains, for example, in electromagnetic simulations. In these simulations, solution is aided by fixing the cohomology class of the solution based on the chosen boundary conditions and the homology of the domain. FEM domains can be triangulated, from which the simplicial homology can be calculated. Software Various software packages have been developed for the purposes of computing homology groups of finite cell complexes. Linbox is a C++ library for performing fast matrix operations, including Smith normal form; it interfaces with both Gap and Maple. Chomp, CAPD::Redhom and Perseus are also written in C++. All three implement pre-processing algorithms based on simple-homotopy equivalence and discrete Morse theory to perform homology-preserving reductions of the input cell complexes before resorting to matrix algebra. Kenzo is written in Lisp, and in addition to homology it may also be used to generate presentations of homotopy groups of finite simplicial complexes. Gmsh includes a homology solver for finite element meshes, which can generate Cohomology bases directly usable by finite element software. Some non-homology-based discussions of surfaces Origins Homology theory can be said to start with the Euler polyhedron formula, or Euler characteristic. This was followed by Riemann's definition of genus and n-fold connectedness numerical invariants in 1857 and Betti's proof in 1871 of the independence of "homology numbers" from the choice of basis. Surfaces On the ordinary sphere , the curve b in the diagram can be shrunk to the pole, and even the equatorial great circle a can be shrunk in the same way. The Jordan curve theorem shows that any closed curve such as c can be similarly shrunk to a point. This implies that has trivial fundamental group, so as a consequence, it also has trivial first homology group. The torus has closed curves which cannot be continuously deformed into each other, for example in the diagram none of the cycles a, b or c can be deformed into one another. In particular, cycles a and b cannot be shrunk to a point whereas cycle c can. If the torus surface is cut along both a and b, it can be opened out and flattened into a rectangle or, more conveniently, a square. One opposite pair of sides represents the cut along a, and the other opposite pair represents the cut along b. The edges of the square may then be glued back together in different ways. The square can be twisted to allow edges to meet in the opposite direction, as shown by the arrows in the diagram. The various ways of gluing the sides yield just four topologically distinct surfaces: is the Klein bottle, which is a torus with a twist in it (In the square diagram, the twist can be seen as the reversal of the bottom arrow). It is a theorem that the re-glued surface must self-intersect (when immersed in Euclidean 3-space). Like the torus, cycles a and b cannot be shrunk while c can be. But unlike the torus, following b forwards right round and back reverses left and right, because b happens to cross over the twist given to one join. If an equidistant cut on one side of b is made, it returns on the other side and goes round the surface a second time before returning to its starting point, cutting out a twisted Möbius strip. 
Because local left and right can be arbitrarily re-oriented in this way, the surface as a whole is said to be non-orientable. The projective plane has both joins twisted. The uncut form, generally represented as the Boy surface, is visually complex, so a hemispherical embedding is shown in the diagram, in which antipodal points around the rim such as A and A′ are identified as the same point. Again, a is non-shrinkable while c is. If b were only wound once, it would also be non-shrinkable and reverse left and right. However it is wound a second time, which swaps right and left back again; it can be shrunk to a point and is homologous to c. Cycles can be joined or added together, as a and b on the torus were when it was cut open and flattened down. In the Klein bottle diagram, a goes round one way and −a goes round the opposite way. If a is thought of as a cut, then −a can be thought of as a gluing operation. Making a cut and then re-gluing it does not change the surface, so a + (−a) = 0. But now consider two a-cycles. Since the Klein bottle is nonorientable, you can transport one of them all the way round the bottle (along the b-cycle), and it will come back as −a. This is because the Klein bottle is made from a cylinder, whose a-cycle ends are glued together with opposite orientations. Hence 2a = a + a = a + (−a) = 0. This phenomenon is called torsion. Similarly, in the projective plane, following the unshrinkable cycle b round twice remarkably creates a trivial cycle which can be shrunk to a point; that is, b + b = 0. Because b must be followed around twice to achieve a zero cycle, the surface is said to have a torsion coefficient of 2. However, following a b-cycle around twice in the Klein bottle gives simply b + b = 2b, since this cycle lives in a torsion-free homology class. This corresponds to the fact that in the fundamental polygon of the Klein bottle, only one pair of sides is glued with a twist, whereas in the projective plane both sides are twisted. A square is a contractible topological space, which implies that it has trivial homology. Consequently, additional cuts disconnect it. The square is not the only shape in the plane that can be glued into a surface. Gluing opposite sides of an octagon, for example, produces a surface with two holes. In fact, all closed surfaces can be produced by gluing the sides of some polygon and all even-sided polygons (2n-gons) can be glued to make different manifolds. Conversely, a closed surface with n non-zero classes can be cut into a 2n-gon. Variations are also possible, for example a hexagon may also be glued to form a torus. The first recognisable theory of homology was published by Henri Poincaré in his seminal paper "Analysis situs", J. Ecole polytech. (2) 1. 1–121 (1895). The paper introduced homology classes and relations. The possible configurations of orientable cycles are classified by the Betti numbers of the manifold (Betti numbers are a refinement of the Euler characteristic). Classifying the non-orientable cycles requires additional information about torsion coefficients. The complete classification of 1- and 2-manifolds is given in the table.
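The torsion phenomena described above can be exhibited with a small integer computation. In the sketch below (an illustrative addition; the choice of triangle relations for the diagonally cut square and the use of SymPy's Smith normal form routine are assumptions of the example), the first homology of the Klein bottle is presented by the three edge classes a, b, c subject to one relation from each of the two triangles, and the Smith normal form of the relation matrix exposes the Z ⊕ Z/2 structure, matching the relation 2a = 0 derived above.

    from sympy import Matrix, ZZ
    from sympy.matrices.normalforms import smith_normal_form

    # Generators a, b, c (rows); one relation per triangle of the cut square
    # (columns), with one consistent choice of orientations:
    #   a + b - c = 0    and    a - b + c = 0
    relations = Matrix([[ 1,  1],
                        [ 1, -1],
                        [-1,  1]])

    print(smith_normal_form(relations, domain=ZZ))
    # The nonzero diagonal entries (invariant factors) are 1 and 2, so the
    # quotient Z^3 / relations is Z/1 x Z/2 x Z, that is Z plus Z/2: a free
    # class (b) together with an order-2 torsion class (a, since 2a = 0).

For the projective plane the analogous computation leaves only a single 2-torsion class in degree one and no free part, the torsion coefficient of 2 mentioned above.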
Mathematics
Geometry
null
142440
https://en.wikipedia.org/wiki/Climatology
Climatology
Climatology (from Greek , klima, "slope"; and , -logia) or climate science is the scientific study of Earth's climate, typically defined as weather conditions averaged over a period of at least 30 years. Climate concerns the atmospheric condition during an extended to indefinite period of time; weather is the condition of the atmosphere during a relative brief period of time. The main topics of research are the study of climate variability, mechanisms of climate changes and modern climate change. This topic of study is regarded as part of the atmospheric sciences and a subdivision of physical geography, which is one of the Earth sciences. Climatology includes some aspects of oceanography and biogeochemistry. The main methods employed by climatologists are the analysis of observations and modelling of the physical processes that determine climate. Short term weather forecasting can be interpreted in terms of knowledge of longer-term phenomena of climate, for instance climatic cycles such as the El Niño–Southern Oscillation (ENSO), the Madden–Julian oscillation (MJO), the North Atlantic oscillation (NAO), the Arctic oscillation (AO), the Pacific decadal oscillation (PDO), and the Interdecadal Pacific Oscillation (IPO). Climate models are used for a variety of purposes from studying the dynamics of the weather and climate system to predictions of future climate. History The Greeks began the formal study of climate; in fact, the word "climate" is derived from the Greek word klima, meaning "slope", referring to the slope or inclination of the Earth's axis. Arguably the most influential classic text concerning climate was On Airs, Water and Places written by Hippocrates about 400 BCE. This work commented on the effect of climate on human health and cultural differences between Asia and Europe. This idea that climate controls which populations excel depending on their climate, or climatic determinism, remained influential throughout history. Chinese scientist Shen Kuo (1031–1095) inferred that climates naturally shifted over an enormous span of time, after observing petrified bamboos found underground near Yanzhou (modern Yan'an, Shaanxi province), a dry-climate area unsuitable at that time for the growth of bamboo. The invention of thermometers and barometers during the Scientific Revolution allowed for systematic recordkeeping, that began as early as 1640–1642 in England. Early climate researchers include Edmund Halley, who published a map of the trade winds in 1686 after a voyage to the southern hemisphere. Benjamin Franklin (1706–1790) first mapped the course of the Gulf Stream for use in sending mail from North America to Europe. Francis Galton (1822–1911) invented the term anticyclone. Helmut Landsberg (1906–1985) fostered the use of statistical analysis in climatology. During the early 20th century, climatology mostly emphasized the description of regional climates. This descriptive climatology was mainly an applied science, giving farmers and other interested people statistics about what the normal weather was and how great chances were of extreme events. To do this, climatologists had to define a climate normal, or an average of weather and weather extremes over a period of typically 30 years. While scientists knew of past climate change such as the ice ages, the concept of climate as changing only very gradually was useful for descriptive climatology. 
This started to change during the decades that followed, and while the history of climate change science started earlier, climate change only became one of the main topics of study for climatologists during the 1970s and afterward. Subfields Various subtopics of climatology study different aspects of climate. There are different categorizations of the sub-topics of climatology. The American Meteorological Society for instance identifies descriptive climatology, scientific climatology and applied climatology as the three subcategories of climatology, a categorization based on the complexity and the purpose of the research. Applied climatologists apply their expertise to different industries such as manufacturing and agriculture. Paleoclimatology is the attempt to reconstruct and understand past climates by examining records such as ice cores and tree rings (dendroclimatology). Paleotempestology uses these same records to help determine hurricane frequency over millennia. Historical climatology is the study of climate as related to human history and is thus concerned mainly with the last few thousand years. Boundary-layer climatology concerns exchanges in water, energy and momentum near surfaces. Further identified subtopics are physical climatology, dynamic climatology, tornado climatology, regional climatology, bioclimatology, and synoptic climatology. The study of the hydrological cycle over long time scales is sometimes termed hydroclimatology, in particular when studying the effects of climate change on the water cycle. Methods The study of contemporary climates incorporates meteorological data accumulated over many years, such as records of rainfall, temperature and atmospheric composition. Knowledge of the atmosphere and its dynamics is also embodied in models, either statistical or mathematical, which help by integrating different observations and testing how well they match. Modeling is used for understanding past, present and potential future climates. Climate research is made difficult by the large scale, long time periods, and complex processes which govern climate. Climate is governed by physical principles which can be expressed as differential equations. These equations are coupled and nonlinear, so that approximate solutions are obtained by using numerical methods to create global climate models. Climate is sometimes modeled as a stochastic process but this is generally accepted as an approximation to processes that are otherwise too complicated to analyze. Climate data The collection of a long record of climate variables is essential for the study of climate. Climatology deals with the aggregate data that meteorologists have recorded. Scientists use both direct and indirect observations of the climate, from Earth observing satellites and scientific instrumentation such as a global network of thermometers, to prehistoric ice extracted from glaciers. As measuring technology changes over time, records of data often cannot be compared directly. As cities are generally warmer than the areas surrounding, urbanization has made it necessary to constantly correct data for this urban heat island effect. Models Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface, and ice. They are used for a variety of purposes from study of the dynamics of the weather and climate system to projections of future climate. 
All climate models balance, or very nearly balance, incoming energy as short wave (including visible) electromagnetic radiation to the Earth with outgoing energy as long wave (infrared) electromagnetic radiation from the Earth. Any unbalance results in a change of the average temperature of the Earth. Most climate models include the radiative effects of greenhouse gases such as carbon dioxide. These models predict a trend of increase of surface temperatures, as well as a more rapid increase of temperature at higher latitudes. Models can range from relatively simple to complex: A simple radiant heat transfer model that treats the Earth as a single point and averages outgoing energy. This can be expanded vertically (radiative-convective models), or horizontally. Coupled atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange. Earth system models further include the biosphere. Additionally, they are available with different resolutions ranging from >100 km to 1 km. High resolutions in global climate models are computational very demanding and only few global datasets exists. Examples are ICON or mechanistically downscaled data such as CHELSA (Climatologies at high resolution for the Earth's land surface areas). Topics of research Topics that climatologists study comprise three main categories: climate variability, mechanisms of climatic change, and modern changes of climate. Climatological processes Various factors affect the average state of the atmosphere at a particular location. For instance, midlatitudes will have a pronounced seasonal cycle of temperature whereas tropical regions show little variation of temperature over a year. Another major variable of climate is continentality: the distance to major water bodies such as oceans. Oceans act as a moderating factor, so that land close to it has typically less difference of temperature between winter and summer than areas further from it. The atmosphere interacts with other parts of the climate system, with winds generating ocean currents that transport heat around the globe. Climate classification Classification is an important method of simplifying complicated processes. Different climate classifications have been developed over the centuries, with the first ones in Ancient Greece. How climates are classified depends on what the application is. A wind energy producer will require different information (wind) in a classification than someone more interested in agriculture, for whom precipitation and temperature are more important. The most widely used classification, the Köppen climate classification, was developed during the late nineteenth century and is based on vegetation. It uses monthly data concerning temperature and precipitation. Climate variability There are different types of variability: recurring patterns of temperature or other climate variables. They are quantified with different indices. Much in the way the Dow Jones Industrial Average, which is based on the stock prices of 30 companies, is used to represent the fluctuations of stock prices in general, climate indices are used to represent the essential elements of climate. Climate indices are generally devised with the twin objectives of simplicity and completeness, and each index typically represents the status and timing of the climate factor it represents. 
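The "simple radiant heat transfer model that treats the Earth as a single point" mentioned above can be written down in a few lines. The sketch below is a minimal illustration, not part of the original text; the particular constants (a present-day solar constant of about 1361 W/m² and a planetary albedo of 0.3) and the use of the Stefan–Boltzmann law for the outgoing long-wave flux are the assumptions of the sketch.

    # Zero-dimensional energy balance: absorbed solar radiation equals
    # emitted long-wave (blackbody) radiation at equilibrium.
    SOLAR_CONSTANT = 1361.0      # W/m^2 at the top of the atmosphere
    ALBEDO = 0.30                # fraction of sunlight reflected back to space
    SIGMA = 5.670374419e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)

    def equilibrium_temperature(solar=SOLAR_CONSTANT, albedo=ALBEDO):
        # The factor 4 spreads the intercepted beam over the whole sphere.
        absorbed = (1.0 - albedo) * solar / 4.0
        return (absorbed / SIGMA) ** 0.25    # solve sigma * T**4 = absorbed

    print(round(equilibrium_temperature(), 1), "K")   # roughly 255 K

The roughly 255 K this balance gives is well below the observed global mean surface temperature of about 288 K; the difference is the greenhouse effect, which is why even simple models add the radiative effects of greenhouse gases, as noted above.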
By their very nature, indices are simple, and combine many details into a generalized, overall description of the atmosphere or ocean which can be used to characterize the factors which effect the global climate system. El Niño–Southern Oscillation (ENSO) is a coupled ocean-atmosphere phenomenon in the Pacific Ocean responsible for much of the global variability of temperature, and has a cycle between two and seven years. The North Atlantic oscillation is a mode of variability that is mainly contained to the lower atmosphere, the troposphere. The layer of atmosphere above, the stratosphere is also capable of creating its own variability, most importantly the Madden–Julian oscillation (MJO), which has a cycle of approximately 30 to 60 days. The Interdecadal Pacific oscillation can create changes in the Pacific Ocean and lower atmosphere on decadal time scales. Climate change Climate change occurs when changes of Earth's climate system result in new weather patterns that remain for an extended period of time. This duration of time can be as brief as a few decades to as long as millions of years. The climate system receives nearly all of its energy from the sun. The climate system also gives off energy to outer space. The balance of incoming and outgoing energy, and the passage of the energy through the climate system, determines Earth's energy budget. When the incoming energy is greater than the outgoing energy, earth's energy budget is positive and the climate system is warming. If more energy goes out, the energy budget is negative and earth experiences cooling. Climate change also influences the average sea level. Modern climate change is caused largely by the human emissions of greenhouse gas from the burning of fossil fuel which increases global mean surface temperatures. Increasing temperature is only one aspect of modern climate change, which also includes observed changes of precipitation, storm tracks and cloudiness. Warmer temperatures are causing further changes of the climate system, such as the widespread melt of glaciers, sea level rise and shifts of flora and fauna. Differences with meteorology In contrast to meteorology, which emphasises short term weather systems lasting no more than a few weeks, climatology studies the frequency and trends of those systems. It studies the periodicity of weather events over years to millennia, as well as changes of long-term average weather patterns in relation to atmospheric conditions. Climatologists study both the nature of climates – local, regional or global – and the natural or human-induced factors that cause climates to change. Climatology considers the past and can help predict future climate change. Phenomena of climatological interest include the atmospheric boundary layer, circulation patterns, heat transfer (radiative, convective and latent), interactions between the atmosphere and the oceans and land surface (particularly vegetation, land use and topography), and the chemical and physical composition of the atmosphere. Use in weather forecasting A relative difficult method of forecast, the analog technique requires remembering a previous weather event which is expected to be mimicked by an upcoming event. What makes it a difficult technique is that there is rarely a perfect analog for an event of the future. 
Some refer to this type of forecasting as pattern recognition, which remains a useful method of estimating rainfall over data voids such as oceans using knowledge of how satellite imagery relates to precipitation rates over land, as well as the forecasting of precipitation amounts and distribution of the future. A variation of this theme, used for medium range forecasting, is known as teleconnections, when systems in other locations are used to help determine the location of a system within the regime surrounding. One method of using teleconnections are by using climate indices such as ENSO-related phenomena.
Physical sciences
Climatology
null
142473
https://en.wikipedia.org/wiki/Plantation
Plantation
Plantations are farms specializing in cash crops, usually mainly planting a single crop, with perhaps ancillary areas for vegetables for eating and so on. Plantations, centered on a plantation house, grow crops including cotton, cannabis, coffee, tea, cocoa, sugar cane, opium, sisal, oil seeds, oil palms, fruits, rubber trees and forest trees. Protectionist policies and natural comparative advantage have sometimes contributed to determining where plantations are located. In modern use, the term usually refers only to large-scale estates. Before about 1860, it was the usual term for a farm of any size in the southern parts of British North America, with, as Noah Webster noted, "farm" becoming the usual term from about Maryland northward. The enslavement of people was the norm in Maryland and states southward. The plantations there were forced-labor farms. The term "plantation" was used in most British colonies but very rarely in the United Kingdom itself in this sense. There it was used mainly for tree plantations, areas artificially planted with trees, whether purely for commercial forestry, or partly for ornamental effect in gardens and parks, when it might also cover plantings of garden shrubs. Among the earliest examples of plantations were the latifundia of the Roman Empire, which produced large quantities of grain, wine, and olive oil for export. Plantation agriculture proliferated with the increase in international trade and the development of a worldwide economy that followed the expansion of European colonialism. By crop Tree plantations Tree plantations, in the United States often called tree farms, are established for the commercial production of timber or tree products such as palm oil, coffee, or rubber. Teak and bamboo plantations in India have given good results and an alternative crop solution to farmers of central India, where conventional farming was widespread. But due to the rising input costs of agriculture, many farmers have turned to teak and bamboo plantations, which require very little water (only during the first two years). Teak and bamboo have legal protection from theft. Bamboo, once planted, gives output for 50 years till flowering occurs. Teak requires 20 years to grow to full maturity and fetch returns. These may be established for watershed or soil protection. They are established for erosion control, landslide stabilization, and windbreaks. Such plantations are established to foster native species and promote forest regeneration on degraded lands as a tool of environmental restoration. Sugar Sugar plantations were highly valued in the Caribbean by the British and French colonists in the 17th and 18th centuries, and the use of sugar in Europe rose during this period. Sugarcane is still an important crop in Cuba. Sugar plantations also arose in countries such as Barbados and Cuba because of the natural endowments that they had. These natural endowments included soil conducive to growing sugar and a high marginal product of labor realized through the increasing number of enslaved people. Rubber Plantings of the Pará rubber tree (Hevea brasiliensis) are usually called plantations. Oil palm plants Oil palm agriculture is expanding rapidly across wet tropical regions and is usually developed at a plantation scale. Orchards Fruit orchards are sometimes considered plantations. Arable crops These include tobacco, sugarcane, pineapple, bell pepper, and cotton, especially in historical usage.
Before the rise of cotton in the American South, indigo and rice were also sometimes called plantation crops. Ecological impact Probably the most critical factor a plantation has on the local environment is the site where the plantation is established. In Brazil, coffee plantations would use slash-and-burn agriculture, tearing down rainforests and planting coffee trees that depleted the nutrients in soil. Once the soil had been sapped, growers would move on to another place. If a natural forest is cleared for a planted forest, then a reduction in biodiversity and loss of habitat will likely result. In some cases, their establishment may involve draining wetlands to replace mixed hardwoods that formerly predominated with pine species. If a plantation is established on abandoned agricultural land or highly degraded land, it can increase both habitat and biodiversity. A planted forest can be profitably established on lands that will not support agriculture or suffer from a lack of natural regeneration. The tree species used in a plantation are also an important factor. Where non-native varieties or species are grown, few native faunas are adapted to exploit these, and further biodiversity loss occurs. However, even non-native tree species may serve as corridors for wildlife and act as a buffer for native forests, reducing edge effect. Once a plantation is established, managing it becomes an important environmental factor. The most critical aspect of management is the rotation period. Plantations harvested on more extended rotation periods (30 years or more) can provide similar benefits to a naturally regenerated forest managed for wood production on a similar rotation. This is especially true if native species are used. In the case of exotic species, the habitat can be improved significantly if the impact is mitigated by measures such as leaving blocks of native species in the plantation or retaining corridors of natural forest. In Brazil, similar measures are required by government regulation. Slave plantation Plantation owners extensively used enslaved Africans to work on early plantations (such as tobacco, rice, cotton, hemp, and sugar plantations) in the American colonies and the United States, throughout the Caribbean, the Americas, and in European-occupied areas of Africa. In modern times, the low wages typically paid to plantation workers are the basis of plantation profitability in some areas. In more recent times, overt slavery has been replaced by para-slavery or slavery-in-kind, including the sharecropping system, and even that has been severely reduced. At its most extreme, workers are in "debt bondage": they must work to pay off a debt at such punitive interest rates that it may never be paid off. Others work unreasonably long hours and are paid subsistence wages that (in practice) may only be spent in the company store. In Brazil, a sugarcane plantation was termed an engenho ("engine"), and the 17th-century English usage for organized colonial production was "factory." Such colonial social and economic structures are discussed at Plantation economy. Sugar workers on plantations in Cuba and elsewhere in the Caribbean lived in company towns known as bateyes. American South Society and culture Fishing When Newfoundland was colonized by England in 1610, the original colonists were called "planters", and their fishing rooms were known as "fishing plantations". These terms were used well into the 20th century. 
The following three plantations are maintained by the Government of Newfoundland and Labrador as provincial heritage sites: Sea-Forest Plantation was a 17th-century fishing plantation established at Cuper's Cove (present-day Cupids) under a royal charter issued by King James I. Mockbeggar Plantation is an 18th-century fishing plantation at Bonavista. Pool Plantation a 17th-century fishing plantation maintained by Sir David Kirke and his heirs at Ferryland. The plantation was destroyed by French invaders in 1696. Other fishing plantations: Bristol's Hope Plantation, a 17th-century fishing plantation established at Harbour Grace, created by the Bristol Society of Merchant-Adventurers. Benger Plantation, an 18th-century fishing plantation maintained by James Benger and his heirs at Ferryland. It was built on the site of a Georgia plantation. Piggeon's Plantation, an 18th-century fishing plantation maintained by Ellias Piggeon at Ferryland.
Technology
Agriculture_2
null
142475
https://en.wikipedia.org/wiki/Papermaking
Papermaking
Papermaking is the manufacture of paper and cardboard, which are used widely for printing, writing, and packaging, among many other purposes. Today almost all paper is made using industrial machinery, while handmade paper survives as a specialized craft and a medium for artistic expression. In papermaking, a dilute suspension consisting mostly of separate cellulose fibres in water is drained through a sieve-like screen, so that a mat of randomly interwoven fibres is laid down. Water is further removed from this sheet by pressing, sometimes aided by suction or vacuum, or heating. Once dry, a generally flat, uniform and strong sheet of paper is achieved. Before the invention and current widespread adoption of automated machinery, all paper was made by hand, formed or laid one sheet at a time by specialized laborers. Even today those who make paper by hand use tools and technologies quite similar to those existing hundreds of years ago, as originally developed in China and other regions of Asia, or those further modified in Europe. Handmade paper is still appreciated for its distinctive uniqueness and the skilled craft involved in making each sheet, in contrast with the higher degree of uniformity and perfection at lower prices achieved among industrial products. History The word "paper" is etymologically derived from papyrus, Ancient Greek for the Cyperus papyrus plant. Papyrus is a thick, paper-like material produced from the pith of the Cyperus papyrus plant which was used in ancient Egypt and other Mediterranean societies for writing long before paper was used in China. Papyrus is prepared by cutting off thin ribbon-like strips of the pith (interior) of the Cyperus papyrus plant and then laying out the strips side-by-side to make a sheet. A second layer is then placed on top, with the strips running perpendicular to the first. The two layers are then pounded together using a mallet to make a sheet. The result is very strong, but has an uneven surface, especially at the edges of the strips. When used in scrolls, repeated rolling and unrolling causes the strips to come apart again, typically along vertical lines. This effect can be seen in many ancient papyrus documents. Hemp paper had been used in China for wrapping and padding since the eighth century BC. Paper with legible Chinese writings on it has been dated to 8 BC. The traditional inventor attribution is of Cai Lun, an official attached to the Imperial court during the Han dynasty (202 BC – 220 CE), said to have invented paper about 105 CE using mulberry and other bast fibres along with fishnets, old rags, and hemp waste. Paper used as a writing medium had become widespread by the 3rd century and, by the 6th century, toilet paper was starting to be used in China as well. During the Tang dynasty (618–907 CE) paper was folded and sewn into square bags to preserve the flavour of tea, while the later Song dynasty (960–1279 CE) was the first government to issue paper-printed money. In the 8th century, papermaking spread to the Islamic world, where the process was refined, and machinery was designed for bulk manufacturing. Production began in Samarkand, Baghdad, Damascus, Cairo, Morocco, and then Muslim Spain. In Baghdad, papermaking was under the supervision of the Grand Vizier Ja'far ibn Yahya. Muslims invented a method to make a thicker sheet of paper. This innovation helped transform papermaking from an art into a major industry. 
The earliest use of water-powered mills in paper production, specifically the use of pulp mills for preparing the pulp for papermaking, dates back to Samarkand in the 8th century. The earliest references to paper mills also come from the medieval Islamic world, where they were first noted in the 9th century by Arabic geographers in Damascus. Traditional papermaking in Asia uses the inner bark fibers of plants. This fiber is soaked, cooked, rinsed and traditionally hand-beaten to form the paper pulp. The long fibers are layered to form strong, translucent sheets of paper. In Eastern Asia, three traditional fibers are abaca, kōzo and gampi. In the Himalayas, paper is made from the lokta plant. This paper is used for calligraphy, printing, book arts, and three-dimensional work, including origami. In Europe, papermaking moulds using metallic wire were developed, and features like the watermark were well established by 1300 CE, while hemp and linen rags were the main source of pulp, cotton eventually taking over after Southern plantations made that product in large quantities. Papermaking was originally not popular in Europe due to not having many advantages over papyrus and parchment. It was not until the 15th century with the invention of the movable type of printing and its demand for paper that many paper mills entered production, and papermaking became an industry. Modern papermaking began in the early 19th century in Europe with the development of the Fourdrinier machine. This machine produces a continuous roll of paper rather than individual sheets. These machines are large. Some produce paper 150 meters in length and 10 meters wide. They can produce paper at a rate of 100 km/h. In 1844, Canadian Charles Fenerty and German Friedrich Gottlob Keller had invented the machine and associated process to make use of wood pulp in papermaking. This innovation ended the nearly 2,000-year use of pulped rags and start a new era for the production of newsprint and eventually almost all paper was made out of pulped wood. Manual Papermaking, regardless of the scale on which it is done, involves making a dilute suspension of fibres in water, called "furnish", and forcing this suspension to drain through a screen, to produce a mat of interwoven fibres. Water is removed from this mat of fibres using a press. The method of manual papermaking changed very little over time, despite advances in technologies. The process of manufacturing handmade paper can be generalized into five steps: Separating the useful fibre from the rest of raw materials. (e.g. cellulose from wood, cotton, etc.) Beating down the fibre into pulp Adjusting the colour, mechanical, chemical, biological, and other properties of the paper by adding special chemical premixes Screening the resulting solution Pressing and drying to get the actual paper Screening the fibre involves using a mesh made from non-corroding and inert material, such as brass, stainless steel or a synthetic fibre, which is stretched in a paper mould, a wooden frame similar to that of a window. The size of the paper is governed by the open area of the frame. The mould is then completely submerged in the furnish, then pulled, shaken and drained, forming a uniform coating on the screen. Excess water is then removed, the wet mat of fibre laid on top of a damp cloth or felt in a process called "couching". The process is repeated for the required number of sheets. This stack of wet mats is then pressed in a hydraulic press. 
The fairly damp fibre is then dried using a variety of methods, such as vacuum drying or simply air drying. Sometimes, the individual sheet is rolled to flatten, harden, and refine the surface. Finally, the paper is then cut to the desired shape or the standard shape (A4, letter, legal, etc.) and packed. The wooden frame is called a "deckle". The deckle leaves the edges of the paper slightly irregular and wavy, called "deckle edges", one of the indications that the paper was made by hand. Deckle-edged paper is occasionally mechanically imitated today to create the impression of old-fashioned luxury. The impressions in paper caused by the wires in the screen that run sideways are called "laid lines" and the impressions made, usually from top to bottom, by the wires holding the sideways wires together are called "chain lines". Watermarks are created by weaving a design into the wires in the mould. Handmade paper generally folds and tears more evenly along the laid lines. The International Association of Hand Papermakers and Paper Artists (IAPMA) is the world-leading association for handmade paper artists. Handmade paper is also prepared in laboratories to study papermaking and in paper mills to check the quality of the production process. The "handsheets" made according to TAPPI Standard T 205 are circular sheets 15.9 cm (6.25 in) in diameter and are tested for paper characteristics such as brightness, strength and degree of sizing. Paper made from other fibers, cotton being the most common, tends to be valued higher than wood-based paper. Industrial A modern paper mill is divided into several sections, roughly corresponding to the processes involved in making handmade paper. Pulp is refined and mixed in water with other additives to make a pulp slurry. The head-box of the paper machine called Fourdrinier machine distributes the slurry onto a moving continuous screen, water drains from the slurry by gravity or under vacuum, the wet paper sheet goes through presses and dries, and finally rolls into large rolls. The outcome often weighs several tons. Another type of paper machine, invented by John Dickinson in 1809, makes use of a cylinder mould that rotates while partially immersed in a vat of dilute pulp. The pulp is picked up by the wire mesh and covers the mould as it rises out of the vat. A couch roller is pressed against the mould to smooth out the pulp, and picks the wet sheet off the mould. Papermaking continues to be of concern from an environmental perspective, due to its use of harsh chemicals, its need for large amounts of water, and the resulting contamination risks, as well as the carbon sequestration lost by deforestation caused by clearcutting the trees used as the primary source of wood pulp. Notable papermakers While papermaking was considered a lifework, exclusive profession for most of its history, the term "notable papermakers" is often not strictly limited to those who actually make paper. Especially in the hand papermaking field there is currently an overlap of certain celebrated paper art practitioners with their other artistic pursuits, while in academia the term may be applied to those conducting research, education, or conservation of books and paper artifacts. In the industrial field it tends to overlap with science, technology and engineering, and often with management of the pulp and paper business itself. Some well-known and recognized papermakers have found fame in other fields, to the point that their papermaking background is almost forgotten. 
One of the most notable examples might be that of the first humans that achieved flight, the Montgolfier brothers, where many accounts barely mention the paper mill their family owned, although paper used in their balloons did play a relevant role in their success, as probably did their familiarity with this light and strong material. Key inventors include James Whatman, Henry Fourdrinier, Heinrich Voelter and Carl Daniel Ekman, among others. By the mid-19th century, making paper by hand was extinct in the United States. By 1912, fine book printer and publisher, Dard Hunter had reestablished the craft of fine hand paper making but by the 1930s the craft had lapsed in interest again. When artist Douglass Howell returned to New York City after serving in World War II, he established himself as a fine art printmaker and discovered that art paper was in short supply. During the 1940s and 1950s, Howell started reading Hunter's books on paper making, as well as he learned about hand paper making history, conducted paper making research, and learned about printed books.
Technology
Materials
null
142488
https://en.wikipedia.org/wiki/Harmonic%20series%20%28mathematics%29
Harmonic series (mathematics)
In mathematics, the harmonic series is the infinite series formed by summing all positive unit fractions: 1 + 1/2 + 1/3 + 1/4 + ⋯ = Σ_{n=1}^∞ 1/n. The first n terms of the series sum to approximately ln n + γ, where ln is the natural logarithm and γ ≈ 0.577 is the Euler–Mascheroni constant. Because the logarithm has arbitrarily large values, the harmonic series does not have a finite limit: it is a divergent series. Its divergence was proven in the 14th century by Nicole Oresme using a precursor to the Cauchy condensation test for the convergence of infinite series. It can also be proven to diverge by comparing the sum to an integral, according to the integral test for convergence. Applications of the harmonic series and its partial sums include Euler's proof that there are infinitely many prime numbers, the analysis of the coupon collector's problem on how many random trials are needed to provide a complete range of responses, the connected components of random graphs, the block-stacking problem on how far over the edge of a table a stack of blocks can be cantilevered, and the average case analysis of the quicksort algorithm. History The name of the harmonic series derives from the concept of overtones or harmonics in music: the wavelengths of the overtones of a vibrating string are 1/2, 1/3, 1/4, etc., of the string's fundamental wavelength. Every term of the harmonic series after the first is the harmonic mean of the neighboring terms, so the terms form a harmonic progression; the phrases harmonic mean and harmonic progression likewise derive from music. Beyond music, harmonic sequences have also had a certain popularity with architects. This was so particularly in the Baroque period, when architects used them to establish the proportions of floor plans, of elevations, and to establish harmonic relationships between both interior and exterior architectural details of churches and palaces. The divergence of the harmonic series was first proven in 1350 by Nicole Oresme. Oresme's work, and the contemporaneous work of Richard Swineshead on a different series, marked the first appearance of infinite series other than the geometric series in mathematics. However, this achievement fell into obscurity. Additional proofs were published in the 17th century by Pietro Mengoli and by Jacob Bernoulli. Bernoulli credited his brother Johann Bernoulli for finding the proof, and it was later included in Johann Bernoulli's collected works. The partial sums of the harmonic series were named harmonic numbers, and given their usual notation H_n, in 1968 by Donald Knuth. Definition and divergence The harmonic series is the infinite series Σ_{n=1}^∞ 1/n = 1 + 1/2 + 1/3 + 1/4 + ⋯ in which the terms are all of the positive unit fractions. It is a divergent series: as more terms of the series are included in partial sums of the series, the values of these partial sums grow arbitrarily large, beyond any finite limit. Because it is a divergent series, it should be interpreted as a formal sum, an abstract mathematical expression combining the unit fractions, rather than as something that can be evaluated to a numeric value. There are many different proofs of the divergence of the harmonic series, surveyed in a 2006 paper by S. J. Kifowit and T. A. Stamps. Two of the best-known are listed below.
Comparison test One way to prove divergence is to compare the harmonic series with another divergent series, where each denominator is replaced with the next-largest power of two: Grouping equal terms shows that the second series diverges (because every grouping of convergent series is only convergent): Because each term of the harmonic series is greater than or equal to the corresponding term of the second series (and the terms are all positive), and since the second series diverges, it follows (by the comparison test) that the harmonic series diverges as well. The same argument proves more strongly that, for every positive This is the original proof given by Nicole Oresme in around 1350. The Cauchy condensation test is a generalization of this argument. Integral test It is possible to prove that the harmonic series diverges by comparing its sum with an improper integral. Specifically, consider the arrangement of rectangles shown in the figure to the right. Each rectangle is 1 unit wide and units high, so if the harmonic series converged then the total area of the rectangles would be the sum of the harmonic series. The curve stays entirely below the upper boundary of the rectangles, so the area under the curve (in the range of from one to infinity that is covered by rectangles) would be less than the area of the union of the rectangles. However, the area under the curve is given by a divergent improper integral, Because this integral does not converge, the sum cannot converge either. In the figure to the right, shifting each rectangle to the left by 1 unit, would produce a sequence of rectangles whose boundary lies below the curve rather than above it. This shows that the partial sums of the harmonic series differ from the integral by an amount that is bounded above and below by the unit area of the first rectangle: Generalizing this argument, any infinite sum of values of a monotone decreasing positive function (like the harmonic series) has partial sums that are within a bounded distance of the values of the corresponding integrals. Therefore, the sum converges if and only if the integral over the same range of the same function converges. When this equivalence is used to check the convergence of a sum by replacing it with an easier integral, it is known as the integral test for convergence. Partial sums Adding the first terms of the harmonic series produces a partial sum, called a harmonic number and Growth rate These numbers grow very slowly, with logarithmic growth, as can be seen from the integral test. More precisely, by the Euler–Maclaurin formula, where is the Euler–Mascheroni constant and which approaches 0 as goes to infinity. Divisibility No harmonic numbers are integers except for One way to prove that is not an integer is to consider the highest power of two in the range from If is the least common multiple of the numbers from then can be rewritten as a sum of fractions with equal denominators in which only one of the numerators, is odd and the rest are even, and is itself even. Therefore, the result is a fraction with an odd numerator and an even denominator, which cannot be an integer. More generally, any sequence of consecutive integers has a unique member divisible by a greater power of two than all the other sequence members, from which it follows by the same argument that no two harmonic numbers differ by an integer. 
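The non-integrality and the logarithmic growth of the harmonic numbers described above can be checked directly. The following Python sketch is illustrative rather than part of the article; the chosen values of n and the ten-digit value of γ are convenient assumptions. It computes H_n exactly as a fraction, confirms that it is not an integer for the tested n > 1, and compares its value with ln(n) + γ.
from fractions import Fraction
import math

gamma = 0.5772156649  # Euler–Mascheroni constant (approximate value)

def harmonic_number_exact(n):
    """Exact partial sum H_n = 1 + 1/2 + ... + 1/n as a rational number."""
    return sum((Fraction(1, k) for k in range(1, n + 1)), Fraction(0))

for n in (2, 10, 25):
    h = harmonic_number_exact(n)
    assert h.denominator != 1, "H_n should not be an integer for n > 1"
    # Exact value, floating-point value, and the logarithmic approximation ln(n) + gamma
    print(n, h, round(float(h), 4), round(math.log(n) + gamma, 4))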
Another proof that the harmonic numbers are not integers observes that the denominator of must be divisible by all prime numbers greater than and less than or equal to , and uses Bertrand's postulate to prove that this set of primes is non-empty. The same argument implies more strongly that, except for , , and , no harmonic number can have a terminating decimal representation. It has been conjectured that every prime number divides the numerators of only a finite subset of the harmonic numbers, but this remains unproven. Interpolation The digamma function is defined as the logarithmic derivative of the gamma function Just as the gamma function provides a continuous interpolation of the factorials, the digamma function provides a continuous interpolation of the harmonic numbers, in the sense that This equation can be used to extend the definition to harmonic numbers with rational indices. Applications Many well-known mathematical problems have solutions involving the harmonic series and its partial sums. Crossing a desert The jeep problem or desert-crossing problem is included in a 9th-century problem collection by Alcuin, Propositiones ad Acuendos Juvenes (formulated in terms of camels rather than jeeps), but with an incorrect solution. The problem asks how far into the desert a jeep can travel and return, starting from a base with loads of fuel, by carrying some of the fuel into the desert and leaving it in depots. The optimal solution involves placing depots spaced at distances from the starting point and each other, where is the range of distance that the jeep can travel with a single load of fuel. On each trip out and back from the base, the jeep places one more depot, refueling at the other depots along the way, and placing as much fuel as it can in the newly placed depot while still leaving enough for itself to return to the previous depots and the base. Therefore, the total distance reached on the th trip is where is the harmonic number. The divergence of the harmonic series implies that crossings of any length are possible with enough fuel. For instance, for Alcuin's version of the problem, : a camel can carry 30 measures of grain and can travel one leuca while eating a single measure, where a leuca is a unit of distance roughly equal to . The problem has : there are 90 measures of grain, enough to supply three trips. For the standard formulation of the desert-crossing problem, it would be possible for the camel to travel leucas and return, by placing a grain storage depot 5 leucas from the base on the first trip and 12.5 leucas from the base on the second trip. However, Alcuin instead asks a slightly different question, how much grain can be transported a distance of 30 leucas without a final return trip, and either strands some camels in the desert or fails to account for the amount of grain consumed by a camel on its return trips. Stacking blocks In the block-stacking problem, one must place a pile of identical rectangular blocks, one per layer, so that they hang as far as possible over the edge of a table without falling. The top block can be placed with of its length extending beyond the next lower block. If it is placed in this way, the next block down needs to be placed with at most of its length extending beyond the next lower block, so that the center of mass of the top two block is supported and they do not topple. The third block needs to be placed with at most of its length extending beyond the next lower block, and so on. 
In this way, it is possible to place the blocks in such a way that they extend lengths beyond the table, where is the harmonic number. The divergence of the harmonic series implies that there is no limit on how far beyond the table the block stack can extend. For stacks with one block per layer, no better solution is possible, but significantly more overhang can be achieved using stacks with more than one block per layer. Counting primes and divisors In 1737, Leonhard Euler observed that, as a formal sum, the harmonic series is equal to an Euler product in which each term comes from a prime number: where denotes the set of prime numbers. The left equality comes from applying the distributive law to the product and recognizing the resulting terms as the prime factorizations of the terms in the harmonic series, and the right equality uses the standard formula for a geometric series. The product is divergent, just like the sum, but if it converged one could take logarithms and obtain Here, each logarithm is replaced by its Taylor series, and the constant on the right is the evaluation of the convergent series of terms with exponent greater than one. It follows from these manipulations that the sum of reciprocals of primes, on the right hand of this equality, must diverge, for if it converged these steps could be reversed to show that the harmonic series also converges, which it does not. An immediate corollary is that there are infinitely many prime numbers, because a finite sum cannot diverge. Although Euler's work is not considered adequately rigorous by the standards of modern mathematics, it can be made rigorous by taking more care with limits and error bounds. Euler's conclusion that the partial sums of reciprocals of primes grow as a double logarithm of the number of terms has been confirmed by later mathematicians as one of Mertens' theorems, and can be seen as a precursor to the prime number theorem. Another problem in number theory closely related to the harmonic series concerns the average number of divisors of the numbers in a range from 1 to , formalized as the average order of the divisor function, The operation of rounding each term in the harmonic series to the next smaller integer multiple of causes this average to differ from the harmonic numbers by a small constant, and Peter Gustav Lejeune Dirichlet showed more precisely that the average number of divisors is (expressed in big O notation). Bounding the final error term more precisely remains an open problem, known as Dirichlet's divisor problem. Collecting coupons Several common games or recreations involve repeating a random selection from a set of items until all possible choices have been selected; these include the collection of trading cards and the completion of parkrun bingo, in which the goal is to obtain all 60 possible numbers of seconds in the times from a sequence of running events. More serious applications of this problem include sampling all variations of a manufactured product for its quality control, and the connectivity of random graphs. In situations of this form, once there are items remaining to be collected out of a total of equally-likely items, the probability of collecting a new item in a single random choice is and the expected number of random choices needed until a new item is collected Summing over all values of from shows that the total expected number of random choices needed to collect all items where is the harmonic number. 
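Two results from this passage, the H_n/2 overhang of the one-block-per-layer stack and the n·H_n expectation in the coupon collector's problem, lend themselves to a quick numerical illustration. The Python sketch below is not from the article; the block counts, the trial count and the random seed are arbitrary choices.
import random

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

# Block stacking: the classical one-block-per-layer stack of n blocks can overhang
# about H_n / 2 block-lengths, so the overhang grows without bound, but only slowly.
for n in (4, 52, 1000):
    print(n, round(harmonic(n) / 2, 3))

# Coupon collector: the expected number of draws to see all n equally likely items is n * H_n.
def simulated_draws(n, trials, rng):
    total = 0
    for _ in range(trials):
        seen, draws = set(), 0
        while len(seen) < n:
            seen.add(rng.randrange(n))
            draws += 1
        total += draws
    return total / trials

n = 60  # the 60 possible parkrun "bingo" seconds mentioned above
print(round(n * harmonic(n), 1), round(simulated_draws(n, 2000, random.Random(0)), 1))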
Analyzing algorithms The quicksort algorithm for sorting a set of items can be analyzed using the harmonic numbers. The algorithm operates by choosing one item as a "pivot", comparing it to all the others, and recursively sorting the two subsets of items whose comparison places them before the pivot and after the pivot. In either its average-case complexity (with the assumption that all input permutations are equally likely) or in its expected time analysis of worst-case inputs with a random choice of pivot, all of the items are equally likely to be chosen as the pivot. For such cases, one can compute the probability that two items are ever compared with each other, throughout the recursion, as a function of the number of other items that separate them in the final sorted order. If items and are separated by other items, then the algorithm will make a comparison between and only when, as the recursion progresses, it picks or as a pivot before picking any of the other items between them. Because each of these items is equally likely to be chosen first, this happens with probability . The total expected number of comparisons, which controls the total running time of the algorithm, can then be calculated by summing these probabilities over all pairs, giving The divergence of the harmonic series corresponds in this application to the fact that, in the comparison model of sorting used for quicksort, it is not possible to sort in linear time. Related series Alternating harmonic series The series is known as the alternating harmonic series. It is conditionally convergent by the alternating series test, but not absolutely convergent. Its sum is the natural logarithm of 2. More precisely, the asymptotic expansion of the series begins as This results from the equality and the Euler–Maclaurin formula. Using alternating signs with only odd unit fractions produces a related series, the Leibniz formula for Riemann zeta function The Riemann zeta function is defined for real by the convergent series which for would be the harmonic series. It can be extended by analytic continuation to a holomorphic function on all complex numbers where the extended function has a simple pole. Other important values of the zeta function include the solution to the Basel problem, Apéry's constant proved by Roger Apéry to be an irrational number, and the "critical line" of complex numbers with conjectured by the Riemann hypothesis to be the only values other than negative integers where the function can be zero. Random harmonic series The random harmonic series is where the values are independent and identically distributed random variables that take the two values and with equal It converges with probability 1, as can be seen by using the Kolmogorov three-series theorem or of the closely related Kolmogorov maximal inequality. The sum of the series is a random variable whose probability density function is for values between and decreases to near-zero for values greater or less Intermediate between these ranges, at the the probability density is for a nonzero but very small value Depleted harmonic series The depleted harmonic series where all of the terms in which the digit 9 appears anywhere in the denominator are removed can be shown to converge to the value . In fact, when all the terms containing any particular string of digits (in any base) are removed, the series converges.
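The quicksort analysis described in this section can likewise be illustrated numerically. The following Python sketch is illustrative only; the input size, number of trials and seed are arbitrary, and the helper names are not from the article. It counts comparisons in a randomized quicksort, evaluates the pair sum described in the text, and checks both against the equivalent closed form 2(n + 1)H_n - 4n.
import random

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def quicksort_comparisons(items, rng):
    """Comparisons made by quicksort with a uniformly random pivot (distinct items)."""
    if len(items) <= 1:
        return 0
    pivot = items[rng.randrange(len(items))]
    smaller = [x for x in items if x < pivot]
    larger = [x for x in items if x > pivot]
    # The pivot is compared once with each of the other items at this level.
    return (len(items) - 1
            + quicksort_comparisons(smaller, rng)
            + quicksort_comparisons(larger, rng))

def expected_comparisons(n):
    """Pair sum from the text: items d positions apart are compared with probability 2 / (d + 1)."""
    return sum(2.0 * (n - d) / (d + 1) for d in range(1, n))

n, trials, rng = 200, 300, random.Random(1)
empirical = sum(quicksort_comparisons(random.sample(range(10 * n), n), rng)
                for _ in range(trials)) / trials
closed_form = 2 * (n + 1) * harmonic(n) - 4 * n  # equivalent closed form of the pair sum
print(round(empirical, 1), round(expected_comparisons(n), 1), round(closed_form, 1))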
Mathematics
Sequences and series
null
142523
https://en.wikipedia.org/wiki/Halothane
Halothane
Halothane, sold under the brand name Fluothane among others, is a general anaesthetic. It can be used to induce or maintain anaesthesia. One of its benefits is that it does not increase the production of saliva, which can be particularly useful in those who are difficult to intubate. It is given by inhalation. Side effects include an irregular heartbeat, respiratory depression, and hepatotoxicity. Like all volatile anesthetics, it should not be used in people with a personal or family history of malignant hyperthermia. It appears to be safe in porphyria. It is unclear whether its usage during pregnancy is harmful to the fetus, and its use during a C-section is generally discouraged. Halothane is a chiral molecule that is used as a racemic mixture. Halothane was discovered in 1951. It was approved for medical use in the United States in 1958. It is on the World Health Organization's List of Essential Medicines. Its use in developed countries has been mostly replaced by newer anesthetic agents such as sevoflurane. It is no longer commercially available in the United States. Halothane also contributes to ozone depletion. Medical uses It is a potent anesthetic with a minimum alveolar concentration (MAC) of 0.74%. Its blood/gas partition coefficient of 2.4 makes it an agent with moderate induction and recovery time. It is not a good analgesic and its muscle relaxation effect is moderate. Halothane is colour-coded red on anaesthetic vaporisers. Side effects Side effects include irregular heartbeat, respiratory depression, and hepatotoxicity. It appears to be safe in porphyria. It is unclear whether use during pregnancy is harmful to the baby, and it is not generally recommended for use during a C-section. In rare cases, repeated exposure to halothane in adults was noted to result in severe liver injury. This occurred in about one in 10,000 exposures. The resulting syndrome was referred to as halothane hepatitis, immunoallergic in origin, and is thought to result from the metabolism of halothane to trifluoroacetic acid via oxidative reactions in the liver. About 20% of inhaled halothane is metabolized by the liver and these products are excreted in the urine. The hepatitis syndrome had a mortality rate of 30% to 70%. Concern for hepatitis resulted in a dramatic reduction in the use of halothane for adults and it was replaced in the 1980s by enflurane and isoflurane. By 2005, the most common volatile anesthetics used were isoflurane, sevoflurane, and desflurane. Since the risk of halothane hepatitis in children was substantially lower than in adults, halothane continued to be used in pediatrics in the 1990s as it was especially useful for inhalation induction of anesthesia. However, by 2000, sevoflurane, excellent for inhalation induction, had largely replaced the use of halothane in children. Halothane sensitises the heart to catecholamines, so it is liable to cause cardiac arrhythmia, occasionally fatal, particularly if hypercapnia has been allowed to develop. This seems to be especially problematic in dental anesthesia. Like all the potent inhalational anaesthetic agents, it is a potent trigger for malignant hyperthermia. Similarly, in common with the other potent inhalational agents, it relaxes uterine smooth muscle and this may increase blood loss during delivery or termination of pregnancy. Occupational safety People can be exposed to halothane in the workplace by breathing it in as waste anaesthetic gas, skin contact, eye contact, or swallowing it. 
The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 2 ppm (16.2 mg/m3) over 60 minutes. Pharmacology The exact mechanism of the action of general anaesthetics has not been delineated. Halothane activates GABAA and glycine receptors. It also acts as an NMDA receptor antagonist, inhibits nACh and voltage-gated sodium channels, and activates 5-HT3 and twin-pore K+ channels. It does not affect the AMPA or kainate receptors. Chemical and physical properties Halothane (2-bromo-2-chloro-1,1,1-trifluoroethane) is a dense, highly volatile, clear, colourless, nonflammable liquid with a chloroform-like sweet odour. It is very slightly soluble in water and miscible with various organic solvents. Halothane can decompose to hydrogen fluoride, hydrogen chloride and hydrogen bromide in the presence of light and heat. Chemically, halothane is an alkyl halide (not an ether like many other anesthetics). The structure has one stereocenter, so (R)- and (S)-optical isomers occur. Synthesis The commercial synthesis of halothane starts from trichloroethylene, which is reacted with hydrogen fluoride in the presence of antimony trichloride at 130 °C to form 2-chloro-1,1,1-trifluoroethane. This is then reacted with bromine at 450 °C to produce halothane. Related substances Attempts to find anesthetics with less metabolism led to halogenated ethers such as enflurane and isoflurane. The incidence of hepatic reactions with these agents is lower. The exact degree of hepatotoxic potential of enflurane is debated, although it is minimally metabolized. Isoflurane is essentially not metabolized and reports of associated liver injury are quite rare. Small amounts of trifluoroacetic acid can be formed from both halothane and isoflurane metabolism and possibly accounts for cross sensitization of patients between these agents. The main advantage of the more modern agents is lower blood solubility, resulting in faster induction of and recovery from anaesthesia. History Halothane was first synthesized by C. W. Suckling of Imperial Chemical Industries in 1951 at the ICI Widnes Laboratory and was first used clinically by M. Johnstone in Manchester in 1956. Initially, many pharmacologists and anaesthesiologists had doubts about the safety and efficacy of the new drug. But halothane, which required specialist knowledge and technologies for safe administration, also afforded British anaesthesiologists the opportunity to remake their speciality as a profession during a period, when the newly established National Health Service needed more specialist consultants. In this context, halothane eventually became popular as a nonflammable general anesthetic replacing other volatile anesthetics such as trichloroethylene, diethyl ether and cyclopropane. In many parts of the world it has been largely replaced by newer agents since the 1980s but is still widely used in developing countries because of its lower cost. Halothane was given to many millions of people worldwide from its introduction in 1956 through the 1980s. Its properties include cardiac depression at high levels, cardiac sensitization to catecholamines such as norepinephrine, and potent bronchial relaxation. Its lack of airway irritation made it a common inhalation induction agent in pediatric anesthesia. Its use in developed countries has been mostly replaced by newer anesthetic agents such as sevoflurane. It is not commercially available in the United States. 
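As an aside on the exposure limit quoted above, the ppm and mg/m3 figures can be reconciled with the standard conversion mg/m3 = ppm × molecular weight / 24.45 at 25 °C and 1 atm. The short Python sketch below is illustrative, not sourced from the article; the rounded atomic masses explain the small difference from the published 16.2 mg/m3.
# Reconciling the NIOSH REL figures: mg/m3 is roughly ppm * molecular weight / 24.45,
# where 24.45 L/mol is the molar volume of an ideal gas at 25 degrees C and 1 atm.
atomic_mass = {"C": 12.011, "H": 1.008, "Br": 79.904, "Cl": 35.453, "F": 18.998}
# Halothane, 2-bromo-2-chloro-1,1,1-trifluoroethane: C2HBrClF3
mw = (2 * atomic_mass["C"] + atomic_mass["H"] + atomic_mass["Br"]
      + atomic_mass["Cl"] + 3 * atomic_mass["F"])
ppm = 2.0
print(round(mw, 2), round(ppm * mw / 24.45, 1))  # about 197.38 g/mol and about 16.1 mg/m3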
Society and culture Availability It is on the World Health Organization's List of Essential Medicines. It is available as a volatile liquid, at 30, 50, 200, and 250 ml per container but in many developed nations is not available having been displaced by newer agents. It is the only inhalational anesthetic containing bromine, which makes it radiopaque. It is colorless and pleasant-smelling, but unstable in light. It is packaged in dark-colored bottles and contains 0.01% thymol as a stabilizing agent. Greenhouse gas Owing to the presence of covalently bonded fluorine, halothane absorbs in the atmospheric window and is therefore a greenhouse gas. However, it is much less potent than most other chlorofluorocarbons and bromofluorocarbons due to its short atmospheric lifetime, estimated at only one year vis-à-vis over 100 years for many perfluorocarbons. Despite its short lifespan, halothane still has a global warming potential 47 times that of carbon dioxide, although this is over 100 times smaller than the most abundant fluorinated gases, and about 800 times smaller than the GWP of sulfur hexafluoride over 500 years. Halothane is believed to make a negligible contribution to global warming. Ozone depletion Halothane is an ozone depleting substance with an ODP of 1.56 and it is calculated to be responsible for 1% of total stratospheric ozone layer depletion.
Biology and health sciences
Anesthetics
Health
142534
https://en.wikipedia.org/wiki/Electron%20hole
Electron hole
In physics, chemistry, and electronic engineering, an electron hole (often simply called a hole) is a quasiparticle denoting the lack of an electron at a position where one could exist in an atom or atomic lattice. Since in a normal atom or crystal lattice the negative charge of the electrons is balanced by the positive charge of the atomic nuclei, the absence of an electron leaves a net positive charge at the hole's location. Holes in a metal or semiconductor crystal lattice can move through the lattice as electrons can, and act similarly to positively charged particles. They play an important role in the operation of semiconductor devices such as transistors, diodes (including light-emitting diodes) and integrated circuits. If an electron is excited into a higher state, it leaves a hole in its old state. This meaning is used in Auger electron spectroscopy (and other x-ray techniques), in computational chemistry, and to explain the low electron-electron scattering rate in crystals (metals and semiconductors). Although they act like elementary particles, holes are in fact quasiparticles; in particular, they are distinct from the positron, which is the antiparticle of the electron.
Physical sciences
States of matter
Physics
142586
https://en.wikipedia.org/wiki/Domestication
Domestication
Domestication is a multi-generational mutualistic relationship in which an animal species, such as humans or leafcutter ants, takes over control and care of another species, such as sheep or fungi, to obtain from them a steady supply of resources, such as meat, milk, or labor. The process is gradual and geographically diffuse, based on trial and error. Domestication affected genes for behavior in animals, making them less aggressive. In plants, domestication affected genes for morphology, such as increasing seed size and stopping the shattering of cereal seedheads. Such changes both make domesticated organisms easier to handle and reduce their ability to survive in the wild. The first animal to be domesticated by humans was the dog, as a commensal, at least 15,000 years ago. Other animals, including goats, sheep, and cows, were domesticated around 11,000 years ago. Among birds, the chicken was first domesticated in East Asia, seemingly for cockfighting, some 7,000 years ago. The horse came under domestication around 5,500 years ago in central Asia as a working animal. Among invertebrates, the silkworm and the western honey bee were domesticated over 5,000 years ago for silk and honey, respectively. The domestication of plants began around 13,000–11,000 years ago with cereals such as wheat and barley in the Middle East, alongside crops such as lentil, pea, chickpea, and flax. Beginning around 10,000 years ago, Indigenous peoples in the Americas began to cultivate peanuts, squash, maize, potatoes, cotton, and cassava. Rice was first domesticated in China some 9,000 years ago. In Africa, crops such as sorghum were domesticated. Agriculture developed in some 13 centres around the world, domesticating different crops and animals. Three groups of insects, namely ambrosia beetles, leafcutter ants, and fungus-growing termites have independently domesticated species of fungi, on which they feed. In the case of the termites, the relationship is a fully obligate symbiosis on both sides. Definitions Domestication (not to be confused with the taming of an individual animal), is from the Latin , 'belonging to the house'. The term remained loosely defined until the 21st century, when the American archaeologist Melinda A. Zeder defined it as a long-term relationship in which humans take over control and care of another organism to gain a predictable supply of a resource, resulting in mutual benefits. She noted further that it is not synonymous with agriculture since agriculture depends on domesticated organisms but does not automatically result from domestication. Michael D. Purugganan notes that domestication has been hard to define, despite the "instinctual consensus" that it means "the plants and animals found under the care of humans that provide us with benefits and which have evolved under our control." He comments that insects such as termites, ambrosia beetles, and leafcutter ants have domesticated some species of fungi, and notes further that other groups such as weeds and commensals have wrongly been called domesticated. Starting from Zeder's definition, Purugganan proposes a "broad" definition: "a coevolutionary process that arises from a mutualism, in which one species (the domesticator) constructs an environment where it actively manages both the survival and reproduction of another species (the domesticate) in order to provide the former with resources and/or services." He comments that this adds niche construction to the activities of the domesticator. 
Domestication syndrome is the suite of phenotypic traits that arose during the initial domestication process and which distinguish crops from their wild ancestors. It can also mean a set of differences now observed in domesticated mammals, not necessarily reflecting the initial domestication process. The changes include increased docility and tameness, coat coloration, reductions in tooth size, craniofacial morphology, ear and tail form (e.g., floppy ears), estrus cycles, levels of adrenocorticotropic hormone and neurotransmitters, prolongations in juvenile behavior, and reductions in brain size and of particular brain regions. Cause and timing The domestication of animals and plants was triggered by the climatic and environmental changes that occurred after the peak of the Last Glacial Maximum and which continue to this present day. These changes made obtaining food by hunting and gathering difficult. The first animal to be domesticated was the dog at least 15,000 years ago. The Younger Dryas 12,900 years ago was a period of intense cold and aridity that put pressure on humans to intensify their foraging strategies but did not favour agriculture. By the beginning of the Holocene 11,700 years ago, a warmer climate and increasing human populations led to small-scale animal and plant domestication and an increased supply of food. The appearance of the domestic dog in the archaeological record, at least 15,000 years ago, was followed by domestication of livestock and of crops such as wheat and barley, the invention of agriculture, and the transition of humans from foraging to farming in different places and times across the planet. For instance, small-scale trial cultivation of cereals began some 28,000 years ago at the Ohalo II site in Israel. In the Fertile Crescent 11,000–10,000 years ago, zooarchaeology indicates that goats, pigs, sheep, and taurine cattle were the first livestock to be domesticated. Two thousand years later, humped zebu cattle were domesticated in what is today Baluchistan in Pakistan. In East Asia 8,000 years ago, pigs were domesticated from wild boar genetically different from those found in the Fertile Crescent. The cat was domesticated in the Fertile Crescent, perhaps 10,000 years ago, from European wildcats, possibly to control rodents that were damaging stored food. Animals Desirable traits The domestication of vertebrate animals is the relationship between non-human vertebrates and humans who have an influence on their care and reproduction. In his 1868 book The Variation of Animals and Plants Under Domestication, Charles Darwin recognized the small number of traits that made domestic species different from their wild ancestors. He was also the first to recognize the difference between conscious selective breeding in which humans directly select for desirable traits and unconscious selection, in which traits evolve as a by-product of natural selection or from selection on other traits. There is a difference between domestic and wild populations; some of these differences constitute the domestication syndrome, traits presumed essential in the early stages of domestication, while others represent later improvement traits. Domesticated mammals in particular tend to be smaller and less aggressive than their wild counterparts; other common traits are floppy ears, a smaller brain, and a shorter muzzle. 
Domestication traits are generally fixed within all domesticates, and were selected during the initial episode of domestication of that animal or plant, whereas improvement traits are present only in a proportion of domesticates, though they may be fixed in individual breeds or regional populations. Certain animal species, and certain individuals within those species, make better candidates for domestication because of their behavioral characteristics: The size and organization of their social structure The availability and the degree of selectivity in their choice of mates The ease and speed with which the parents bond with their young, and the maturity and mobility of the young at birth The degree of flexibility in diet and habitat tolerance Responses to humans and new environments, including reduced flight response and reactivity to external stimuli. Mammals The beginnings of mammal domestication involved a protracted coevolutionary process with multiple stages along different pathways. There are three proposed major pathways that most mammal domesticates followed into domestication: commensals, adapted to a human niche (e.g., dogs, cats, possibly pigs) prey animals sought for food (e.g., sheep, goats, cattle, water buffalo, yak, pig, reindeer, llama and alpaca) animals targeted for draft and riding (e.g., horse, donkey, camel). Humans did not intend to domesticate mammals from either the commensal or prey pathways, or at least they did not envision a domesticated animal would result from it. In both of those cases, humans became entangled with these species as the relationship between them intensified, and humans' role in their survival and reproduction gradually led to formalized animal husbandry. Although the directed pathway for draft and riding animals proceeded from capture to taming, the other two pathways are not as goal-oriented, and archaeological records suggest that they took place over much longer time frames. Unlike other domestic species selected primarily for production-related traits, dogs were initially selected for their behaviors. The dog was domesticated long before other animals, becoming established across Eurasia before the end of the Late Pleistocene era, well before agriculture. The archaeological and genetic data suggest that long-term bidirectional gene flow between wild and domestic stocks – such as in donkeys, horses, New and Old World camelids, goats, sheep, and pigs – was common. Human selection for domestic traits likely counteracted the homogenizing effect of gene flow from wild boars into pigs, and created domestication islands in the genome. The same process may apply to other domesticated animals. The 2023 parasite-mediated domestication hypothesis suggests that endoparasites such as helminths and protozoa could have mediated the domestication of mammals. Domestication involves taming, which has an endocrine component; and parasites can modify endocrine activity and microRNAs. Genes for resistance to parasites might be linked to those for the domestication syndrome; it is predicted that domestic animals are less resistant to parasites than their wild relatives. Birds Domesticated birds principally mean poultry, raised for meat and eggs: some Galliformes (chicken, turkey, guineafowl) and Anseriformes (waterfowl: ducks, geese, and swans). Also widely domesticated are cagebirds such as songbirds and parrots; these are kept both for pleasure and for use in research. 
The domestic pigeon has been used both for food and as a means of communication between far-flung places through the exploitation of the pigeon's homing instinct; research suggests it was domesticated as early as 10,000 years ago. Chicken fossils in China have been dated to 7,400 years ago. The chicken's wild ancestor is Gallus gallus, the red junglefowl of Southeast Asia. The species appears to have been kept initially for cockfighting rather than for food. Invertebrates Two insects, the silkworm and the western honey bee, have been domesticated for over 5,000 years, often for commercial use. The silkworm is raised for the silk threads wound around its pupal cocoon; the western honey bee, for honey, and, from the 20th century, for pollination of crops. Several other invertebrates have been domesticated, both terrestrial and aquatic, including some such as Drosophila melanogaster fruit flies and the freshwater cnidarian Hydra for research into genetics and physiology. Few have a long history of domestication. Most are used for food or other products such as shellac and cochineal. The phyla involved are Cnidaria, Platyhelminthes (for biological pest control), Annelida, Mollusca, Arthropoda (marine crustaceans as well as insects and spiders), and Echinodermata. While many marine mollusks are used for food, only a few have been domesticated, including squid, cuttlefish and octopus, all used in research on behaviour and neurology. Terrestrial snails in the genera Helix are raised for food. Several parasitic or parasitoidal insects, including the fly Eucelatoria, the beetle Chrysolina, and the wasp Aphytis are raised for biological control. Conscious or unconscious artificial selection has many effects on species under domestication; variability can readily be lost by inbreeding, selection against undesired traits, or genetic drift, while in Drosophila, variability in eclosion time (when adults emerge) has increased. Plants Humans foraged for wild cereals, seeds, and nuts thousands of years before they were domesticated; wild wheat and barley, for example, were gathered in the Levant at least 23,000 years ago. Neolithic societies in West Asia first began to cultivate and then domesticate some of these plants around 13,000 to 11,000 years ago. The founder crops of the West Asian Neolithic included cereals (emmer, einkorn wheat, barley), pulses (lentil, pea, chickpea, bitter vetch), and flax. Other plants were independently domesticated in 13 centers of origin (subdivided into 24 areas) of the Americas, Africa, and Asia (the Middle East, South Asia, the Far East, and New Guinea and Wallacea); in some thirteen of these regions people began to cultivate grasses and grains. Rice was first cultivated in East Asia. Sorghum was widely cultivated in sub-Saharan Africa, while peanuts, squash, cotton, maize, potatoes, and cassava were domesticated in the Americas. Continued domestication was gradual and geographically diffuse – happening in many small steps and spread over a wide area – on the evidence of both archaeology and genetics. It was a process of intermittent trial and error and often resulted in diverging traits and characteristics. Whereas domestication of animals impacted most on the genes that controlled behavior, that of plants impacted most on the genes that controlled morphology (seed size, plant architecture, dispersal mechanisms) and physiology (timing of germination or ripening), as in the domestication of wheat. 
Wild wheat shatters and falls to the ground to reseed itself when ripe, but domesticated wheat stays on the stem for easier harvesting. This change was possible because of a random mutation in the wild populations at the beginning of wheat's cultivation. Wheat with this mutation was harvested more frequently and became the seed for the next crop. Therefore, without realizing it, early farmers selected for this mutation. The result is domesticated wheat, which relies on farmers for its reproduction and dissemination. Differences from wild plants Domesticated plants differ from their wild relatives in many ways, including lack of shattering such as of cereal ears (ripe heads), loss of fruit abscission less efficient breeding system (e.g. without normal pollinating organs, making human intervention a requirement), larger seeds with lower success in the wild, or even sterility (e.g. seedless fruits) and therefore only vegetative reproduction better palatability (e.g. higher sugar content, reduced bitterness), better smell, and lower toxicity edible part larger, e.g. cereal grains or fruits edible part more easily separated from non-edible part increased number of fruits or grains altered color, taste, and texture daylength independence determinate growth reduced or no vernalization less seed dormancy. Plant defenses against herbivory, such as thorns, spines, and prickles, poison, protective coverings, and sturdiness may have been reduced in domesticated plants. This would make them more likely to be eaten by herbivores unless protected by humans, but there is only weak support for most of this. Farmers did select for reduced bitterness and lower toxicity and for food quality, which likely increased crop palatability to herbivores as to humans. However, a survey of 29 plant domestications found that crops were as well-defended against two major insect pests (beet armyworm and green peach aphid) both chemically (e.g. with bitter substances) and morphologically (e.g. with toughness) as their wild ancestors. Changes to plant genome During domestication, crop species undergo intense artificial selection that alters their genomes, establishing core traits that define them as domesticated, such as increased grain size. Comparison of the coding DNA of chromosome 8 in rice between fragrant and non-fragrant varieties showed that aromatic and fragrant rice, including basmati and jasmine, is derived from an ancestral rice domesticate that suffered a deletion in exon 7 which altered the coding for betaine aldehyde dehydrogenase (BADH2). Comparison of the potato genome with that of other plants located genes for resistance to potato blight caused by Phytophthora infestans. In coconut, genomic analysis of 10 microsatellite loci (of noncoding DNA) found two episodes of domestication based on differences between individuals in the Indian Ocean and those in the Pacific Ocean. The coconut experienced a founder effect, where a small number of individuals with low diversity founded the modern population, permanently losing much of the genetic variation of the wild population. Population bottlenecks which reduced variation throughout the genome at some later date after domestication are evident in crops such as pearl millet, cotton, common bean and lima bean. In wheat, domestication involved repeated hybridization and polyploidy. These steps are large and essentially instantaneous changes to the genome and the epigenome, enabling a rapid evolutionary response to artificial selection. 
Polyploidy increases the number of chromosomes, bringing new combinations of genes and alleles, which in turn enable further changes such as by chromosomal crossover. Impact on plant microbiome The microbiome, the collection of microorganisms inhabiting the surface and internal tissue of plants, is affected by domestication. This includes changes in microbial species composition and diversity. Plant lineage, including speciation, domestication, and breeding, have shaped plant endophytes (phylosymbiosis) in similar patterns as plant genes. Fungi Several species of fungi have been domesticated for use directly as food, or in fermentation to produce foods and drugs. The cultivated mushroom Agaricus bisporus is widely grown for food. The yeast Saccharomyces cerevisiae have been used for thousands of years to ferment beer and wine, and to leaven bread. Mould fungi including Penicillium are used to mature cheeses and other dairy products, as well as to make drugs such as antibiotics. Effects On domestic animals Selection of animals for visible traits may have undesired consequences for the genetics of domestic animals. A side effect of domestication has been zoonotic diseases. For example, cattle have given humanity various viral poxes, measles, and tuberculosis; pigs and ducks have contributed influenza; and horses have brought the rhinoviruses. Many parasites, too, have their origins in domestic animals. Alongside these, the advent of domestication resulted in denser human populations, which provided ripe conditions for pathogens to reproduce, mutate, spread, and eventually find a new host in humans. On society Scholars have expressed widely differing viewpoints on domestication's effects on society. Anarcho-primitivism critiques domestication as destroying the supposed primitive state of harmony with nature in hunter-gatherer societies, and replacing it, possibly violently or by enslavement, with a social hierarchy as property and power emerged. The dialectal naturalist Murray Bookchin has argued that domestication of animals, in turn, meant the domestication of humanity, both parties being unavoidably altered by their relationship with each other. The sociologist David Nibert asserts that the domestication of animals involved violence against animals and damage to the environment. This, in turn, he argues, corrupted human ethics and paved the way for "conquest, extermination, displacement, repression, coerced and enslaved servitude, gender subordination and sexual exploitation, and hunger." On diversity Domesticated ecosystems provide food, reduce predator and natural dangers, and promote commerce, but their creation has resulted in habitat alteration or loss, and multiple extinctions commencing in the Late Pleistocene. Domestication reduces genetic diversity of the domesticated population, especially of alleles of genes targeted by selection. One reason is a population bottleneck created by artificially selecting the most desirable individuals to breed from. Most of the domesticated strain is then born from just a few ancestors, creating a situation similar to the founder effect. Domesticated populations such as of dogs, rice, sunflowers, maize, and horses have an increased mutation load, as expected in a population bottleneck where genetic drift is enhanced by the small population size. Mutations can also be fixed in a population by a selective sweep. 
Mutational load can be increased by reduced selective pressure against moderately harmful traits when reproductive fitness is controlled by human management. However, there is evidence against a bottleneck in crops, such as barley, maize, and sorghum, where genetic diversity slowly declined rather than showing a rapid initial fall at the point of domestication. Further, the genetic diversity of these crops was regularly replenished from the natural population. Similar evidence exists for horses, pigs, cows, and goats. Domestication by insects At least three groups of insects, namely ambrosia beetles, leafcutter ants, and fungus-growing termites, have domesticated species of fungi. Ambrosia beetles Ambrosia beetles in the weevil subfamilies Scolytinae and Platypodinae excavate tunnels in dead or stressed trees into which they introduce fungal gardens, their sole source of nutrition. After landing on a suitable tree, an ambrosia beetle excavates a tunnel in which it releases its fungal symbiont. The fungus penetrates the plant's xylem tissue, extracts nutrients from it, and concentrates the nutrients on and near the surface of the beetle gallery. Ambrosia fungi are typically poor wood degraders and instead utilize less demanding nutrients. Symbiotic fungi produce and detoxify ethanol, which is an attractant for ambrosia beetles and likely prevents the growth of antagonistic pathogens and selects for other beneficial symbionts. Ambrosia beetles mainly colonise wood of recently dead trees. Leafcutter ants The leafcutter ants are any of some 47 species of leaf-chewing ants in the genera Acromyrmex and Atta. The ants carry the discs of leaves that they have cut back to their nest, where they feed the leaf material to the fungi that they tend. Some of these fungi are not fully domesticated: the fungi farmed by Mycocepurus smithii constantly produce spores that are not useful to the ants, which eat fungal hyphae instead. The process of domestication by Atta ants, on the other hand, is complete; it took 30 million years. Fungus-growing termites Some 330 fungus-growing termite species of the subfamily Macrotermitinae cultivate Termitomyces fungi to eat; domestication occurred exactly once, 25–40 mya. The fungi, described by Roger Heim in 1942, grow on 'combs' formed from the termites' excreta, dominated by tough woody fragments. The termites and the fungi are both obligate symbionts in the relationship.
Technology
Food and health
null
142660
https://en.wikipedia.org/wiki/Bridge%20of%20Sighs
Bridge of Sighs
The Bridge of Sighs (Italian: Ponte dei Sospiri, ) is a bridge in Venice, Italy. The enclosed bridge is made of white limestone, has windows with stone bars, passes over the Rio di Palazzo, and connects the New Prison (Prigioni Nuove) to the interrogation rooms in the Doge's Palace. It was designed by Antonio Contin, whose uncle Antonio da Ponte designed the Rialto Bridge. It was built in 1600. Etymology The view from the Bridge of Sighs was the last view of Venice that convicts saw before their imprisonment. The bridge's English name was bestowed by Lord Byron in the 19th century as a translation from the Italian "Ponte dei sospiri", from the suggestion that prisoners would sigh at their final view of beautiful Venice through the window before being taken down to their cells. In culture Numerous other bridges around the world have been nicknamed after the Bridge of Sighs — see Bridge of Sighs (disambiguation). The 1861 opera Le pont des soupirs ("The Bridge of Sighs") by Jacques Offenbach has the name of the bridge as a title. The Bridge of Sighs features heavily in the plot of the 1979 film A Little Romance. One of the characters tells of a tradition that if a couple kiss in a gondola beneath the Bridge of Sighs in Venice at sunset while the church bells toll, they will be in love forever. Bridge of Sighs is the title of the second solo studio album released in April 1974 by English rock guitarist and songwriter, Robin Trower. A Bridge of Sighs is mentioned in the opening line of “Itchycoo Park” by the Small Faces. Marillion, an English progressive rock band, mentions this particular bridge in their song Jigsaw. ('We are renaissance children becalmed beneath the Bridge of Sighs'). Giles Corey, an American slowcore band, likewise mentions this bridge in their song No One Is Ever Going To Want Me. Renowned American architect H. H. Richardson used the bridge as inspiration when designing part of the Allegheny County Jail complex in Pittsburgh. It was completed in 1888 and features a similar enclosed arched walkway that connects the courthouse and jail, therefore bearing the same name. Gallery
Technology
Bridges
null
142759
https://en.wikipedia.org/wiki/Interceptor%20aircraft
Interceptor aircraft
An interceptor aircraft, or simply interceptor, is a type of fighter aircraft designed specifically for the defensive interception role against an attacking enemy aircraft, particularly bombers and reconnaissance aircraft. Aircraft that are capable of being or are employed as both "standard" air superiority fighters and as interceptors are sometimes known as fighter-interceptors. There are two general classes of interceptor: light fighters, designed for high performance over short range; and heavy fighters, which are intended to operate over longer ranges, in contested airspace and adverse meteorological conditions. While the second type was exemplified historically by specialized night fighter and all-weather interceptor designs, the integration of mid-air refueling, satellite navigation, on-board radar, and beyond visual range (BVR) missile systems since the 1960s has allowed most frontline fighter designs to fill the roles once reserved for specialized night/all-weather fighters. For daytime operations, conventional light fighters have normally filled the interceptor role. Day interceptors have been used in a defensive role since World War I, and are perhaps best known from major actions like the Battle of Britain, when the Supermarine Spitfire and Hawker Hurricane were part of a successful defensive strategy. However, dramatic improvements in both ground-based and airborne radar gave greater flexibility to existing fighters and few later designs were conceived as dedicated day interceptors. Exceptions include the Messerschmitt Me 163 Komet, which was the only rocket-powered, crewed military aircraft to see combat. To a lesser degree, the Mikoyan-Gurevich MiG-15, which had heavy armament specifically intended for anti-bomber missions, was also a specialized day interceptor. Night fighters and bomber destroyers are interceptors of the heavy type, although initially they were rarely referred to as such. In the early Cold War era the combination of jet-powered bombers and nuclear weapons created air force demand for highly capable interceptors; it is in regards to this period that the term is perhaps most recognized and used. Cold War-era interceptors became increasingly distinct from their air superiority counterparts, with the former often sacrificing range, endurance, and maneuverability for speed, rate of climb, and armament dedicated to attacking large strategic bombers. Examples of classic interceptors of this era include the Convair F-106 Delta Dart, Sukhoi Su-15, and English Electric Lightning. Through the 1960s and 1970s, the rapid improvements in design led to most air-superiority and multirole fighters, such as the Grumman F-14 Tomcat and McDonnell Douglas F-15 Eagle, having the performance to take on the point defense interception role, and the strategic threat moved from bombers to intercontinental ballistic missiles (ICBMs). Dedicated interceptor designs became increasingly rare, with the only widely used examples designed after the 1960s being the Panavia Tornado ADV, Mikoyan MiG-25, Mikoyan MiG-31, and the Shenyang J-8. History The first interceptor squadrons were formed during World War I to defend London against attacks by Zeppelins and later against fixed-wing long-range bombers. Early units generally used aircraft withdrawn from front-line service, notably the Sopwith Pup. They were told about their target's location before take-off from a command centre in the Horse Guards building. 
The Pup proved to have too low performance to easily intercept Gotha G.IV bombers, and the superior Sopwith Camels supplanted them. The term "interceptor" was in use by 1929. Through the 1930s, bomber aircraft speeds increased so much that conventional interceptor tactics appeared impossible. Visual and acoustic detection from the ground had a range of only a few miles, which meant that an interceptor would have insufficient time to climb to altitude before the bombers reached their targets. Standing combat air patrols were possible but only at great cost. The conclusion at the time was that "the bomber will always get through". The invention of radar made possible early, long-range detection of aircraft on the order of , both day and night and in all weather. A typical bomber might take twenty minutes to cross the detection zone of early radar systems, time enough for interceptor fighters to start up, climb to altitude and engage the bombers. Ground controlled interception required constant contact between the interceptor and the ground until the bombers became visible to the pilots and nationwide networks like the Dowding system were built in the late 1930s to coordinate these efforts. During World War II the effectiveness of interceptor aircraft meant that bombers often needed to be escorted by long range fighter aircraft. Many aircraft were able to be fitted with Aircraft interception radar, further facilitating the interception of enemy aircraft. The introduction of jet power increased flight speeds from around to around in a step and roughly doubled operational altitudes. Although radars also improved in performance, the gap between offense and defense was dramatically reduced. Large attacks could so confuse the defense's ability to communicate with pilots that the classic method of manual ground controlled interception was increasingly seen as inadequate. In the United States, this led to the introduction of the Semi-Automatic Ground Environment to computerize this task, while in the UK it led to enormously powerful radars to improve detection time. The introduction of the first useful surface to air missiles in the 1950s obviated the need for fast reaction time interceptors as the missile could launch almost instantly. Air forces increasingly turned to much larger interceptor designs, with enough fuel for longer endurance, leaving the point-defense role to the missiles. This led to the abandonment of a number of short-range designs like the Avro Arrow and Convair F-102 in favor of much larger and longer-ranged designs like the North American F-108 and MiG-25. In the 1950s and 1960’s during the Cold War, a strong interceptor force was crucial for the opposing superpowers as it was the best means to defend against an unexpected nuclear attack by strategic bombers. Hence, for a brief period of time they fared rapid development in both speed, range, and altitude. At the end of the 1960s, a nuclear attack became unstoppable with the introduction of ballistic missiles capable of approaching from outside the atmosphere at speeds as high as . The doctrine of mutually assured destruction replaced the trend of defense strengthening, making interceptors less strategically logical. The utility of interceptors waned as the role merged with that of the heavy air superiority fighter. The interceptor mission is, by its nature, a difficult one. Consider the desire to protect a single target from attack by long-range bombers. 
The bombers have the advantage of being able to select the parameters of the mission – attack vector, speed and altitude. This results in an enormous area from which the attack can originate. In the time it takes for the bombers to cross the distance from first detection to being on their targets, the interceptor must be able to start, take off, climb to altitude, maneuver for attack and then attack the bomber. A dedicated interceptor aircraft sacrifices the capabilities of the air superiority fighter and multirole fighter (i.e., countering enemy fighter aircraft in air combat manoeuvring) by tuning its performance for either fast climbs or high speeds. The result is that interceptors often look very impressive on paper, typically outrunning, outclimbing and outgunning slower fighter designs. However, pure interceptors fare poorly in fighter-to-fighter combat against the same "less capable" designs due to limited maneuverability, especially at low altitudes and speeds. Point-defense interceptors In the spectrum of various interceptors, one design approach especially shows the sacrifices necessary to achieve a decisive benefit in a chosen aspect of performance. A "point defense interceptor" is a lightweight design, intended to spend most of its time on the ground located at the defended target, and able to launch on demand, climb to altitude, manoeuvre and then attack the bomber in a very short time, before the bomber can deploy its weapons. At the end of the Second World War, the Luftwaffe's most critical requirement was for interceptors, as the Commonwealth and American air forces pounded German targets night and day. As the bombing effort grew, notably in early 1944, the Luftwaffe introduced a rocket-powered design, the Messerschmitt Me 163 Komet, in the very-short-range interceptor role. The engine allowed about 7 minutes of powered flight, but offered such tremendous performance that the Komet could fly right past the defending fighters. The Me 163 required an airbase, however, and these bases soon came under constant attack. Following the Emergency Fighter Program, the Germans developed even odder designs, such as the Bachem Ba 349 Natter, which launched vertically and thus eliminated the need for an airbase. In general, all these initial German designs proved difficult to operate, often becoming death traps for their pilots, and had little effect on the bombing raids. Rocket-boosted variants of both of Germany's jet fighters were also developed: the Me 262 in its "C" subtype series, all nicknamed "home protector" (Heimatschützer, in four differing formats), and the planned He 162E subtype, which used the same BMW 003R turbojet/rocket "mixed-power" engine as the Me 262C-2b Heimatschützer II; neither was produced in quantity. In the initial stage of the Cold War, bombers were expected to attack flying higher and faster, even at transonic speeds. Initial transonic and supersonic fighters had modest internal fuel tanks in their slim fuselages but very high fuel consumption. This led to fighter prototypes emphasizing acceleration and operational ceiling at the expense of loiter time, essentially limiting them to the point defense role. Examples were the mixed jet/rocket power Republic XF-91 and the Saunders Roe SR.53. The Soviet and Western trials with zero-length launch were also related. None of these found practical use. Designs that depended solely on jet engines achieved more success, with the F-104 Starfighter (initial A version) and the English Electric Lightning.
The role of crewed point defense designs was reassigned to uncrewed interceptors—surface-to-air missiles (SAMs)—which first reached an adequate level in 1954–1957. SAM advancements ended the concept of massed high-altitude bomber operations, in favor of penetrators (and later cruise missiles) flying a combination of techniques colloquially known as "flying below the radar". By flying terrain-masking, low-altitude nap-of-the-earth profiles, attackers limited the effective range, and therefore the reaction time, of ground-based radar to at best the radar horizon. In the case of ground radar systems this can be countered by placing radars on mountain tops to extend the radar horizon, or by placing high-performance radars in interceptors or in AWACS aircraft used to direct point defense interceptors. Area defense As capabilities continued to improve – especially through the widespread introduction of the jet engine and the adoption of high-speed, low-level flight profiles – the time available between detection and interception dropped. Even the most advanced point defence interceptors, combined with long-range radars, struggled to keep the reaction time short enough to be effective. Fixed times, like the time needed for the pilot to climb into the cockpit, became an increasing portion of the overall mission time, and there were few ways to reduce them. During the Cold War, in times of heightened tension, quick reaction alert (QRA) aircraft were kept piloted, fully fueled and armed, with the engines running at idle on the runway ready to take off, topped up with fuel via hoses from underground fuel tanks. If a possible intruder was identified, the aircraft would be ready to take off as soon as the external fuel lines were detached. However, keeping QRA aircraft at this state of readiness was physically and mentally draining for the pilots and was expensive in terms of fuel. As an alternative, longer-range designs with extended loiter times were considered. These area defense interceptors, or area defense fighters, were in general larger designs intended to stay on lengthy patrol and protect a much larger area from attack, depending on greater detection capabilities – both in the aircraft themselves and through operation with AWACS – rather than on high speed to reach targets. The exemplar of this concept was the Tupolev Tu-28. The later Panavia Tornado ADV was able to achieve long range in a smaller airframe through the use of more efficient engines. Rather than focusing on acceleration and climb rate, the design emphasis is on range and missile-carrying capacity, which together translate into combat endurance; look-down/shoot-down radars good enough to detect and track fast-moving interdictors against ground clutter; and the capability to provide guidance to air-to-air missiles (AAM) against these targets. High speed and acceleration were put into long-range and medium-range AAMs, and agility into short-range dogfighting AAMs, rather than into the aircraft themselves. These fighters were the first to introduce all-weather avionics, assuring successful operations at night and in rain, snow, or fog. Countries that were strategically dependent on a surface fleet, most notably the US and UK, also maintained fleet defense fighters, such as the F-14 Tomcat. Development Soviet Union and Russia During the Cold War, an entire military service, not just an arm of the pre-existing air force, was designated for deployment of interceptors. 
The aircraft of the Soviet Air Defence Forces (PVO-S) differed from those of the Soviet Air Forces (VVS) in that they were by no means small or crudely simple, but huge and refined with large, sophisticated radars; they could not take off from grass, only concrete runways; they could not be disassembled and shipped back to a maintenance center in a boxcar. Similarly, their pilots were given less training in combat maneuvers, and more in radio-directed pursuit. The Soviets' main interceptor was initially the Su-9, which was followed by the Su-15 and the MiG-25 "Foxbat". The auxiliary Tu-128, an area-defense interceptor, was notably the heaviest fighter aircraft ever to see service in the world. The latest and most advanced interceptor aircraft in the Soviet (now Russian) inventory is the MiG-31 "Foxhound". Improving on some of the flaws of the preceding MiG-25, the MiG-31 has better low-altitude and low-speed performance, in addition to carrying an internal cannon. Russia, despite merging the PVO into the VVS, continues to maintain its dedicated MiG-31 interceptor fleet. United States In 1937, USAAC lieutenants Gordon P. Saville and Benjamin S. Kelsey devised a pair of proposals for interceptor aircraft, the first such designation in the US. One proposal was for a single-engine fighter, the other for a twin-engine design. Both were required to climb to a specified altitude within six minutes as a defense against bomber attack. Kelsey said later that he used the interceptor designation to sidestep a hard USAAC policy restricting the weight of armament a fighter could carry. He wished for considerably heavier armament so that American fighters could dominate their battles against all opponents, fighters included. The two aircraft resulting from these proposals were the single-engine Bell P-39 Airacobra and the twin-engine Lockheed P-38 Lightning. Both aircraft were successful during World War II in standard fighter roles, not specifically assigned to point defense against bombers. From 1946 to 1980 the United States maintained a dedicated Aerospace Defense Command, consisting primarily of dedicated interceptors. Many post-war designs were of limited performance, including the F-86D and F-89 Scorpion. In the late 1940s ADC started a project to build a much more advanced interceptor under the 1954 interceptor effort, which eventually delivered the F-106 Delta Dart after a lengthy development process. Further replacements were studied, notably the NR-349 proposal during the 1960s, but these came to nothing as the USSR strengthened its strategic force with ICBMs. Hence, the F-106 ended up serving as the primary USAF interceptor into the 1980s. As the F-106 was retired, intercept missions were assigned to the contemporary F-15 and F-16 fighters, among their other roles. The F-16, however, was originally designed for air superiority and evolved into a versatile multirole fighter. The F-15, with its Mach 2.5 maximum speed enabling it to intercept the fastest enemy aircraft (namely the MiG-25 Foxbat), is also not a pure interceptor, as it has exceptional agility for dogfighting based upon the lessons learned from Vietnam; the F-15E Strike Eagle variant adds air interdiction while retaining the interception and air-to-air combat capabilities of other F-15s. Presently, the F-22 is the USA's latest combat aircraft that serves in part as an interceptor due to its Mach 2+ speed as well as supercruise capability; however, it was designed primarily as a stealth air superiority fighter. 
In the 1950s, the United States Navy pursued the unsuccessful F6D Missileer project. Later it launched the development of a large F-111B fleet air defense fighter, but this project was cancelled too. Finally, the role was assigned to the F-14 Tomcat, carrying AIM-54 Phoenix missiles. Like the USAF's F-15, the USN's F-14 was also designed primarily as an air superiority fighter (for fighter-to-fighter combat), and F-14s served in the interceptor role until the type received upgrades for ground attack in the 1990s. Both the fighter and the Phoenix missile were retired in 2006. United Kingdom The British Royal Air Force operated a supersonic day fighter, the English Electric Lightning, alongside the Gloster Javelin in the subsonic night/all-weather role. Efforts to replace the Javelin with a supersonic design under Operational Requirement F.155 came to naught. The UK operated its own, highly adapted version of the McDonnell Douglas F-4 Phantom as its primary interceptor from the mid-1970s, with the air defence variant (ADV) of the Panavia Tornado being introduced in the 1980s. The Tornado was eventually replaced with a multirole design, the Eurofighter Typhoon. China The Shenyang J-8 is a high-speed, high-altitude Chinese-built single-seat interceptor. Initially designed in the early 1960s to counter US-built B-58 Hustler bombers, F-105 Thunderchief fighter-bombers and Lockheed U-2 reconnaissance planes, it still retains the ability to 'sprint' at Mach 2+ speeds, and later versions can carry medium-range PL-12/SD-10 MRAAM missiles for interception purposes. The PLAAF/PLANAF still operates approximately 300 J-8s of various configurations. Other countries Several other countries also introduced interceptor designs, although in the 1950s–1960s several planned interceptors never came to fruition, given the expectation that missiles would replace bombers. The Argentine FMA I.Ae. 37 was a prototype jet fighter developed during the 1950s. It never flew and was cancelled in 1960. The Canadian subsonic Avro Canada CF-100 Canuck served in numbers through the 1950s. Its supersonic replacement, the CF-105 Arrow ("Avro Arrow"), was controversially cancelled in 1959. The Swedish Saab 35 Draken was specifically designed for intercepting aircraft passing through Swedish airspace at high altitudes in the event of a war between the Soviet Union and NATO. With the advent of low-flying cruise missiles and high-altitude anti-aircraft missiles, the flight profile was changed, but the final version, the J 35J, returned to the interceptor profile.
Technology
Military aviation
null
142813
https://en.wikipedia.org/wiki/Azalea
Azalea
Azaleas are flowering shrubs in the genus Rhododendron, particularly the former sections Tsutsusi (evergreen) and Pentanthera (deciduous). Azaleas bloom in the spring (April and May in the temperate Northern Hemisphere, and October and November in the Southern Hemisphere), their flowers often lasting several weeks. Shade tolerant, they prefer living near or under trees. They are part of the family Ericaceae. Cultivation Plant enthusiasts have selectively bred azaleas for hundreds of years. This human selection has produced thousands of different cultivars which are propagated by cuttings. Azalea seeds can also be collected and germinated. Azaleas are generally slow-growing and do best in well-drained acidic soil (pH 4.5–6.0). Fertilizer needs are low. Some species need regular pruning. Azaleas are native to several continents including Asia, Europe and North America. They are planted abundantly as ornamentals in the southeastern US, southern Asia, and parts of southwest Europe. According to azalea historian Fred Galle, in the United States, Azalea indica (in this case, the group of plants called Southern indicas) was first introduced to the outdoor landscape in the 1830s at the rice plantation Magnolia-on-the-Ashley in Charleston, South Carolina. From Philadelphia, where they were grown only in greenhouses, John Grimke Drayton (Magnolia's owner) imported the plants for use in his estate garden. With encouragement from Charles Sprague Sargent from Harvard's Arnold Arboretum, Magnolia Gardens was opened to the public in 1871, following the American Civil War. Magnolia is one of the oldest public gardens in America. Since the late 19th century, in late March and early April, thousands visit to see the azaleas bloom in their full glory. Classification Native American azaleas Disease Azalea leafy gall can be particularly destructive to azalea leaves during the early spring. Hand picking infected leaves is the recommended method of control. Azaleas can also be subject to Phytophthora root rot in moist, hot conditions. Azaleas share the economically important disease Phytophthora cinnamomi with more than 3000 other plants. Pests Azaleas share the Azalea lace bug (Stephanitis pyrioides) with many other heath species. Shrewsbury and Raupp (2000) found that azaleas can be protected from this pest by companion planting with an overstory above them. Cultural significance and symbolism In Chinese culture, the azalea is known as "thinking of home bush" (sixiang shu), and is immortalized in the poetry of Du Fu. The azalea is also one of the symbols of the city of São Paulo, Brazil. Azaleas and rhododendrons were once so infamous for their toxicity that to receive a bouquet of their flowers in a black vase was a well-known death threat. Toxicity In addition to being renowned for its beauty, the azalea is also highly toxic: it contains andromedotoxins in both its leaves and nectar, including honey from the nectar. Bees are deliberately fed on azalea/rhododendron nectar in some parts of Turkey, producing a mind-altering, potentially medicinal, and occasionally lethal honey known as "mad honey". Azaleas are dangerous to pets: if consumed, the toxins within the plant tissue can cause central nervous system depression, which in turn can lead to multi-organ failure. Symptoms may include vomiting, diarrhea, seizures, laryngeal edema and heart rhythm disturbances, which can lead to complete cardiac arrest and therefore death. Acute kidney failure may also occur. 
Azalea festivals Japan Motoyama, Kōchi, has a flower festival in which the blooming of Tsutsuji is celebrated. Tatebayashi, Gunma, is famous for its Azalea Hill Park, Tsutsuji-ga-oka. Nezu Shrine in Bunkyo, Tokyo, holds a Tsutsuji Matsuri from early April until early May. Higashi Village has hosted an azalea festival each year since 1976. The village's 50,000 azalea plants draw an estimated 60,000 to 80,000 visitors each year. Korea Sobaeksan, one of the 12 well-known Sobaek Mountains, lying on the border between Chungbuk Province and Gyeongbuk, holds a royal azalea (Rhododendron schlippenbachii) festival in May every year. Sobaeksan has an azalea colony dotted around the Biro mountaintop, Gukmang and Yonwha early in May. When the royal azaleas have turned pink at the end of May, it looks as if Sobaeksan is wearing a pink Jeogori (Korean traditional jacket). Hong Kong The Ma On Shan Azalea Festival is held in Ma On Shan, where six native species (Rhododendron championae, Rhododendron farrerae, Rhododendron hongkongense, Rhododendron moulmainense, Rhododendron simiarum and Rhododendron simsii) are found in the area. The festival has been held since 2004; it includes activities such as exhibitions, photo contests and carnivals. United States Many cities in the United States have festivals in the spring celebrating the blooms of the azalea, including Summerville, South Carolina; Hamilton, New Jersey; Mobile, Alabama; Jasper, Texas; Tyler, Texas; Norfolk, Virginia; Wilmington, North Carolina (North Carolina Azalea Festival); Valdosta, Georgia; Palatka, Florida (Florida Azalea Festival); Pickens, South Carolina; Muskogee, Oklahoma; Brookings, Oregon; and Nixa, Missouri. The Azalea Trail is a designated path, planted with azaleas in private gardens, through Mobile, Alabama. The Azalea Trail Run is an annual road running event held there in late March. Mobile, Alabama is also home to the Azalea Trail Maids, fifty women chosen to serve as ambassadors of the city while wearing antebellum dresses, who originally participated in a three-day festival, but now operate throughout the year. The Azalea Society of America designated Houston, Texas, an "azalea city". The River Oaks Garden Club has conducted the Houston Azalea Trail every spring since 1935. Valdosta, Georgia is called the Azalea City, as the plant grows in profusion there. The city hosts an annual Azalea Festival in March.
Biology and health sciences
Ericales
null
142818
https://en.wikipedia.org/wiki/Naloxone
Naloxone
Naloxone, sold under the brand name Narcan among others, is an opioid antagonist, a medication used to reverse or reduce the effects of opioids. For example, it is used to restore breathing after an opioid overdose. Effects begin within two minutes when given intravenously, five minutes when injected into a muscle, and ten minutes as a nasal spray. Naloxone blocks the effects of opioids for 30 to 90 minutes. Administration to opioid-dependent individuals may cause symptoms of opioid withdrawal, including restlessness, agitation, nausea, vomiting, a fast heart rate, and sweating. To prevent this, small doses every few minutes can be given until the desired effect is reached. In those with previous heart disease or taking medications that negatively affect the heart, further heart problems have occurred. It appears to be safe in pregnancy, after having been given to a limited number of women. Naloxone is a non-selective and competitive opioid receptor antagonist. It reverses the depression of the central nervous system and respiratory system caused by opioids. Naloxone was patented in 1961 and approved for opioid overdose in the United States in 1971. It is on the World Health Organization's List of Essential Medicines. Medical uses Opioid overdose Naloxone is useful in treating both acute opioid overdose and respiratory or mental depression due to opioids. Whether it is useful in those in cardiac arrest due to an opioid overdose is unclear. It is included as a part of emergency overdose response kits distributed to heroin and other opioid drug users, and to emergency responders. This has been shown to reduce rates of deaths due to overdose. A prescription for naloxone is recommended if a person is on a high dose of opioid (>100mg of morphine equivalence/day), is prescribed any dose of opioid accompanied by a benzodiazepine, or is suspected or known to use opioids nonmedically. Prescribing naloxone should be accompanied by standard education that includes preventing, identifying, and responding to an overdose; rescue breathing; and calling emergency services. Distribution of naloxone to individuals likely to encounter people who overdose is one aspect of harm reduction strategies. However, with opioids that have longer half-lives, respiratory depression returns after naloxone has worn off; therefore, adequate dosing and continuous monitoring may be necessary. Clonidine overdose Naloxone can also be used as an antidote in an overdose of clonidine, a medication that lowers blood pressure. Clonidine overdoses are of special relevance for children, in whom even small doses can cause significant harm. However, there is controversy regarding naloxone's efficacy in treating the symptoms of clonidine overdose, namely slow heart rate, low blood pressure, and confusion/somnolence. Case reports that used doses of 0.1mg/kg (maximum of 2mg/dose) repeated every 1–2 minutes (10mg total dose) have shown inconsistent benefit. As the doses used throughout the literature vary, it is difficult to form a conclusion regarding the benefit of naloxone in this setting. The mechanism for naloxone's proposed benefit in clonidine overdose is unclear. Still, it has been suggested that endogenous opioid receptors mediate the sympathetic nervous system in the brain and elsewhere in the body. 
Preventing recreational opioid use Naloxone is poorly absorbed when taken orally or sublingually, so it is combined in several oral or sublingual opioid preparations, including buprenorphine and pentazocine, so that when swallowed or taken sublingually, only the non-naloxone opioid has an effect. However, if the combination is injected (such as by dissolving a pill or sublingual strip in water), the naloxone is believed to block the effect of the other opioid. This combination is used to prevent non-medical use. However, SAMHSA's clinical guidelines state that if the combination of buprenorphine and naloxone is injected by a regular user of buprenorphine or buprenorphine/naloxone, then the buprenorphine would still produce an agonist effect but the naloxone would fail to produce an antagonist effect. This is because the amount of naloxone that would be required to block the buprenorphine after injection is much larger than the amount that is contained in buprenorphine/naloxone (Suboxone) pills and strips. If someone who is not physically dependent on opioids were to inject the buprenorphine/naloxone combination, then the effects of the buprenorphine may at most be slightly lessened, but the individual would still be expected to experience euphoric effects. Other uses A 2003 meta-analysis of existing research showed naloxone to improve blood flow in patients with shock, including septic, cardiogenic, hemorrhagic, or spinal shock, but could not determine if this reduced patient deaths. Special populations Pregnancy and breastfeeding Whether naloxone is excreted in breast milk is unknown; however, it is not orally bioavailable and therefore is unlikely to affect a breastfeeding infant. Children Naloxone can be used on infants who were exposed to intrauterine opiates administered to mothers during delivery. However, there is insufficient evidence for the use of naloxone to lower cardiorespiratory and neurological depression in these infants. Infants exposed to high concentrations of opiates during pregnancy may have CNS damage in the setting of perinatal asphyxia. Naloxone has been studied to improve outcomes in this population; however, the evidence is currently weak. Intravenous, intramuscular, or subcutaneous administration of naloxone can be given to children and neonates to reverse opiate effects. The American Academy of Pediatrics recommends only intravenous administration, as the other two forms can cause unpredictable absorption. After a dose is given, the child should be monitored for at least 24 hours. For children with low blood pressure due to septic shock, naloxone safety and effectiveness are not established. Geriatric use For patients 65 years and older, it is unclear if there is a difference in response. However, older people often have decreased liver and kidney function, which may lead to an increased level of naloxone in their body. Available forms Intravenous In hospital settings, naloxone is injected intravenously, with an onset of 1–2 minutes and a duration of up to 45 minutes. Intramuscular or subcutaneous Naloxone can also be administered via intramuscular or subcutaneous injection. The onset of naloxone provided through this route is 2 to 5 minutes, with a duration of around 30–120 minutes. Naloxone for intramuscular administration is provided in pre-filled syringes, vials, and auto-injectors. A hand-held auto-injector is pocket-sized and can be used in non-medical settings such as in the home. 
It is designed for use by laypersons, including family members and caregivers of opioid users at risk for an opioid emergency, such as an overdose. According to the FDA's National Drug Code Directory, a generic version of the auto-injector began to be marketed at the end of 2019. Intranasal Narcan nasal spray was approved in the US in 2015 and is the first FDA-approved nasal spray for the emergency treatment of a known or suspected opioid overdose. It was developed in a partnership between LightLake Therapeutics and the National Institute on Drug Abuse. The approval process was fast-tracked. A generic version of the nasal spray was approved in the United States in 2019, though it did not come to market until 2021. In 2021, the FDA approved Kloxxado, an 8mg dose of intranasal naloxone developed by Hikma Pharmaceuticals. Because multiple 4mg doses of Narcan are frequently needed to successfully reverse an overdose, packs of Kloxxado Nasal Spray contain two pre-packaged nasal spray devices, each containing 8mg of naloxone. Alternatively, a wedge device (nasal atomizer) can be attached to a syringe to create a mist that delivers the drug to the nasal mucosa. This is useful in facilities that already stock injectable naloxone and where many overdoses occur. Side effects Administration of naloxone to somebody who has used opioids may cause rapid-onset opioid withdrawal. Naloxone has little to no effect if opioids are not present. In people with opioids in their system, it may cause increased sweating, nausea, restlessness, trembling, vomiting, flushing, and headache, and has in rare cases been associated with heart rhythm changes, seizures, and pulmonary edema. Naloxone has been shown to block the action of pain-lowering endorphins the body produces naturally. These endorphins likely operate on the same opioid receptors that naloxone blocks. It is capable of blocking a placebo pain-lowering response if the placebo is administered together with a hidden or blind injection of naloxone. Other studies have found that placebo alone can activate the body's μ-opioid endorphin system, delivering pain relief by the same receptor mechanism as morphine. Naloxone should be used with caution in people with cardiovascular disease, as well as those who are currently taking medications that could have adverse effects on the cardiovascular system, such as causing low blood pressure, fluid accumulation in the lungs (pulmonary edema), and abnormal heart rhythms. There have been reports of abrupt reversals with opioid antagonists leading to pulmonary edema and ventricular fibrillation. Use of naloxone to treat people who have been using opioids recreationally may cause acute opioid withdrawal with distressing physiological symptoms such as shivering, tachycardia, and nausea; these in turn may lead to aggression and reluctance to receive further treatment. Pharmacology Pharmacodynamics Naloxone is a lipophilic compound that acts as a non-selective and competitive opioid receptor antagonist. The pharmacologically active isomer of naloxone is (−)-naloxone. Naloxone's binding affinity is highest for the μ-opioid receptor (MOR), then the δ-opioid receptor (DOR), and lowest for the κ-opioid receptor (KOR); naloxone has negligible affinity for the nociceptin receptor. If naloxone is administered in the absence of concomitant opioid use, no functional pharmacological activity occurs, except the inability of the body to combat pain naturally. 
In contrast to direct opiate agonists, which elicit opiate withdrawal symptoms when discontinued in opiate-tolerant people, no evidence indicates the development of tolerance or dependence on naloxone. The mechanism of action is not completely understood, but studies suggest it produces withdrawal symptoms by competing for opioid receptors within the brain (a competitive antagonist, not a direct agonist), thereby preventing the action of both endogenous and xenobiotic opioids on these receptors without directly producing any effects itself. A single administration of naloxone at a relatively high dose of 2mg by intravenous injection has been found to produce brain MOR blockade of 80% at 5 minutes, 47% at 2 hours, 44% at 4 hours, and 8% at 8 hours. A low dose (2μg/kg) produced brain MOR blockade of 42% at 5 minutes, 36% at 2 hours, 33% at 4 hours, and 10% at 8 hours. Intranasal administration of naloxone via nasal spray has likewise been found to rapidly occupy brain MORs, with peak occupancy occurring at 20 minutes, peak occupancies of 67% at a dose of 2mg and 85% with 4mg, and an estimated half-life of occupancy disappearance of approximately 100 minutes (1.67 hours). Pharmacokinetics When administered parenterally (non-orally or non-rectally, e.g., intravenously or by injection), as is most common, naloxone has a rapid distribution throughout the body. The mean serum half-life has been shown to range from 30 to 81 minutes, shorter than the average half-life of some opiates, necessitating repeat dosing if opioid-receptor blockade must be maintained for an extended period. Naloxone is primarily metabolized by the liver. Its major metabolite is naloxone-3-glucuronide, which is excreted in the urine. For people with liver diseases such as alcoholic liver disease or hepatitis, naloxone usage has not been shown to increase serum liver enzyme levels. Naloxone has low systemic bioavailability when taken by mouth due to hepatic first-pass metabolism, but it does block opioid receptors that are located in the intestine. Chemistry Naloxone, also known as N-allylnoroxymorphone or as 17-allyl-4,5α-epoxy-3,14-dihydroxymorphinan-6-one, is a synthetic morphinan derivative and was derived from oxymorphone (14-hydroxydihydromorphinone), an opioid analgesic. Oxymorphone, in turn, was derived from morphine, an opioid analgesic and naturally occurring constituent of the opium poppy. Naloxone is a racemic mixture of two enantiomers, (–)-naloxone (levonaloxone) and (+)-naloxone (dextronaloxone), only the former of which is active at opioid receptors. The drug is highly lipophilic, allowing it to rapidly penetrate the brain and to achieve a far greater brain-to-serum ratio than that of morphine. Opioid antagonists related to naloxone include cyprodime, nalmefene, nalodeine, naloxol, and naltrexone. History Naloxone was patented in 1961 by Mozes J. Lewenstein, Jack Fishman, and the company Sankyo. It was approved for the treatment of opioid overdose in the United States in 1971. Society and culture Misinformation Naloxone has been subject to much inaccurate media reporting, and many urban legends about it have become prevalent. One such myth is that naloxone makes the recipient violent. Another is that events called "Lazarus parties" have taken place, in which people reportedly took fatal overdoses in anticipation of being treated with naloxone; in reality this was a fiction spread by the police. 
Yet another is the claim that people have indulged in "yo-yoing", whereby they would take naloxone and opioids simultaneously to enjoy an extreme "high" and subsequent revival; the idea is scientifically nonsensical. Names Naloxone is the international nonproprietary name, British Approved Name, Dénomination Commune Française, Denominazione Comune Italiana, and Japanese Accepted Name, while naloxone hydrochloride is the United States Adopted Name and British Approved Name (Modified). The patent has expired, and the drug is available as a generic medication. Several formulations use patented dispensers (spray mechanisms or autoinjectors), and patent disputes over the generic forms of the nasal spray were litigated between 2016 and 2020, when a judge ruled in favor of Teva, the generic manufacturer. Teva announced entry of the first generic nasal spray formulation in December 2021. Brand names of naloxone include Narcan, Kloxxado, Nalone, Evzio, Prenoxad Injection, Narcanti, Narcotan, and Zimhi, among others. Legal status and availability to law enforcement and emergency personnel Naloxone (Nyxoid) was approved for use in the European Union in September 2017. In the United States, some nasal naloxone products are legally available without a prescription. As of 2019, officials in 29 states had issued standing orders to enable licensed pharmacists to provide naloxone to patients without the individual first visiting a prescriber. Prescribers working with harm reduction or low threshold treatment programs have also issued standing orders to enable these organizations to distribute naloxone to their clients. A standing order, also referred to as a "non-patient specific prescription", is written by a physician, nurse or other prescriber to authorize medicine distribution outside the doctor-patient relationship. In the case of naloxone, these orders are meant to facilitate naloxone distribution to people using opioids, and their family members and friends. Over 200 naloxone distribution programs utilize licensed prescribers to distribute the drug through such orders, or through the authority of pharmacists (as with California's legal provision, AB1535). Laws and policies in many US jurisdictions have been changed to allow wider distribution of naloxone. In addition to laws or regulations permitting distribution of medicine to at-risk individuals and families, some 36 states have passed laws that provide naloxone prescribers with immunity against both civil and criminal liabilities. While paramedics in the US have carried naloxone for decades, law enforcement officers in many states throughout the country carry naloxone to reverse the effects of heroin overdoses when they reach the location before paramedics. As of 12 July 2015, law enforcement departments in 28 US states are allowed to or required to carry naloxone to quickly respond to opioid overdoses. Programs training fire personnel in opioid overdose response using naloxone have also shown promise in the US, and efforts to integrate opioid fatality prevention into emergency response have grown due to the US overdose crisis. Following the use of the nasal spray device by police officers on Staten Island in New York, an additional 20,000 police officers will begin carrying naloxone in mid-2014. The state's Office of the Attorney General will provide US$1.2 million to supply nearly 20,000 kits. Police Commissioner William Bratton said: "Naloxone gives individuals a second chance to get help". 
Emergency Medical Service (EMS) providers routinely administer naloxone, except where basic Emergency Medical Technicians are prohibited by policy or by state law. In efforts to encourage citizens to seek help for possible opioid overdoses, many states have adopted Good Samaritan laws that provide immunity against certain criminal liabilities for anybody who, in good faith, seeks emergency medical care for either themselves or someone around them who may be experiencing an opioid overdose. States including Vermont and Virginia have developed programs that mandate the prescription of naloxone when a prescription exceeds a certain level of morphine milligram equivalents per day, as a preventive measure against overdose. Healthcare institution-based naloxone prescription programs have also helped reduce rates of opioid overdose in North Carolina, and have been replicated in the US military. In Canada, naloxone single-use syringe kits are distributed and available at various clinics and emergency rooms. Alberta Health Services is increasing the distribution points for naloxone kits at all emergency rooms, and various pharmacies and clinics province-wide. All Edmonton Police Service and Calgary Police Service patrol cars carry an emergency single-use naloxone syringe kit. Some Royal Canadian Mounted Police patrol vehicles also carry the drug, occasionally in extra quantities to help distribute naloxone among users and concerned family and friends. Nurses, paramedics, medical technicians, and emergency medical responders can also prescribe and distribute the drug. As of February 2016, pharmacies across Alberta and some other Canadian jurisdictions are allowed to distribute single-use take-home naloxone kits or prescribe the drug to people using opioids. Following Alberta Health Services, Health Canada reviewed the prescription-only status of naloxone, resulting in plans to remove it in 2016, making naloxone more accessible. Due to the rising number of drug deaths across the country, Health Canada proposed a change to make naloxone more widely available to Canadians in support of efforts to address the growing number of opioid overdoses. In March 2016, Health Canada did change the prescription status of naloxone, as "pharmacies are now able to proactively give out naloxone to those who might experience or witness an opioid overdose." Community access In a survey of US laypersons in December 2021, most people believed the scientifically supported idea that trained bystanders can reverse overdoses with naloxone. A survey of US naloxone prescription programs in 2010 revealed that 21 out of 48 programs reported challenges in obtaining naloxone in the months leading up to the survey, due mainly to either cost increases that outstripped allocated funding or the suppliers' inability to fill orders. The cost of a 1ml ampoule of naloxone in the US is estimated to be significantly higher than in most other countries. Take-home naloxone programs for people who use opioids are underway in many North American cities. CDC estimates that US programs prescribing take-home doses of naloxone to drug users and their caregivers, and training them in its use, prevented 10,000 opioid overdose deaths by 2014. In Australia, some forms of naloxone are available "over the counter" in pharmacies, free of charge and without a prescription, under the Take Home Naloxone programme. It comes in single-use pre-filled syringe form, similar to law enforcement kits, as well as nasal sprays. 
In 2024, those with a prescription can purchase five doses for around AU$32 or just over AU$6 per dose. In Alberta, in addition to pharmacy distribution, take-home naloxone kits are available and distributed in most drug treatment or rehabilitation centers. In the European Union, take home naloxone pilots were launched in the Channel Islands and in Berlin in the late 1990s. In 2008, the Welsh Assembly government announced its intention to establish demonstration sites for take-home naloxone, and in 2010, Scotland instituted a national naloxone program. Inspired by North American and European efforts, non-governmental organizations running programs to train drug users as overdose responders and supply them with naloxone are now operational in Russia, Ukraine, Georgia, Kazakhstan, Tajikistan, Afghanistan, China, Vietnam, and Thailand. In 2018, a maker of naloxone announced it would provide a free kit including two doses of the nasal spray, as well as educational materials, to each of the 16,568 public libraries and 2,700 YMCAs in the U.S.
Biology and health sciences
Specific drugs
Health
142821
https://en.wikipedia.org/wiki/Placebo
Placebo
A placebo is a substance or treatment which is designed to have no therapeutic value. Common placebos include inert tablets (like sugar pills), inert injections (like saline), sham surgery, and other procedures. Placebos are used in randomized clinical trials to test the efficacy of medical treatments. In a placebo-controlled clinical trial, any change in the control group is known as the placebo response, and the difference between this and the result of no treatment is the placebo effect. Placebos in clinical trials should ideally be indistinguishable from so-called verum treatments under investigation, except for the latter's particular hypothesized medicinal effect. This is to shield test participants (with their consent) from knowing who is getting the placebo and who is getting the treatment under test, as patients' and clinicians' expectations of efficacy can influence results. The idea of a placebo effect was discussed in 18th-century psychology, but became more prominent in the 20th century. Modern studies find that placebos can affect some outcomes such as pain and nausea, but otherwise do not generally have important clinical effects. Improvements that patients experience after being treated with a placebo can also be due to unrelated factors, such as regression to the mean (a statistical effect where an unusually high or low measurement is likely to be followed by a less extreme one). The use of placebos in clinical medicine raises ethical concerns, especially if they are disguised as an active treatment, as this introduces dishonesty into the doctor–patient relationship and bypasses informed consent. Placebos are also popular because they can sometimes produce relief through psychological mechanisms (a phenomenon known as the "placebo effect"). They can affect how patients perceive their condition and encourage the body's chemical processes for relieving pain and a few other symptoms, but have no impact on the disease itself. Etymology The Latin term placebo means "[I] shall be pleasing". It was used as a name for the Vespers in the Office of the Dead, taken from its incipit, a quote from the Vulgate's Psalm 114:9 (Psalm 116:9 in modern Bibles): "[I] shall please the Lord in the land of the living". From that, a singer of placebo became associated with someone who falsely claimed a connection to the deceased to get a share of the funeral meal, and hence with a flatterer and a deceptive act to please. Definitions The definition of placebo has been debated. One definition states that a treatment process is a placebo when none of the characteristic treatment factors are effective (remedial or harmful) in the patient for a given disease. In a clinical trial, a placebo response is the measured response of subjects to a placebo; the placebo effect is the difference between that response and no treatment. The placebo response may include improvements due to natural healing, declines due to natural disease progression, the tendency for people who were temporarily feeling either better or worse than usual to return to their average situations (regression toward the mean), and errors in the clinical trial records, which can make it appear that a change has happened when nothing has changed. It is also part of the recorded response to any active medical intervention. Measurable placebo effects may be either objective (e.g. lowered blood pressure) or subjective (e.g. a lowered perception of pain). Effects Placebos can improve patient-reported outcomes such as pain and nausea. 
A 2001 meta-analysis of the placebo effect looked at trials in 40 different medical conditions, and concluded the only one where it had been shown to have a significant effect was for pain. Another Cochrane review in 2010 suggested that placebo effects are apparent only in subjective, continuous measures, and in the treatment of pain and related conditions. The review found that placebos do not appear to affect the actual diseases, or outcomes that are not dependent on a patient's perception. The authors, Asbjørn Hróbjartsson and Peter C. Gøtzsche, concluded that their study "did not find that placebo interventions have important clinical effects in general". This interpretation has been subject to criticism, as the existence of placebo effects seems undeniable. For example, recent research has linked placebo interventions to improved motor functions in patients with Parkinson's disease. Other objective outcomes affected by placebos include immune and endocrine parameters, end-organ functions regulated by the autonomic nervous system, and sport performance. Placebos are believed to be capable of altering a person's perception of pain. According to the American Cancer Society, "A person might reinterpret a sharp pain as uncomfortable tingling." Measuring the extent of the placebo effect is difficult due to confounding factors. For example, a patient may feel better after taking a placebo due to regression to the mean (i.e. a natural recovery or change in symptoms), but this can be ruled out by comparing the placebo group with a no treatment group (as all the placebo research does). It is harder still to tell the difference between the placebo effect and the effects of response bias, observer bias and other flaws in trial methodology, as a trial comparing placebo treatment and no treatment will not be a blinded experiment. In their 2010 meta-analysis of the placebo effect, Asbjørn Hróbjartsson and Peter C. Gøtzsche argue that "even if there were no true effect of placebo, one would expect to record differences between placebo and no-treatment groups due to bias associated with lack of blinding". One way in which the magnitude of placebo analgesia can be measured is by conducting "open/hidden" studies, in which some patients receive an analgesic and are informed that they will be receiving it (open), while others are administered the same drug without their knowledge (hidden). Such studies have found that analgesics are considerably more effective when the patient knows they are receiving them. Factors influencing the power of the placebo effect A review published in JAMA Psychiatry found that, in trials of antipsychotic medications, the change in response to receiving a placebo had increased significantly between 1960 and 2013. The review's authors identified several factors that could be responsible for this change, including inflation of baseline scores and enrollment of fewer severely ill patients. Another analysis published in Pain in 2015 found that placebo responses had increased considerably in neuropathic pain clinical trials conducted in the United States from 1990 to 2013. The researchers suggested that this may be because such trials have "increased in study size and length" during this time period. Children seem to have a greater response than adults to placebos. The administration of the placebos can determine the placebo effect strength. Studies have found that taking more pills would strengthen the effect. 
Capsules appear to be more influential than pills, and injections are even stronger than capsules. Some studies have investigated the use of placebos where the patient is fully aware that the treatment is inert, known as an open-label placebo. Clinical trials found that open-label placebos may have positive effects in comparison to no treatment, which may open new avenues for treatments, but a review of such trials noted that they were done with a small number of participants and hence should be interpreted with "caution" until further, better-controlled trials are conducted. An updated 2021 systematic review and meta-analysis based on 11 studies also found a significant, albeit slightly smaller, overall effect of open-label placebos, while noting that "research on OLPs is still in its infancy". If the person dispensing the placebo shows care towards the patient, is friendly and sympathetic, or has a high expectation of the treatment's success, then the placebo is more effective. In the 2022 book Epigenetics and Anticipation, published by Springer, Goli integrates many of the specific and non-specific factors influencing the placebo effect into the perceived healing response formula, developed on the basis of the main placebo studies. Depression In 2008, a meta-analysis led by psychologist Irving Kirsch, analyzing data from the Food and Drug Administration (FDA), concluded that 82% of the response to antidepressants was accounted for by placebos. However, other authors expressed serious doubts about the methods used and the interpretation of the results, especially the use of 0.5 as the cut-off point for the effect size. A complete reanalysis and recalculation based on the same FDA data found that the Kirsch study had "important flaws in the calculations". The authors concluded that although a large percentage of the placebo response was due to expectancy, this was not true for the active drug. Besides confirming drug effectiveness, they found that the drug effect was not related to depression severity. Another meta-analysis found that 79% of depressed patients receiving placebo remained well (for 12 weeks after an initial 6–8 weeks of successful therapy) compared to 93% of those receiving antidepressants. In the continuation phase, however, patients on placebo relapsed significantly more often than patients on antidepressants. Negative effects A phenomenon opposite to the placebo effect has also been observed. When an inactive substance or treatment is administered to a recipient who has an expectation of it having a negative impact, this intervention is known as a nocebo (Latin: "I shall harm"). A nocebo effect occurs when the recipient of an inert substance reports a negative effect or a worsening of symptoms, with the outcome resulting not from the substance itself, but from negative expectations about the treatment. Another negative consequence is that placebos can cause side-effects associated with real treatment. Withdrawal symptoms can also occur after placebo treatment. This was found, for example, after the discontinuation of the Women's Health Initiative study of hormone replacement therapy for menopause. Women had been on placebo for an average of 5.7 years. Moderate or severe withdrawal symptoms were reported by 4.8% of those on placebo compared to 21.3% of those on hormone replacement. Ethics In research trials Knowingly giving a person a placebo when there is an effective treatment available is a bioethically complex issue. 
While placebo-controlled trials might provide information about the effectiveness of a treatment, they deny some patients what could be the best available (if unproven) treatment. Informed consent is usually required for a study to be considered ethical, including the disclosure that some test subjects will receive placebo treatments. The ethics of placebo-controlled studies have been debated in the revision process of the Declaration of Helsinki. Of particular concern has been the difference between trials comparing inert placebos with experimental treatments, versus comparing the best available treatment with an experimental treatment; and differences between trials in the sponsor's developed countries versus the trial's targeted developing countries. Some suggest that existing medical treatments should be used instead of placebos, to avoid having some patients not receive medicine during the trial. In medical practice The practice of doctors prescribing placebos that are disguised as real medication is controversial. A chief concern is that it is deceptive and could harm the doctor–patient relationship in the long run. While some say that blanket consent, or the general consent to unspecified treatment given by patients beforehand, is ethical, others argue that patients should always obtain specific information about the name of the drug they are receiving, its side effects, and other treatment options. This view is shared by some on the grounds of patient autonomy. There are also concerns that legitimate doctors and pharmacists could open themselves up to charges of fraud or malpractice by using a placebo. Critics have also argued that using placebos can delay the proper diagnosis and treatment of serious medical conditions. Despite the abovementioned issues, 60% of surveyed physicians and head nurses reported using placebos in an Israeli study, with only 5% of respondents stating that placebo use should be strictly prohibited. A British Medical Journal editorial said, "that a patient gets pain relief from a placebo does not imply that the pain is not real or organic in origin...the use of the placebo for 'diagnosis' of whether or not pain is real is misguided." A survey in the United States of more than 10,000 physicians found that while 24% of physicians would prescribe a treatment that is a placebo simply because the patient wanted treatment, 58% would not, and for the remaining 18%, it would depend on the circumstances. The House of Commons of the United Kingdom Science and Technology Committee has also commented on the practice, referring specifically to homeopathy. In his 2008 book Bad Science, Ben Goldacre argues that instead of deceiving patients with placebos, doctors should use the placebo effect to enhance effective medicines. Edzard Ernst has argued similarly that "As a good doctor you should be able to transmit a placebo effect through the compassion you show your patients." In an opinion piece about homeopathy, Ernst argues that it is wrong to support alternative medicine on the basis that it can make patients feel better through the placebo effect. His concerns are that it is deceitful and that the placebo effect is unreliable. Goldacre also concludes that the placebo effect does not justify alternative medicine, arguing that unscientific medicine could lead to patients not receiving prevention advice. 
Placebo researcher Fabrizio Benedetti also expresses concern over the potential for placebos to be used unethically, warning that there is an increase in "quackery" and that an "alternative industry that preys on the vulnerable" is developing. Mechanisms The mechanism for how placebos could have effects is uncertain. From a sociocognitive perspective, the intentional placebo response is attributed to the "ritual effect" that induces anticipation of a transition to a better state. A placebo presented as a stimulant may trigger an effect on heart rhythm and blood pressure, but when administered as a depressant, the opposite effect. Psychology In psychology, the two main hypotheses of the placebo effect are expectancy theory and classical conditioning. In 1985, Irving Kirsch hypothesized that placebo effects are produced by the self-fulfilling effects of response expectancies, in which the belief that one will feel different leads a person to actually feel different. According to this theory, the belief that one has received an active treatment can produce the subjective changes thought to be produced by the real treatment. Similarly, the appearance of effect can result from classical conditioning, wherein a placebo and an actual stimulus are used simultaneously until the placebo is associated with the effect from the actual stimulus. Both conditioning and expectations play a role in the placebo effect, and make different kinds of contributions. Conditioning has a longer-lasting effect, and can affect earlier stages of information processing. Those who think a treatment will work display a stronger placebo effect than those who do not, as evidenced by a study of acupuncture. Additionally, motivation may contribute to the placebo effect. The active goals of an individual change their somatic experience by altering the detection and interpretation of expectation-congruent symptoms, and by changing the behavioral strategies a person pursues. Motivation may link to the meaning through which people experience illness and treatment. Such meaning is derived from the culture in which they live and which informs them about the nature of illness and how it responds to treatment. Placebo analgesia Functional imaging of placebo analgesia suggests links to the activation of, and increased functional correlation between, the anterior cingulate, prefrontal, orbitofrontal and insular cortices, nucleus accumbens, amygdala, the brainstem's periaqueductal gray matter, and the spinal cord. Since 1978, it has been known that placebo analgesia depends upon the release of endogenous opioids in the brain. Activation by such analgesic placebos changes processing lower down in the brain, enhancing descending inhibition through the periaqueductal gray on spinal nociceptive reflexes, while the expectation of anti-analgesic nocebos acts in the opposite way to block this. Functional imaging of placebo analgesia has been summarized as showing that the placebo response is "mediated by 'top-down' processes dependent on frontal cortical areas that generate and maintain cognitive expectancies. Dopaminergic reward pathways may underlie these expectancies". "Diseases lacking major 'top-down' or cortically based regulation may be less prone to placebo-related improvement". Brain and body In conditioning, a neutral stimulus, saccharin, is paired in a drink with an agent that produces an unconditioned response. For example, that agent might be cyclophosphamide, which causes immunosuppression. 
After learning this pairing, the taste of saccharin by itself is able to cause immunosuppression, as a new conditioned response via neural top-down control. Such conditioning has been found to affect not just basic physiological processes in the immune system but also others such as serum iron levels, oxidative DNA damage levels, and insulin secretion. Recent reviews have argued that the placebo effect is due to top-down control by the brain over immunity and pain. Pacheco-López and colleagues have raised the possibility of a "neocortical-sympathetic-immune axis providing neuroanatomical substrates that might explain the link between placebo/conditioned and placebo/expectation responses". There has also been research aiming to understand underlying neurobiological mechanisms of action in pain relief, immunosuppression, Parkinson's disease and depression. Dopaminergic pathways have been implicated in the placebo response in pain and depression. Confounding factors Placebo-controlled studies, as well as studies of the placebo effect itself, often fail to adequately identify confounding factors. False impressions of placebo effects are caused by many factors, including: regression to the mean (natural recovery or fluctuation of symptoms); additional treatments; response bias from subjects, including scaling bias, answers of politeness, experimental subordination, and conditioned answers; reporting bias from experimenters, including misjudgment and irrelevant response variables; and non-inert ingredients of the placebo medication having an unintended physical effect. History The word placebo was used in a medicinal context in the late 18th century to describe a "commonplace method or medicine", and in 1811 it was defined as "any medicine adapted more to please than to benefit the patient". Although this definition contained a derogatory implication, it did not necessarily imply that the remedy had no effect. It was recognized in the 18th and 19th centuries that drugs or remedies often were perceived to work best while they were still novel. Placebos featured in medical use well into the twentieth century. An influential 1955 study entitled The Powerful Placebo firmly established the idea that placebo effects were clinically important, and were a result of the brain's role in physical health. A 1997 reassessment found no evidence of any placebo effect in the source data, as the study had not accounted for regression to the mean. Placebo-controlled studies The placebo effect makes it more difficult to evaluate new treatments. Clinical trials control for this effect by including a group of subjects that receives a sham treatment. The subjects in such trials are blinded as to whether they receive the treatment or a placebo. If a person is given a placebo under one name, and they respond, they will respond in the same way on a later occasion to that placebo under that name, but not under another name. Clinical trials are often double-blinded so that the researchers also do not know which test subjects are receiving the active or placebo treatment. The placebo effect in such clinical trials is weaker than in normal therapy since the subjects are not sure whether the treatment they are receiving is active.
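To make the regression-to-the-mean confound discussed above concrete, here is a minimal simulation sketch (Python with NumPy; every number is an illustrative assumption, not data from any trial). It enrols simulated patients whose symptom scores happen to be unusually high at a baseline visit and shows that their follow-up scores drift back toward the population mean even with no intervention at all, which is why a no-treatment comparison group is needed to isolate a genuine placebo effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (illustrative only): each "patient" has a stable true
# symptom level, and each measurement adds independent noise.
n_patients = 100_000
true_level = rng.normal(loc=50, scale=5, size=n_patients)         # stable component
baseline   = true_level + rng.normal(scale=10, size=n_patients)   # noisy visit 1
follow_up  = true_level + rng.normal(scale=10, size=n_patients)   # noisy visit 2

# Trials typically enrol people when their symptoms are unusually bad.
enrolled = baseline > 65

print(f"Enrolled baseline mean:  {baseline[enrolled].mean():.1f}")
print(f"Enrolled follow-up mean: {follow_up[enrolled].mean():.1f}")
print(f"Population mean:         {baseline.mean():.1f}")
# With no treatment at all, the enrolled group's follow-up scores fall back
# toward the population mean, which can masquerade as a "placebo effect"
# unless a no-treatment comparison group is included.
```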
Biology and health sciences
Drugs and pharmacology
null
142839
https://en.wikipedia.org/wiki/Stone%20tool
Stone tool
Stone tools have been used throughout human history but are most closely associated with prehistoric cultures and in particular those of the Stone Age. Stone tools may be made of either ground stone or knapped stone, the latter fashioned by a craftsman called a flintknapper. Stone has been used to make a wide variety of tools throughout history, including arrowheads, spearheads, hand axes, and querns. Knapped stone tools are nearly ubiquitous in pre-metal-using societies because they are easily manufactured, the tool stone raw material is usually plentiful, and they are easy to transport and sharpen. The study of stone tools is a cornerstone of prehistoric archaeology because they are essentially indestructible and therefore a ubiquitous component of the archaeological record. Ethnoarchaeology is used to further the understanding and cultural implications of stone tool use and manufacture. Knapped stone tools are made from cryptocrystalline materials such as chert, flint, radiolarite, chalcedony, obsidian, basalt, and quartzite via a splitting process known as lithic reduction. One simple form of reduction is to strike stone flakes from a nucleus (core) of material using a hammerstone or similar hard hammer fabricator. If the goal is to produce flakes, the remnant lithic core may be discarded once too little remains. In some strategies, however, a flintknapper makes a tool from the core by reducing it to a rough unifacial or bifacial preform, which is further reduced by using soft hammer flaking or by pressure flaking the edges. More complex forms of reduction may produce highly standardized blades, which can then be fashioned into a variety of tools such as scrapers, knives, sickles, and microliths. Evolution Archaeologists classify stone tools into industries (also known as complexes or technocomplexes) that share distinctive technological or morphological characteristics. In 1969 in the 2nd edition of World Prehistory, Grahame Clark proposed an evolutionary progression of flint-knapping in which the "dominant lithic technologies" occurred in a fixed sequence from Mode 1 through Mode 5. He assigned to them relative dates: Modes 1 and 2 to the Lower Palaeolithic, 3 to the Middle Palaeolithic, 4 to the Upper Paleolithic, and 5 to the Mesolithic, though there were other lithic technologies outside these Modes. Each region had its own timeline for the succession of the Modes: for example, Mode 1 was in use in Europe long after it had been replaced by Mode 2 in Africa. Clark's scheme was adopted enthusiastically by the archaeological community. One of its advantages was the simplicity of terminology; for example, the Mode 1 / Mode 2 Transition. The transitions are currently of greatest interest. Consequently, in the literature the stone tools used in the period of the Palaeolithic are divided into four "modes", each of which designates a different form of complexity, and which in most cases followed a rough chronological order. Pre-Mode I Kenya Stone tools found from 2011 to 2014 at the Lomekwi archeology site near Lake Turkana in Kenya, are dated to be 3.3 million years old, and predate the genus Homo by about one million years. The oldest known Homo fossil is about 2.4–2.3 million years old compared to the 3.3 million year old stone tools. 
The stone tools may have been made by Australopithecus afarensis, the species whose best fossil example is Lucy, which inhabited East Africa at the same time as the date of the oldest stone tools, a yet unidentified species, or by Kenyanthropus platyops (a 3.2 to 3.5-million-year-old Pliocene hominin fossil discovered in 1999). Dating of the tools was done by dating volcanic ash layers in which the tools were found and dating the magnetic signature (pointing north or south due to reversal of the magnetic poles) of the rock at the site. Ethiopia Grooved, cut and fractured animal bone fossils, made by using stone tools, were found in Dikika, Ethiopia near (200 yards) the remains of Selam, a young Australopithecus afarensis girl who lived about 3.3 million years ago. Mode I: The Oldowan Industry The earliest stone tools in the era of genus Homo are Mode 1 tools, and come from what has been termed the Oldowan Industry, named after the type of site (many sites, actually) found in Olduvai Gorge, Tanzania, where they were discovered in large quantities. Oldowan tools were characterised by their simple construction, predominantly using core forms. These cores were river pebbles, or rocks similar to them, that had been struck by a spherical hammerstone to cause conchoidal fractures removing flakes from one surface, creating an edge and often a sharp tip. The blunt end is the proximal surface; the sharp, the distal. Oldowan is a percussion technology. Grasping the proximal surface, the hominid brought the distal surface down hard on an object he wished to detach or shatter, such as a bone or tuber. Experiments with modern humans found that all four Oldowan knapping techniques can be invented by knapping-naive participants, and that the resulting Oldowan tools were used by the experiment participants to access a money-baited box. The earliest known Oldowan tools yet found date from 2.6 million years ago, during the Lower Palaeolithic period, and have been uncovered at Gona in Ethiopia. After this date, the Oldowan Industry subsequently spread throughout much of Africa, although archaeologists are currently unsure which Hominan species first developed them, with some speculating that it was Australopithecus garhi, and others believing that it was in fact Homo habilis. Homo habilis was the hominin who used the tools for most of the Oldowan in Africa, but at about 1.9-1.8 million years ago Homo erectus inherited them. The Industry flourished in southern and eastern Africa between 2.6 and 1.7 million years ago, but was also spread out of Africa and into Eurasia by travelling bands of H. erectus, who took it as far east as Java by 1.8 million years ago and Northern China by 1.6 million years ago. Mode II: The Acheulean Industry Eventually, more complex Mode 2 tools began to be developed through the Acheulean Industry, named after the site of Saint-Acheul in France. The Acheulean was characterised not by the core, but by the biface, the most notable form of which was the hand axe. The Acheulean first appears in the archaeological record as early as 1.7 million years ago in the West Turkana area of Kenya and contemporaneously in southern Africa. The Leakeys, excavators at Olduvai, defined a "Developed Oldowan" Period in which they believed they saw evidence of an overlap in Oldowan and Acheulean. In their species-specific view of the two industries, Oldowan equated to H. habilis and Acheulean to H. erectus. Developed Oldowan was assigned to habilis and Acheulean to erectus. Subsequent dates on H. 
erectus pushed the fossils back to well before Acheulean tools; that is, H. erectus must have initially used Mode 1. There was no reason to think, therefore, that Developed Oldowan had to be habilis; it could have been erectus. Opponents of the view divide Developed Oldowan between Oldowan and Acheulean. There is no question, however, that habilis and erectus coexisted, as habilis fossils are found as late as 1.4 million years ago. Meanwhile, African H. erectus developed Mode 2. In any case a wave of Mode 2 then spread across Eurasia, resulting in use of both there. H. erectus may not have been the only hominin to leave Africa; European fossils are sometimes associated with Homo ergaster, a contemporary of H. erectus in Africa. In contrast to an Oldowan tool, which is the result of a fortuitous and probably unplanned operation to obtain one sharp edge on a stone, an Acheulean tool is a planned result of a manufacturing process. The manufacturer begins with a blank, either a larger stone or a slab knocked off a larger rock. From this blank he or she removes large flakes, to be used as cores. Standing a core on edge on an anvil stone, he or she hits the exposed edge with centripetal blows of a hard hammer to roughly shape the implement. Then the piece must be worked over again, or retouched, with a soft hammer of wood or bone to produce a tool finely knapped all over consisting of two convex surfaces intersecting in a sharp edge. Such a tool is used for slicing; concussion would destroy the edge and cut the hand. Some Mode 2 tools are disk-shaped, others ovoid, others leaf-shaped and pointed, and others elongated and pointed at the distal end, with a blunt surface at the proximal end, obviously used for drilling. Mode 2 tools are used for butchering; not being composite (having no haft) they are not very effective killing instruments. The killing must have been done some other way. Mode 2 tools are larger than Oldowan. The blank was ported to serve as an ongoing source of flakes until it was finally retouched as a finished tool itself. Edges were often sharpened by further retouching. Mode III: The Mousterian Industry Eventually, the Acheulean in Europe was replaced by a lithic technology known as the Mousterian Industry, which was named after the site of Le Moustier in France, where examples were first uncovered in the 1860s. Evolving from the Acheulean, it adopted the Levallois technique to produce smaller and sharper knife-like tools as well as scrapers. Also known as the "prepared core technique", flakes are struck from worked cores and then subsequently retouched. The Mousterian Industry was developed and used primarily by the Neanderthals, a native European and Middle Eastern hominin species, but a broadly similar industry is contemporaneously widespread in Africa. Mode IV: The Aurignacian Industry The widespread use of long blades (rather than flakes) of the Upper Palaeolithic Mode 4 industries appeared during the Upper Palaeolithic between 50,000 and 10,000 years ago, although blades were produced in small quantities much earlier by Neanderthals. The Aurignacian culture seems to have been the first to rely largely on blades. The use of blades exponentially increases the efficiency of core usage compared to the Levallois flake technique, which had a similar advantage over Acheulean technology which was worked from cores. 
Expansion to the New World
As humans spread to the Americas in the Late Pleistocene, Paleo-Indians brought with them related stone tools, which evolved separately from Old World technologies. The Clovis point is the most widespread example of Late Pleistocene points in the Americas, dating to about 13,000 years ago.

Mode V: The Microlithic Industries
Mode 5 stone tools involve the production of microliths, which were used in composite tools, mainly fastened to a shaft. Examples include the Magdalenian culture. Such a technology makes much more efficient use of available materials like flint, although it required greater skill in manufacturing the small flakes. Mounting sharp flint edges in a wood or bone handle is the key innovation in microliths, essentially because the handle gives the user protection against the flint and also improves leverage of the device.

Neolithic industries
In prehistoric Japan, ground stone tools appear during the Japanese Paleolithic period, which lasted from around 40,000 BC to 14,000 BC. Elsewhere, ground stone tools became important during the Neolithic period beginning about 10,000 BC. These ground or polished implements are manufactured from larger-grained materials such as basalt, jade and jadeite, greenstone and some forms of rhyolite which are not suitable for flaking. The greenstone industry was important in the English Lake District, and is known as the Langdale axe industry. Ground stone implements included adzes, celts, and axes, which were manufactured using a labour-intensive, time-consuming method of repeated grinding against an abrasive stone, often using water as a lubricant. Because of their coarse surfaces, some ground stone tools were used for grinding plant foods and were polished not just by intentional shaping, but also by use. Manos are hand stones used in conjunction with metates for grinding corn or grain. Polishing increased the intrinsic mechanical strength of the axe. Polished stone axes were important for the widespread clearance of woods and forest during the Neolithic period, when crop and livestock farming developed on a large scale. They are distributed very widely and were traded over great distances since the best rock types were often very local. They also became venerated objects, and were frequently buried in long barrows or round barrows with their former owners. During the Neolithic period, large axes were made from flint nodules by knapping a rough shape, a so-called "rough-out". Such products were traded across a wide area. The rough-outs were then polished to give the surface a fine finish to create the axe head. Polishing increased the strength and durability of the product. There were many sources of supply, including Grimes Graves in Norfolk, Cissbury in Sussex and Spiennes near Mons in Belgium, to mention but a few. In Britain, there were also numerous small quarries in downland areas where flint was removed for local use. Many other rocks were used to make axes, including those of the Langdale axe industry as well as numerous other sites such as Penmaenmawr and Tievebulliagh in Co Antrim, Ulster. In Langdale, many outcrops of the greenstone were exploited, and the stone was knapped where it was extracted. The sites exhibit piles of waste flakes, as well as rejected rough-outs. Polishing improved the mechanical strength of the tools, so increasing their life and effectiveness. Many other tools were developed using the same techniques. Such products were traded across the country and abroad.
Aboriginal Australian use
Stone axes from 35,000 years ago represent the earliest known use of stone tools in Australia. Stone tools varied in type and use among Aboriginal Australian peoples, depending on geographical region, and their type and structure varied among the different cultural and linguistic groups. The locations of the various artefacts, as well as whole geologic features, demarcated territorial and cultural boundaries of various linguistic and cultural groups' lands. They developed trade networks, and showed sophistication in working many different types of stone for many different uses, including as tools, food utensils and weapons, and modified their stone tools over the millennia to adapt to changing environments. Oral traditions carried the skills down through the ages. Complex stone tools were used by the Gunditjmara of western Victoria until relatively recently. Many examples are now held in museums. Flaked stone tools were made by extracting a sharp fragment of stone from a larger piece, called a core, by hitting it with a "hammerstone". Both the flakes and the hammerstones could be used as tools. The best types of stone for these tools are hard, brittle stones, rich in silica, such as quartzite, chert, flint, silcrete and quartz (the latter particularly in the Kimberleys of Western Australia). These were quarried from bedrock or collected as pebbles from watercourses and beaches, and often carried for long distances. The flakes could be used immediately for cutting or scraping, but were sometimes modified in a process called reduction to sharpen or resharpen them. Across northern Australia, especially in Arnhem Land, the "Leilira blade", a rectangular stone flake shaped by striking quartzite or silcrete stone, was used as a spear tip and also as a knife, sometimes of considerable length. Tasmania did not have spears or stone axes, but the peoples there used tools which were adapted to the climate and environment, such as the use of spongolite. In north-western Australia, the "Kimberley point", a small triangular stone point, was created using kangaroo bone which had been shaped with stone into an awl, to make small serrations in the blade. Apart from being used as weapons and for cutting, grinding (grindstones), piercing and pounding, some stones, notably ochres, were used as pigment for painting.

Modern uses
Stone tools are still one of the most successful technologies used by humans. The invention of the flintlock gun mechanism in the sixteenth century produced a demand for specially shaped gunflints. The gunflint industry survived until the middle of the twentieth century in some places, including in the English town of Brandon. Threshing boards set with lithic flakes have been used in agriculture since the Neolithic, and are still used today in regions where agriculture has not been mechanized and industrialized. Glassy stones (flint, quartz, jasper, agate) were used with a variety of iron pyrite or marcasite stones as percussion fire starter tools. That was the most common method of producing fire in pre-industrial societies. Stones were later superseded by the use of steel, ferrocerium and matches. For specialist purposes glass knives are still made and used today, particularly for cutting thin sections for electron microscopy in a technique known as microtomy. Freshly cut blades are always used since the sharpness of the edge is very great. These knives are made from high-quality manufactured glass, however, not from natural raw materials such as chert or obsidian.
Surgical knives made from obsidian are still used in some delicate surgeries, as they cause less damage to tissues than steel surgical knives and the resulting wounds heal more quickly. In 1975, American archaeologist Don Crabtree manufactured obsidian scalpels that were used for surgery on his own body.

Tool stone
In archaeology, a tool stone is a type of stone that is used to manufacture stone tools.
Technology
Hand tools
null
142854
https://en.wikipedia.org/wiki/Narcissus%20%28plant%29
Narcissus (plant)
Narcissus is a genus of predominantly spring flowering perennial plants of the amaryllis family, Amaryllidaceae. Various common names including daffodil, narcissus (plural narcissi), and jonquil, are used to describe some or all members of the genus. Narcissus has conspicuous flowers with six petal-like tepals surmounted by a cup- or trumpet-shaped corona. The flowers are generally white and yellow (also orange or pink in garden varieties), with either uniform or contrasting coloured tepals and corona. Narcissi were well known in ancient civilisation, both medicinally and botanically, but were formally described by Linnaeus in his Species Plantarum (1753). The genus is generally considered to have about ten sections with approximately 36 species. The number of species has varied, depending on how they are classified, due to similarity between species and hybridisation. The genus arose some time in the Late Oligocene to Early Miocene epochs, in the Iberian peninsula and adjacent areas of southwest Europe. The exact origin of the name Narcissus is unknown, but it is often linked to a Greek word (ancient Greek ναρκῶ narkō, "to make numb") and the myth of the youth of that name who fell in love with his own reflection. In some versions of the story, Narcissus is turned in to a flower by the Gods after his death. The English word "daffodil" appears to be derived from "asphodel", with which it was commonly compared. The species are native to meadows and woods in southern Europe and North Africa with a centre of diversity in the Western Mediterranean, particularly the Iberian Peninsula. Both wild and cultivated plants have naturalised widely, and were introduced into the Far East prior to the tenth century. Narcissi tend to be long-lived bulbs, which propagate by division, but are also insect-pollinated. Known pests, diseases and disorders include viruses, fungi, the larvae of flies, mites and nematodes. Some Narcissus species have become extinct, while others are threatened by increasing urbanisation and tourism. Historical accounts suggest narcissi have been cultivated from the earliest times, but became increasingly popular in Europe after the 16th century and by the late 19th century were an important commercial crop centred primarily in the Netherlands. Today narcissi are popular as cut flowers and as ornamental plants in private and public gardens. The long history of breeding has resulted in thousands of different cultivars. For horticultural purposes, narcissi are classified into divisions, covering a wide range of shapes and colours. Like other members of their family, narcissi produce a number of different alkaloids, which provide some protection for the plant, but may be poisonous if accidentally ingested. This property has been exploited for medicinal use in traditional healing and has resulted in the production of galantamine for the treatment of Alzheimer's dementia. Long celebrated in art and literature, narcissi are associated with a number of themes in different cultures, ranging from death to good fortune, and as symbols of spring. The daffodil is the national flower of Wales and the symbol of cancer charities in many countries. The appearance of wild flowers in spring is associated with festivals in many places. Description General Narcissus is a genus of perennial herbaceous bulbiferous geophytes, which die back after flowering to an underground storage bulb. 
They regrow in the following year from brown-skinned ovoid bulbs with pronounced necks, and their height varies with the species: dwarf species such as N. asturiensis are among the shortest, while Narcissus tazetta is among the tallest. The plants are scapose, having a single central leafless hollow flower stem (scape). Several green or blue-green, narrow, strap-shaped leaves arise from the bulb. The plant stem usually bears a solitary flower, but occasionally a cluster of flowers (umbel). The flowers, which are usually conspicuous and white or yellow, sometimes both or rarely green, consist of a perianth of three parts. Closest to the stem (proximal) is a floral tube above the ovary, then an outer ring composed of six tepals (undifferentiated sepals and petals), and a central disc- to cone-shaped corona. The flowers may hang down (pendant), or be erect. There are six pollen-bearing stamens surrounding a central style. The ovary is inferior (below the floral parts), consisting of three chambers (trilocular). The fruit consists of a dry capsule that splits (dehisces), releasing numerous black seeds. The bulb lies dormant after the leaves and flower stem die back and has contractile roots that pull it down further into the soil. The flower stem and leaves form in the bulb, to emerge the following season. Most species are dormant from summer to late winter, flowering in the spring, though a few species are autumn flowering.

Specific
Vegetative
Bulbs
The pale brown-skinned ovoid tunicate bulbs have a membranous tunic and a corky stem (base or basal) plate from which arise the adventitious root hairs in a ring around the edge, which grow up to 40 mm in length. Above the stem plate is the storage organ, consisting of bulb scales surrounding the previous flower stalk and the terminal bud. The scales are of two types, true storage organs and the bases of the foliage leaves. These have a thicker tip and a scar from where the leaf lamina became detached. The innermost leaf scale is semicircular, only partly enveloping the flower stalk (semisheathed) (see Hanks, Fig. 1.3). The bulb may contain a number of branched bulb units, each with two to three true scales and two to three leaf bases. Each bulb unit has a life of about four years. Once the leaves die back in summer, the roots also wither. After some years, the roots shorten, pulling the bulbs deeper into the ground (contractile roots). The bulbs develop from the inside, pushing the older layers outwards, which become brown and dry, forming an outer shell, the tunic or skin. Up to 60 layers have been counted in some wild species. While the plant appears dormant above the ground, the flower stalk, which will start to grow in the following spring, develops within the bulb surrounded by two to three deciduous leaves and their sheaths. The flower stem lies in the axil of the second true leaf.

Stems
The single leafless plant stem or scape, appearing from early to late spring depending on the species, bears from 1 to 20 blooms. Stem shape depends on the species: some are highly compressed with a visible seam, while others are rounded. The stems are upright and located at the centre of the leaves. In a few species such as N. hedraeanthus the stem is oblique. The stem is hollow in the upper portion, but towards the bulb it is more solid and filled with a spongy material.
Leaves Narcissus plants have one to several basal leaves which are linear, ligulate or strap-shaped (long and narrow), sometimes channelled adaxially to semiterete, and may (pedicellate) or may not (sessile) have a petiole stalk. The leaves are flat and broad to cylindrical at the base and arise from the bulb. The emerging plant generally has two leaves, but the mature plant usually three, rarely four, and they are covered with a cutin containing cuticle, giving them a waxy appearance. Leaf colour is light green to blue-green. In the mature plant, the leaves extend higher than the flower stem, but in some species, the leaves are low-hanging. The leaf base is encased in a colorless sheath. After flowering, the leaves turn yellow and die back once the seed pod (fruit) is ripe. Jonquils usually have dark green, round, rush-like leaves. Reproductive Inflorescence The inflorescence is scapose, the single stem or scape bearing either a solitary flower or forming an umbel with up to 20 blooms. Species bearing a solitary flower include section Bulbocodium and most of section Pseudonarcissus. Umbellate species have a fleshy racemose inflorescence (unbranched, with short floral stalks) with 2 to 15 or 20 flowers, such as N. papyraceus (see illustration, left) and N. tazetta (see Table I). The flower arrangement on the inflorescence may be either with (pedicellate) or without (sessile) floral stalks. Prior to opening, the flower buds are enveloped and protected in a thin dry papery or membranous (scarious) spathe. The spathe consists of a singular bract that is ribbed, and which remains wrapped around the base of the open flower. As the bud grows, the spathe splits longitudinally. Bracteoles are small or absent. Flowers The flowers of Narcissus are hermaphroditic (bisexual), have three parts (tripartite), and are sometimes fragrant (see Fragrances). The flower symmetry is actinomorphic (radial) to slightly zygomorphic (bilateral) due to declinate-ascending stamens (curving downwards, then bent up at the tip). Narcissus flowers are characterised by their, usually conspicuous, corona (trumpet). The three major floral parts (in all species except N. cavanillesii in which the corona is virtually absent - Table I: Section Tapeinanthus) are; (i) the proximal floral tube (hypanthium), (ii) the surrounding free tepals, and (iii) the more distal corona (paraperigon, paraperigonium). All three parts may be considered to be components of the perianth (perigon, perigonium). The perianth arises above the apex of the inferior ovary, its base forming the hypanthial floral tube. The floral tube is formed by fusion of the basal segments of the tepals (proximally connate). Its shape is from an inverted cone (obconic) to funnel-shaped (funneliform) or cylindrical, and is surmounted by the more distal corona. Floral tubes can range from long and narrow sections Apodanthi and Jonquilla to rudimentary (N. cavanillesii). Surrounding the floral tube and corona and reflexed (bent back) from the rest of the perianth are the six spreading tepals or floral leaves, in two whorls which may be distally ascending, reflexed (folded back), or lanceolate. Like many monocotyledons, the perianth is homochlamydeous, which is undifferentiated into separate calyx (sepals) and corolla (petals), but rather has six tepals. The three outer tepal segments may be considered sepals, and the three inner segments petals. The transition point between the floral tube and the corona is marked by the insertion of the free tepals on the fused perianth. 
The corona, or paracorolla, is variously described as bell-shaped (funneliform, trumpet), bowl-shaped (cupular, crateriform, cup-shaped) or disc-shaped with margins that are often frilled, and is free from the stamens. Rarely is the corona a simple callose (hardened, thickened) ring. The corona is formed during floral development as a tubular outgrowth from stamens which fuse into a tubular structure, the anthers becoming reduced. At its base, the fragrances which attract pollinators are formed. All species produce nectar at the top of the ovary. Coronal morphology varies from the tiny pigmented disk of N. serotinus (see Table I) or the rudimentary structure in N. cavanillesii to the elongated trumpets of section Pseudonarcissus (trumpet daffodils, Table I). While the perianth may point forwards, in some species such as N. cyclamineus it is folded back (reflexed, see illustration, left), while in some other species such as N. bulbocodium (Table I), it is reduced to a few barely visible pointed segments with a prominent corona. The colour of the perianth is white, yellow or bicoloured, with the exception of the night flowering N. viridiflorus which is green. In addition the corona of N. poeticus has a red crenulate margin (see Table I). Flower diameter varies from 12 mm (N. bulbocodium) to over 125 mm (N. nobilis=N. pseudonarcissus subsp. nobilis). Flower orientation varies from pendent or deflexed (hanging down) as in N. triandrus (see illustration, left), through declinate-ascendant as in N. alpestris = N. pseudonarcissus subsp. moschatus, horizontal (patent, spreading) such as N. gaditanus or N. poeticus, erect as in N. cavanillesii, N. serotinus and N. rupicola (Table I), or intermediate between these positions (erecto-patent). The flowers of Narcissus demonstrate exceptional floral diversity and sexual polymorphism, primarily by corona size and floral tube length, associated with pollinator groups (see for instance Figs. 1 and 2 in Graham and Barrett). Barrett and Harder (2005) describe three separate floral patterns; "Daffodil" form "Paperwhite" form "Triandrus" form. The predominant patterns are the 'daffodil' and 'paperwhite' forms, while the "triandrus" form is less common. Each corresponds to a different group of pollinators (See Pollination). The "daffodil" form, which includes sections Pseudonarcissus and Bulbocodium, has a relatively short, broad or highly funnelform tube (funnel-like), which grades into an elongated corona, which is large and funnelform, forming a broad, cylindrical or trumpet-shaped perianth. Section Pseudonarcissus consists of relatively large flowers with a corolla length of around 50 mm, generally solitary but rarely in inflorescences of 2–4 flowers. They have wide greenish floral tubes with funnel-shaped bright yellow coronas. The six tepals sometimes differ in colour from the corona and may be cream coloured to pale yellow. The "paperwhite" form, including sections Jonquilla, Apodanthi and Narcissus, has a relatively long, narrow tube and a short, shallow, flaring corona. The flower is horizontal and fragrant. The "triandrus" form is seen in only two species, N. albimarginatus (a Moroccan endemic) and N. triandrus. It combines features of both the "daffodil" and "paperwhite" forms, with a well-developed, long, narrow tube and an extended bell-shaped corona of almost equal length. The flowers are pendent. 
Androecium There are six stamens in one to two rows (whorls), with the filaments separate from the corona, attached at the throat or base of the tube (epipetalous), often of two separate lengths, straight or declinate-ascending (curving downwards, then upwards). The anthers are basifixed (attached at their base). Gynoecium The ovary is inferior (below the floral parts) and trilocular (three chambered) and there is a pistil with a minutely three lobed stigma and filiform (thread like) style, which is often exserted (extending beyond the tube). Fruit The fruit consists of dehiscent loculicidal capsules (splitting between the locules) that are ellipsoid to subglobose (almost spherical) in shape and are papery to leathery in texture. Seeds The fruit contains numerous subglobose seeds which are round and swollen with a hard coat, sometimes with an attached elaiosome. The testa is black and the pericarp dry. Most species have 12 ovules and 36 seeds, although some species such as N. bulbocodium have more, up to a maximum of 60. Seeds take five to six weeks to mature. The seeds of sections Jonquilla and Bulbocodium are wedge-shaped and matte black, while those of other sections are ovate and glossy black. A gust of wind or contact with a passing animal is sufficient to disperse the mature seeds. Chromosomes Chromosome numbers include 2n=14, 22, 26, with numerous aneuploid and polyploid derivatives. The basic chromosome number is 7, with the exception of N. tazetta, N. elegans and N. broussonetii in which it is 10 or 11; this subgenus (Hermione) was in fact characterised by this characteristic. Polyploid species include N. papyraceus (4x=22) and N. dubius (6x=50). Phytochemistry Alkaloids As with all Amarylidaceae genera, Narcissus contains unique isoquinoline alkaloids. The first alkaloid to be identified was lycorine, from N. pseudonarcissus in 1877. These are considered a protective adaptation and are utilised in the classification of species. Nearly 100 alkaloids have been identified in the genus, about a third of all known Amaryllidaceae alkaloids, although not all species have been tested. Of the nine alkaloid ring types identified in the family, Narcissus species most commonly demonstrate the presence of alkaloids from within the Lycorine (lycorine, galanthine, pluviine) and Homolycorine (homolycorine, lycorenine) groups. Hemanthamine, tazettine, narciclasine, montanine and galantamine alkaloids are also represented. The alkaloid profile of any plant varies with time, location, and developmental stage. Narcissus also contain fructans and low molecular weight glucomannan in the leaves and plant stems. Fragrances Fragrances are predominantly monoterpene isoprenoids, with a small amount of benzenoids, although N. jonquilla has both equally represented. Another exception is N. cuatrecasasii which produces mainly fatty acid derivatives. The basic monoterpene precursor is geranyl pyrophosphate, and the commonest monoterpenes are limonene, myrcene, and trans-β-ocimene. Most benzenoids are non-methoxylated, while a few species contain methoxylated forms (ethers), e.g. N. bujei. Other ingredient include indole, isopentenoids and very small amounts of sesquiterpenes. Fragrance patterns can be correlated with pollinators, and fall into three main groups (see Pollination). Taxonomy History Early The genus Narcissus was well known to the ancient Greeks and Romans. In Greek literature Theophrastus and Dioscorides described νάρκισσος, probably referring to N. 
poeticus, although the exact species mentioned in classical literature cannot be accurately established. Pliny the Elder later introduced the Latin form narcissus. These early writers were as much interested in the plant's possible medicinal properties as they were in its botanical features and their accounts remained influential until at least the Renaissance (see also Antiquity). Mediaeval and Renaissance writers include Albert Magnus and William Turner, but it remained to Linnaeus to formally describe and name Narcissus as a genus in his Species Plantarum (1753) at which time there were six known species. Modern De Jussieu (1789) grouped Narcissus into a "family", which he called Narcissi. This was renamed Amaryllideae by Jaume Saint-Hilaire in 1805, corresponding to the modern Amaryllidaceae. For a while, Narcissus was considered part of Liliaceae (as in the illustration seen here of Narcissus candidissimus), but then the Amaryllidaceae were split off from it. Various authors have adopted either narrow (e.g. Haworth, Salisbury) or wide (e.g.Herbert, Spach ) interpretations of the genus. The narrow view treated many of the species as separate genera. Over time, the wider view prevailed with a major monograph on the genus being published by Baker (1875). One of the more controversial genera was Tapeinanthus, but today it is included in Narcissus. The eventual position of Narcissus within the Amaryllidaceae family only became settled in this century with the advent of phylogenetic analysis and the Angiosperm Phylogeny Group system. Within Amaryllidaceae the genus Narcissus belongs to the Narcisseae tribe, one of 13 within the Amaryllidoideae subfamily. It is one of two sister clades corresponding to genera in the Narcisseae, being distinguished from Sternbergia by the presence of a paraperigonium, and is monophyletic. Subdivision The infrageneric phylogeny of Narcissus still remains relatively unsettled, the taxonomy having proved complex and difficult to resolve, due to the diversity of the wild species, the ease with which natural hybridization occurs, and extensive cultivation and breeding accompanied by escape and naturalisation. Consequently, the number of accepted species has varied widely. De Candolle, in the first systematic taxonomy of Narcissus, arranged the species into named groups, and those names have largely endured for the various subdivisions since and bear his name as their authority. The situation was confused by the inclusion of many unknown or garden varieties, and it was not until the work of Baker that the wild species were all grouped as sections under one genus, Narcissus. A common classification system has been that of Fernandes based on cytology, as modified by Blanchard (1990) and Mathew (2002). Another is that of Meyer (1966). Fernandes proposed two subgenera based on basal chromosome numbers, and then subdivided these into ten sections as did Blanchard. Other authors (e.g. Webb) prioritised morphology over genetics, abandoning subgenera, although Blanchard's system has been one of the most influential. While infrageneric groupings within Narcissus have been relatively constant, their status (genera, subgenera, sections, subsections, series, species) has not. The most cited system is that of the Royal Horticultural Society (RHS) which simply lists ten sections. Three of these are monotypic (contain only one species), while two others contain only two species. Most species are placed in section Pseudonarcissus. 
Many of these subdivisions correspond roughly to the popular names for daffodil types, e.g. Trumpet Daffodils, Tazettas, Pheasant's Eyes, Hoop Petticoats, Jonquils. The most hierarchical system is that of Mathew.

Phylogenetics
The phylogenetic analysis of Graham and Barrett (2004) supported the infrageneric division of Narcissus into two clades corresponding to Fernandes' subgenera, but did not support monophyly of all sections. A later extended analysis by Rønsted et al. (2008) with additional taxa confirmed this pattern. A large molecular analysis by Zonneveld (2008) sought to reduce some of the paraphyly identified by Graham and Barrett. This led to a revision of the sectional structure. While Graham and Barrett (2004) had determined that subgenus Hermione was monophyletic, Santos-Gally et al. (2011) did not. If two species excluded in the former study are removed from the analysis, the studies are in agreement, the species in question instead forming a clade with subgenus Narcissus. Some so-called nothosections have been proposed, to accommodate natural ('ancient') hybrids (nothospecies).

Species
Estimates of the number of species in Narcissus have varied widely, from anywhere between 16 and almost 160, even in the modern era. Linnaeus originally included six species in 1753; by 1784 there were fourteen, by 1819 sixteen, and by 1831 Adrian Haworth had described 150 species. Much of the variation lies in the definition of species. Thus, a very wide view of each species, such as Webb's, results in few species, while a very narrow view such as that of Fernandes results in a larger number. Another factor is the status of hybrids, with a distinction between "ancient hybrids" and "recent hybrids". The term "ancient hybrid" refers to hybrids found growing over a large area, and therefore now considered as separate species, while "recent hybrid" refers to solitary plants found amongst their parents, with a more restricted range. Fernandes (1951) originally accepted 22 species, Webb (1980) 27. By 1968, Fernandes had 63 species, Blanchard (1990) 65 species, and Erhardt (1993) 66. In 2006 the Royal Horticultural Society's (RHS) International Daffodil Register and Classified List listed 87 species, while Zonneveld's genetic study (2008) resulted in only 36. The World Checklist of Selected Plant Families accepts 52 species, along with at least 60 hybrids, while the RHS has 81 accepted names in its October 2014 list.

Evolution
Within the Narcisseae, Narcissus (western Mediterranean) diverged from Sternbergia (Eurasia) some time in the Late Oligocene to Early Miocene epochs, around 29.3–18.1 Ma. Later, the genus divided into the two subgenera (Hermione and Narcissus) between 27.4 and 16.1 Ma. The divisions between the sections of Hermione then took place during the Miocene epoch, 19.9–7.8 Ma. Narcissus appears to have arisen in the area of the Iberian peninsula, southern France and northwestern Italy. Subgenus Hermione in turn arose in the southwestern Mediterranean and Northwest Africa.

Names and etymology
Narcissus
The derivation of the Latin is from the Greek narkissos. According to Plutarch, narkissos was connected with narkē ("numbness") because of the plant's narcotic properties; it has also been connected with the underworld. On the other hand, its etymology is considered to be clearly Pre-Greek by Beekes. It is frequently linked to the myth of Narcissus, who became so obsessed with his own reflection in water that he drowned and the narcissus plant sprang from where he died.
There is no evidence for the flower being named after Narcissus. Narcissus poeticus, which grows in Greece, has a fragrance that has been described as intoxicating. Pliny wrote that the plant was named for its fragrance (narkao, "I grow numb"), rather than after Narcissus. Furthermore, there were accounts of narcissi growing long before the story of Narcissus appeared (see Greek culture). It has also been suggested that narcissi bending over streams represent the youth admiring his reflection. Linnaeus used the Latin name "narcissus" for the plant but was preceded by others such as Matthias de l'Obel (1591) and Clusius (1576). The name Narcissus was not uncommon for men in Roman times. The plural form of the common name "narcissus" has been the cause of some confusion. Dictionaries list "narcissi", "narcissuses" and "narcissus". However, texts on usage such as Garner and Fowler state that "narcissi" is the preferred form. The common name narcissus should not be capitalised.

Daffodil
The name "daffodil" is derived from "affodell", a variant of asphodel. The narcissus was frequently referred to as the asphodel (see Antiquity). Asphodel in turn appears to come from the Greek "asphodelos". The reason for the introduction of the initial "d" is not known. From at least the 16th century, "daffadown dilly" and "daffydowndilly" have appeared as alternative names. Other names include "Lent lily".

In other languages
The Hokkien name for Narcissus, chúi-sian, can be literally translated as "water fairy", where chúi refers to water and sian refers to immortals. It is the official provincial flower of Fujian.

Distribution and habitat
Distribution
Although the family Amaryllidaceae is predominantly tropical or subtropical as a whole, Narcissus occurs primarily in the Mediterranean region, with a centre of diversity in the Iberian Peninsula (Spain and Portugal). A few species extend the range into southern France, Italy, the Balkans (N. poeticus, N. serotinus, N. tazetta), and the Eastern Mediterranean (N. serotinus), including Israel (N. tazetta). The occurrences of N. tazetta in western and central Asia as well as East Asia are considered introductions, albeit ancient (see Eastern cultures). While the exact northern limit of the natural range is unknown, the occurrences of wild N. pseudonarcissus in Great Britain, middle and northern Europe are similarly considered ancient introductions. While the Amaryllidaceae are not native to North America, Narcissus grows well in USDA hardiness zones 3B through 10, which encompass most of the United States and Canada. N. elegans occurs on the Northwest African Coast (Morocco and Libya), as well as the coastline of Corsica, Sardinia and Italy, and N. bulbocodium from Tangier to Algiers and from Tangier to Marrakech, but also on the Iberian Peninsula. N. serotinus is found along the entire Mediterranean coast. N. tazetta occurs as far east as Iran and Kashmir. Since this is one of the oldest species found in cultivation, it is likely to have been introduced into Kashmir. N. poeticus and N. pseudonarcissus have the largest distribution ranges. N. poeticus ranges from the Pyrenees along the Romanian Carpathians to the Black Sea and along the Dalmatian coast to Greece. N. pseudonarcissus ranges from the Iberian Peninsula, via the Vosges Mountains, to northern France and Belgium, and the United Kingdom, where there are still wild stocks in Southern Scotland. The only occurrence in Luxembourg is located near Lellingen, in the municipality of Kiischpelt.
In Germany it is found mainly in the nature reserve at Perlenbach-Fuhrtsbachtal and the Eifel National Park, where in the spring at Monschau the meadows are teeming with yellow blooms. One of the most easterly occurrences can be found at Misselberg near Nassau on the Lahn. However, unlike the above examples, most species have very restricted endemic ranges which may overlap resulting in natural hybrids. For instance in the vicinity of the Portuguese city of Porto where both N. pseudonarcissus and N. triandrus occur there are found various intersections of the two species while in a small area along part of the Portuguese Mondego river are found intersectional hybrids between N. scaberulus and N. triandrus. The biogeography demonstrates a phylogenetic association, for instance subgenus Hermione having a lowland distribution, but subgenus Narcissus section Apodanthi being montane and restricted to Morocco, Spain and Portugal. The remaining sections within subgenus Narcissus include both lowland and mountain habitats. Section Pseudonarcissus, although widely naturalised, is endemic to the Baetic Ranges of the southeastern Iberian Peninsula. Habitats Their native habitats are very varied, with different elevations, bioclimatic areas and substrates, being found predominantly in open spaces ranging from low marshes to rocky hillsides and montane pastures, and including grassland, scrub, woods, river banks and rocky crevices. Although requirements vary, overall there is a preference for acidic soils, although some species will grow on limestone. Narcissus scaberulus will grow on granite soils where it is moist in the growing season but dry in the summer, while Narcissus dubius thrives best in regions with hot and dry summers. The Pseudonarcissus group in their natural habitat prefers humid situations such as stream margins, springs, wet pastures, clearings of forests or shrublands with humid soils, and moist hillsides. These habitats tend to be discontinuous in the Mediterranean mountains, producing discrete isolated populations. In Germany, which has relatively little limestone, Narcissus pseudonarcissus grows in small groups on open mountain meadows or in mixed forests of fir, beech, oak, alder, ash and birch trees with well-drained soil. Ecology Life cycle Narcissus are long-lived perennial geophytes with winter-growing and summer-dormant bulbs that are mainly synanthous (leaves and flowers appearing at the same time). While most species flower in late winter to spring, five species are autumn flowering (N. broussonetii, N. cavanillesii, N. elegans, N. serotinus, N. viridiflorus). By contrast, these species are hysteranthous (leaves appear after flowering). Flower longevity varies by species and conditions, ranging from 5–20 days. After flowering leaf and root senescence sets in, and the plant appears to be 'dormant' until the next spring, conserving moisture. However, the dormant period is also one of considerable activity within the bulb primordia. It is also a period during which the plant bulb may be susceptible to predators . Like many bulb plants from temperate regions, a period of exposure to cold is necessary before spring growth can begin. This protects the plant from growth during winter when intense cold may damage it. Warmer spring temperatures then initiate growth from the bulb. Early spring growth confers a number of advantages, including relative lack of competition for pollinators, and lack of deciduous shading. 
The exception to requiring cold temperatures to initiate flowering is N. tazetta. Plants may spread clonally through the production of daughter bulbs and division, producing clumps. Narcissus species hybridise readily, although the fertility of the offspring will depend on the parental relationship.

Pollination
The flowers are insect-pollinated, the major pollinators being bees, butterflies, flies, and hawkmoths, while the highly scented night-flowering N. viridiflorus is pollinated by crepuscular moths. Pollination mechanisms fall into three groups corresponding to floral morphology (see Description - Flowers).
'Daffodil' form. Pollinated by bees seeking pollen from anthers within the corona. The broad perianth allows bees (Bombus, Anthophora, Andrena) to completely enter the flower in their search for nectar and/or pollen. In this type, the stigma lies in the mouth of the corona, extending beyond the six anthers, whose single whorl lies well within the corona. The bees come into contact with the stigma before their legs, thorax and abdomen contact the anthers, and this approach herkogamy causes cross-pollination.
'Paperwhite' form. These are adapted to long-tongued Lepidoptera, particularly sphingid moths such as Macroglossum, Pieridae and Nymphalidae, but also some long-tongued bees, and flies, all of which are primarily seeking nectar. The narrow tube admits only the insect's proboscis, while the short corona serves as a funnel guiding the tip of the proboscis into the mouth of the perianth tube. The stigma is placed either in the mouth of the tube, just above two whorls of three anthers, or hidden well below the anthers. The pollinators then carry pollen on their proboscises or faces. The long-tongued bees cannot reach the nectar at the tube base and so collect just pollen.
'Triandrus' form. Pollinated by long-tongued solitary bees (Anthophora, Bombus), which forage for both pollen and nectar. The large corona allows the bees to crawl into the perianth but then the narrow tube prevents further progress, causing them to probe deeply for nectar. The pendant flowers prevent pollination by Lepidoptera.
In N. albimarginatus there may be either a long stigma with short and mid-length anthers or a short stigma and long anthers (dimorphism). In N. triandrus there are three patterns of sexual organs (trimorphism); all have long upper anthers but vary in stigma position and the length of the lower anthers. Allogamy (outcrossing) on the whole is enforced through a late-acting (ovarian) self-incompatibility system, but some species such as N. dubius and N. longispathus are self-compatible, producing mixtures of selfed and outcrossed seeds.

Pests and diseases
Diseases of Narcissus are of concern because of the economic consequences of losses in commercial cultivation. Pests and pathogens include viruses, bacteria and fungi, as well as arthropods and gastropods. For control of pests, see Commercial uses.

Viruses
Aphids such as Macrosiphum euphorbiae can transmit viral diseases which affect the colour and shape of the leaves, as can nematodes. Up to twenty-five viruses have been described as being able to infect narcissi.
These include the Narcissus common latent virus (NCLV, Narcissus mottling-associated virus), Narcissus latent virus (NLV, Narcissus mild mottle virus) which causes green mottling near leaf tips, Narcissus degeneration virus (NDV), Narcissus late season yellows virus (NLSYV) which occurs after flowering, streaking the leaves and stems, Narcissus mosaic virus, Narcissus yellow stripe virus (NYSV, Narcissus yellow streak virus), Narcissus tip necrosis virus (NTNV) which produces necrosis of leaf tips after flowering and Narcissus white streak virus (NWSV). Less host specific viruses include Raspberry ringspot virus, Nerine latent virus (NeLV) =Narcissus symptomless virus, Arabis mosaic virus (ArMV), Broad Bean Wilt Viruses (BBWV) Cucumber mosaic virus (CMV), Tomato black ring virus (TBRV), Tomato ringspot virus (TomRSV) and Tobacco rattle virus (TRV). Of these viruses the most serious and prevalent are NDV, NYSV and NWSV. NDV is associated with chlorotic leaf striping in N. tazetta. Infection with NYSV produces light or grayish-green, or yellow stripes or mottles on the upper two-thirds of the leaf, which may be roughened or twisted. The flowers which may be smaller than usual may also be streaked or blotched. NWSV produces greenish-purple streaking on the leaves and stem turning white to yellow, and premature senescence reducing bulb size and yield. These viruses are primarily diseases of commercial nurseries. The growth inhibition caused by viral infection can cause substantial economic damage. Bacteria Bacterial disease is uncommon in Narcissus but includes Pseudomonas (bacterial streak) and Pectobacterium carotovorum sp. carotovorum (bacterial soft rot). Fungi More problematic for non-commercial plants is the fungus, Fusarium oxysporum f. sp. narcissi, which causes basal rot (rotting of the bulbs and yellowing of the leaves). This is the most serious disease of Narcissus. Since the fungus can remain in the soil for many years it is necessary to remove infected plants immediately, and to avoid planting further narcissi at that spot for a further five years. Not all species and cultivars are equally susceptible. Relatively resistant forms include N. triandrus, N. tazetta and N. jonquilla. Another fungus which attacks the bulbs, causing narcissus smoulder, is Botrytis narcissicola (Sclerotinia narcissicola) and other species of Botrytis, including Botrytis cinerea, particularly if improperly stored. Copper sulfate is used to combat the disease, and infected bulbs are burned. Blue mould rot of bulbs may be caused by infection with species of Penicillium, if they have become damaged either through mechanical injury or infestation by mites (see below). Species of Rhizopus (e.g. Rhizopus stolonifer, Rhizopus nigricans) cause bulb soft rot and Sclerotinia bulborum, black slime disease. A combination of both Peyronellaea curtisii (Stagonosporopsis curtisii) and Botrytis narcissicola causes neck rot in the bulbs. Fungi affecting the roots include Nectria radicicola (Cylindrocarpon destructans), a cause of root rot and Rosellinia necatrix causing white root rot, while others affect root and bulb, such as Aspergillus niger (black mold), and species of Trichoderma, including T. viride and T. harzianum (=T. narcissi) responsible for green mold. Other fungi affect the remainder of the plant. Another Botrytis fungus, Botrytis polyblastis (Sclerotinia polyblastis) causes brown spots on the flower buds and stems (narcissus fire), especially in damp weather and is a threat to the cut flower industry. 
Ramularia vallisumbrosae is a leaf spot fungus found in warmer climates, causing narcissus white mould disease. Peyronellaea curtisii, the Narcissus leaf scorch, also affects the leaves, as does its synanamorph, Phoma narcissi (leaf tip blight). Aecidium narcissi causes rust lesions on leaves and stems.

Animals
Arthropods that are Narcissus pests include insects such as three species of fly whose larvae attack the plants: the narcissus bulb fly Merodon equestris, and two species of hoverflies, the lesser bulb flies Eumerus tuberculatus and Eumerus strigatus. The flies lay their eggs at the end of June in the ground around the narcissi, a single female fly being able to lay up to fifty eggs. The hatching larvae then burrow through the soil towards the bulbs and consume their interiors. They then overwinter in the empty bulb shell, emerging in April to pupate in the soil, from which the adult fly emerges in May. The larvae of some moths such as Korscheltellus lupulina (the common swift moth) attack Narcissus bulbs. Other arthropods include mites such as Steneotarsonemus laticeps (bulb scale mite), Rhizoglyphus and Histiostoma, which mainly infest stored bulbs and multiply particularly at high ambient temperatures, but do not attack planted bulbs. Planted bulbs are susceptible to nematodes, the most serious of which is Ditylenchus dipsaci (narcissus eelworm), the main cause of basal plate disease, in which the leaves turn yellow and become misshapen. Infested bulbs have to be destroyed; where infestation is heavy, planting of further narcissi should be avoided for another five years. Other nematodes include Aphelenchoides subtenuis, which penetrates the roots, causing basal plate disease, and Pratylenchus penetrans (lesion nematode), the main cause of root rot in narcissi. Other nematodes such as the longodorids (Longidorus spp. or needle nematodes and Xiphinema spp. or dagger nematodes) and the stubby-root nematodes or trichodorids (Paratrichodorus spp. and Trichodorus spp.) can also act as vectors of virus diseases, such as TBRV and TomRSV, in addition to causing stunting of the roots. Gastropods such as snails and slugs also cause damage to growth.

Conservation
Many of the smallest species have become extinct, requiring vigilance in the conservation of the wild species. Narcissi are increasingly under threat from over-collection and from the loss of their natural habitats to urban development and tourism. N. cyclamineus has been considered to be either extinct or exceedingly rare, but is not currently considered endangered, and is protected. The IUCN Red List describes five species as 'Endangered' (Narcissus alcaracensis, Narcissus bujei, Narcissus longispathus, Narcissus nevadensis, Narcissus radinganorum). In 1999 three species were considered endangered, five vulnerable and six rare. In response, a number of species have been granted protected species status, and protected areas (meadows) have been established, such as the Negraşi Daffodil Meadow in Romania and the Kempley Daffodil Meadow in the UK. These areas often host daffodil festivals in the spring.

Cultivation
History
Of all the flowering plants, bulbous plants have been the most popular for cultivation. Of these, narcissi are one of the most important spring-flowering bulb plants in the world. Indigenous to Europe, the wild populations of the parent species had been known since antiquity.
Narcissi have been cultivated from at least as early as the sixteenth century in the Netherlands, when large numbers of bulbs were imported from the field, particularly Narcissus hispanicus, which soon became nearly extinct in its native habitat of France and Spain, though it is still found in the southern part of that range. The only large-scale production at that time related to the double narcissus "Van Sion" and cultivars of N. tazetta imported in 1557. Cultivation is also documented in Britain at this time, although contemporary accounts show it was well known as a favourite garden and wild flower long before that and was used in making garlands. This was a period when the development of exotic formal gardens and parks was becoming popular, particularly in what is known as the "Oriental period" (1560–1620). In his Hortus Medicus (1588), the first catalogue of a German garden's plants, Joachim Camerarius the Younger states that nine different types of daffodils were represented in his garden in Nuremberg. After his death in 1598, his plants were moved by Basilius Besler to the gardens they had designed at Willibaldsburg, the bishop's palace at Eichstätt, Upper Bavaria. That garden is described in Besler's Hortus Eystettensis (1613), by which time there were 43 different types present. Another German source at this time was Peter Lauremberg, who gives an account of the species known to him and their cultivation in his Apparatus plantarius: de plantis bulbosis et de plantis tuberosis (1632). While Shakespeare's daffodil is the wild or true English daffodil (N. pseudonarcissus), many other species were introduced, some of which escaped and naturalised, particularly N. biflorus (a hybrid) in Devon and the west of England. Gerard, in his extensive discussion of daffodils, both wild and cultivated ("bastard daffodils"), described twenty-four species in London gardens (1597) ("we have them all and every one of them in our London gardens, in great abundance", p. 114). In the early seventeenth century, Parkinson helped to ensure the popularity of the daffodil as a cultivated plant by describing a hundred different varieties in his Paradisus Terrestris (1629), and by introducing the great double yellow Spanish daffodil (Pseudonarcissus aureus Hispanicus flore pleno or Parkinson's Daffodil, see illustration) to England. Although not achieving the sensationalism of tulips, daffodils and narcissi have been much celebrated in art and literature. The largest demand for narcissi bulbs was for large trumpet daffodils, N. poeticus and N. bulbocodium, and Istanbul became important in the shipping of bulbs to western Europe. By the early baroque period both tulips and narcissi were an important component of the spring garden. By 1739 a Dutch nursery catalogue listed 50 different varieties. In 1757 Hill gave an account of the history and cultivation of the daffodil in his edited version of the works of Thomas Hale, writing "The garden does not afford, in its Kind, a prettier plant than this; nor do we know one that has been so early, or so honorably mention'd by all Kinds of Writers" (see illustration). Interest grew further when varieties that could be grown indoors became available, primarily the bunch-flowered (multiple flower heads) N. tazetta (Polyanthus Narcissus). However, interest varied by country. Maddock (1792) does not include narcissi in his list of the eight most important cultivated flowering plants in England, whereas in the Netherlands van Kampen (1760) stated that N. 
tazetta (Narcisse à bouquet) is the fifth most important – "Le Narcisse à bouquet est la premiere fleur, après les Jacinthes, les Tulipes les Renoncules, et les Anemones, (dont nous avons déja parlé,) qui merite nôtre attention" ("The bunch-flowered narcissus is the first flower, after the hyacinths, tulips, ranunculi and anemones, of which we have already spoken, that deserves our attention"). Similarly, Philip Miller, in his Gardeners Dictionary (1731–1768), refers to cultivation in Holland, Flanders and France, but not England, because it was too difficult; a similar observation was made by Sir James Justice at this time. However, for most species of Narcissus, Lauremberg's dictum Magna cura non indigent Narcissi ("narcissi do not need great care") was much cited. Narcissi became an important horticultural crop in Western Europe in the latter part of the nineteenth century, beginning in England between 1835 and 1855 and at the end of the century in the Netherlands. By the beginning of the twentieth century 50 million bulbs of N. tazetta "Paperwhite" were being exported annually from the Netherlands to the United States. With the production of triploids such as "Golden Spur" in the late nineteenth century, and of tetraploids like "King Alfred" (1899) at the beginning of the twentieth, the industry was well established, with trumpet daffodils dominating the market. The Royal Horticultural Society has been an important factor in promoting narcissi, holding the first Daffodil Conference in 1884, while the Daffodil Society, the first organisation dedicated to the cultivation of narcissi, was founded in Birmingham in 1898. Other countries followed, and the American Daffodil Society, which was founded in 1954, publishes The Daffodil Journal quarterly, a leading trade publication. Narcissi are now popular as ornamental plants for gardens, parks and as cut flowers, providing colour from the end of winter to the beginning of summer in temperate regions. They are one of the most popular spring flowers and one of the major ornamental spring flowering bulb crops, being produced both for their bulbs and as cut flowers, though cultivation in private and public spaces covers a greater area than commercial production. Over a century of breeding has resulted in thousands of varieties and cultivars being available from both general and specialist suppliers. They are normally sold as dry bulbs to be planted in late summer and autumn. They are one of the most economically important ornamental plants. Plant breeders have developed some daffodils with double, triple, or ambiguously multiple rows and layers of segments. Many of the breeding programs have concentrated on the corona (trumpet or cup), in terms of its length, shape and colour, and on the surrounding perianth, or even, as in varieties derived from N. poeticus, on a very reduced form of the corona. In gardens While some wild narcissi are specific in terms of their ecological requirements, most garden varieties are relatively tolerant of soil conditions; however, very wet soils and clay soils may benefit from the addition of sand to improve drainage. The optimum soil is a neutral to slightly acid pH of 6.5–7.0. Bulbs offered for sale are referred to as either 'round' or 'double nose'. Round bulbs are circular in cross section and produce a single flower stem, while double nose bulbs have more than one bulb stem attached at the base and produce two or more flower stems; bulbs with more than two stems are unusual. Planted narcissi bulbs produce daughter bulbs in the axil of the bulb scales, leading to the dying off of the exterior scales. 
To prevent planted bulbs forming more and more small bulbs, they can be dug up every 5–7 years and the daughters separated and replanted separately, provided that a piece of the basal plate, where the rootlets are formed, is preserved. For daffodils to flower at the end of the winter or in early spring, bulbs are planted in autumn (September–November). This plant does well in ordinary soil but flourishes best in rich soil. Daffodils like the sun but also tolerate partial shade. Narcissi are well suited to planting under small thickets of trees, where they can be grouped in clumps of 6–12 bulbs. They also grow well in perennial borders, especially in association with day lilies, which begin to form their leaves as the narcissi flowers are fading. A number of wild species and hybrids such as "Dutch Master", "Golden Harvest", "Carlton", "Kings Court" and "Yellow Sun" naturalise well in lawns, but it is important not to mow the lawn until the leaves start to fade, since they are essential for nourishing the bulb for the next flowering season. Blue Scilla and Muscari, which also naturalise well in lawns and flower at the same time as narcissi, make an attractive contrast to the yellow flowers of the latter. Unlike tulips, narcissi bulbs are not attractive to rodents and are sometimes planted near tree roots in orchards to protect them. Propagation The commonest form of commercial propagation is twin-scaling, in which the bulbs are cut into many small pieces, each pair of scales remaining connected by a small fragment of the basal plate. The fragments are disinfected and placed in nutrient media. Some 25–35 new plants can be produced from a single bulb after four years. Micropropagation methods are not used for commercial production but are used for establishing commercial stock. Breeding For commercial use, varieties meeting a minimum stem length are sought, making them ideal as cut flowers. Florists require blooms that only open when they reach the retail outlet. For garden plants the objectives are to continually expand the colour palette and to produce hardy forms, and there is a particular demand for miniature varieties. The cultivars so produced tend to be larger and more robust than the wild types. The main species used in breeding are N. bulbocodium, N. cyclamineus, N. jonquilla, N. poeticus, N. pseudonarcissus, N. serotinus and N. tazetta. Narcissus pseudonarcissus gave rise to trumpet cultivars with coloured tepals and corona, while its subspecies N. pseudonarcissus subsp. bicolor was used for white-tepaled varieties. To produce large cupped varieties, N. pseudonarcissus was crossed with N. poeticus, and to produce small cupped varieties the offspring were back-crossed with N. poeticus. Multiheaded varieties, often called "Poetaz", are mainly hybrids of N. poeticus and N. tazetta. Classification For horticultural purposes, all Narcissus cultivars are split into 13 divisions, as first described by Kington (1998) for the Royal Horticultural Society (RHS), based partly upon flower form (shape and length of corona), number of flowers per stem and flowering period, and partly upon genetic background. Division 13, which includes wild daffodils, is the exception to this scheme. The classification is a useful tool for planning planting. Most commercially available narcissi come from Divisions 1 (Trumpet), 2 (Large cupped) and 8 (Tazetta). Growers register new daffodil cultivars by name and colour with the Royal Horticultural Society, which is the international registration authority for the genus. 
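The division-and-colour registration codes described here and in the following paragraphs (for example "Thalia", registered as 5 W-W, or "Geranium" as 8 W-O) lend themselves to a simple structured representation. The sketch below is purely illustrative: the class, field and dictionary names are assumptions for demonstration, and only the divisions actually named in the text are listed.

```python
# Illustrative sketch of RHS-style daffodil registration codes:
# a division number plus perianth and corona colour letters.
from dataclasses import dataclass

# Only the divisions mentioned in the text; the full RHS scheme has 13
# divisions plus an unofficial Division 14 for miniatures.
DIVISIONS = {1: "Trumpet", 2: "Large cupped", 8: "Tazetta", 13: "Wild daffodils"}

@dataclass
class Cultivar:
    name: str
    division: int
    perianth: str  # outer tepal colour, e.g. "W" for white
    corona: str    # cup/trumpet colour, e.g. "O" for orange

    def code(self) -> str:
        # Reproduces the register style, e.g. "5 W-W" for "Thalia"
        return f"{self.division} {self.perianth}-{self.corona}"

for c in (Cultivar("Thalia", 5, "W", "W"), Cultivar("Geranium", 8, "W", "O")):
    print(c.name, c.code(), DIVISIONS.get(c.division, "other division"))
```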
Their International Daffodil Register is regularly updated with supplements available online and is searchable. The most recent supplement (2014) is the sixth (the fifth was published in 2012). More than 27,000 names were registered as of 2008, and the number has continued to grow. Registered daffodils are given a division number and colour code, such as 5 W-W ("Thalia"). In horticultural usage it is also common to find an unofficial Division 14, Miniatures, which, although drawn from the other 13 divisions, have their miniature size in common. Over 140 varieties have gained the Royal Horticultural Society's Award of Garden Merit (see List of Award of Garden Merit narcissus). Colour code Daffodil breeding has introduced a wide range of colours, in both the outer perianth (tepal segments) and the inner corona. In the registry, daffodils are coded by the colours of each of these two parts. Thus "Geranium", a Tazetta (Division 8) with a white outer perianth and an orange corona, is classified as 8 W-O. Toxicity Pharmacology All Narcissus species contain the alkaloid poison lycorine, mostly in the bulb but also in the leaves. Members of the monocot subfamily Amaryllidoideae present a unique type of alkaloids, the norbelladine alkaloids, which are 4-methylcatechol derivatives combined with tyrosine. They are responsible for the poisonous properties of a number of the species. Over 200 different chemical structures of these compounds are known, of which 79 or more are known from Narcissus alone. The toxic effects of ingesting Narcissus products, for both humans and animals (such as cattle, goats, pigs, and cats), have long been recognised, and they have been used in suicide attempts. Ingestion of N. pseudonarcissus or N. jonquilla is followed by salivation, acute abdominal pains, nausea, vomiting, and diarrhea, then neurological and cardiac events, including trembling, convulsions, and paralysis. Death may result if large quantities are consumed. The toxicity of Narcissus varies with species, N. poeticus being more toxic than N. pseudonarcissus, for instance. The distribution of toxins within the plant also varies: for instance, there is a five times higher concentration of alkaloid in the stem of N. papyraceus than in the bulb, making it dangerous to herbivores more likely to consume the stem than the bulb; this is thought to be part of the plant's defence mechanisms. The distribution of alkaloids within tissues may also reflect defence against parasites. The bulbs can also be toxic to other nearby plants, including roses, rice, and cabbages, inhibiting their growth. For instance, placing cut flowers in a vase alongside other flowers shortens the life of the latter. Poisoning Many cases of poisoning or death have occurred when narcissi bulbs have been mistaken for leeks or onions and cooked and eaten. Recovery is usually complete in a few hours without any specific intervention. In more severe cases involving ingestion of large quantities of bulbs, activated carbon, salts and laxatives may be required, and for severe symptoms intravenous atropine and emetics or stomach pumping may be indicated. However, accidental ingestion of large quantities is unusual because of the strong unpleasant taste. When narcissi were compared with a number of other plants not normally consumed by animals, narcissi were the most repellent, specifically N. pseudonarcissus. Consequently, narcissus alkaloids have been used as repellents and may also discourage fungi, molds, and bacteria. 
On 1 May 2009, a number of schoolchildren fell ill at Gorseland Primary School in Martlesham Heath, Suffolk, England, after a daffodil bulb was added to soup during a cookery class. Topical effects One of the most common dermatitis problems for flower pickers, packers, florists, and gardeners, "daffodil itch", involves dryness, fissures, scaling, and erythema in the hands, often accompanied by subungual hyperkeratosis (thickening of the skin beneath the nails). It is blamed on exposure to calcium oxalate, chelidonic acid or alkaloids such as lycorine in the sap, either due to a direct irritant effect or an allergic reaction. It has long been recognised that some cultivars provoke dermatitis more readily than others. N. pseudonarcissus and the cultivars "Actaea", "Camparelle", "Gloriosa", "Grande Monarque", "Ornatus", "Princeps" and "Scilly White" are known to do so. If bulb extracts come into contact with wounds, both central nervous system and cardiac symptoms may result. The scent can also cause toxic reactions, such as headaches and vomiting from N. bulbocodium. Uses Traditional medicine Despite the lethal potential of Narcissus alkaloids, they have been used for centuries as traditional medicines for a variety of complaints, including cancer. Plants thought to be N. poeticus and N. tazetta are described in the Bible in the treatment of what is thought to be cancer. In the Classical Greek world Hippocrates (ca. 460–370 B.C.) recommended a pessary prepared from narcissus oil for uterine tumors, a practice continued by Pedanius Dioscorides (ca. A.D. 40–90) and Soranus of Ephesus (A.D. 98–138) in the first and second centuries A.D., while the Roman Pliny the Elder (A.D. 23–79) advocated topical use. The bulbs of N. poeticus contain the antineoplastic agent narciclasine. This usage is also found in later Arabian, North African, Central American and Chinese medicine during the Middle Ages. In China N. tazetta var. chinensis was grown as an ornamental plant, but the bulbs were applied topically to tumors in traditional folk medicine. These bulbs contain pretazettine, an active antitumor compound. Narcissus products have had a variety of other uses. The Roman physician Aulus Cornelius Celsus listed narcissus root in De Medicina among medical herbs, described as emollient, erodent, and "powerful to disperse whatever has collected in any part of the body". N. tazetta bulbs were used in Turkey as a remedy for abscesses in the belief they were antiphlogistic and analgesic. Other uses include application to wounds, strains, painful joints, and various local ailments as an ointment called 'Narcissimum'. Powdered flowers have also been used medically, as an emetic, a decongestant and for the relief of dysentery, in the form of a syrup or infusion. The French used the flowers as an antispasmodic, while the Arabs used the oil against baldness and as an aphrodisiac. In the eighteenth century the Irish herbal of John K'Eogh recommended pounding the roots in honey for use on burns, bruises, dislocations and freckles, and for drawing out thorns and splinters. N. tazetta bulbs have also been used for contraception, while the flowers have been recommended for hysteria and epilepsy. In the traditional Japanese medicine of kampo, wounds were treated with narcissus root and wheat flour paste; the plant, however, does not appear in the modern kampo herb list. There is also a long history of the use of Narcissus as a stimulant and to induce trance-like states and hallucinations. 
Sophocles referred to the narcissus as the "Chaplet of the infernal Gods", a statement frequently wrongly attributed to Socrates (see Antiquity). Biological properties Extracts of Narcissus have demonstrated a number of potentially useful biological properties, including antiviral, prophage induction, antibacterial, antifungal, antimalarial, insecticidal, cytotoxic, antitumor, antimitotic, antiplatelet, hypotensive, emetic, acetylcholine esterase inhibitory, antifertility, antinociceptive, chronotropic, pheromone, plant growth inhibitory, and allelopathic effects. An ethanol extract of Narcissus bulbs was found effective in one mouse model of nociception, para-benzoquinone induced abdominal constriction, but not in another, the hot plate test. Most of these properties are due to alkaloids, but some are also due to mannose-binding lectins. The most-studied alkaloids in this group are galantamine (galanthamine), lycorine, narciclasine, and pretazettine. It is likely that the traditional use of narcissi for the treatment of cancer was due to the presence of isocarbostyril constituents such as narciclasine, pancratistatin and their congeners. N. poeticus contains about 0.12 g of narciclasine per kg of fresh bulbs. Acetylcholine esterase inhibition has attracted the most interest as a possible therapeutic intervention, with activity varying a thousandfold between species, and the greatest activity seen in those that contain galantamine or epinorgalanthamine. The rodent-repellent properties of Narcissus alkaloids have been utilised in horticulture to protect more vulnerable bulbs. Therapeutics Of all the alkaloids, only galantamine has made it to therapeutic use in humans, as the drug galantamine for Alzheimer's disease. Galantamine is an acetylcholine esterase inhibitor which crosses the blood–brain barrier and is active within the central nervous system. Daffodils are grown commercially near Brecon in Powys, Wales, to produce galantamine. Commercial uses Throughout history the scent of narcissi has been an important ingredient of perfumes, a quality that comes from essential oils rather than alkaloids. Narcissi are also an important horticultural crop and a source of cut flowers (floriculture). The Netherlands, which is the most important source of flower bulbs worldwide, is also a major centre of narcissus production. Of 16,700 hectares (ha) under cultivation for flower bulbs, narcissi account for about 1,800 hectares. In the 1990s narcissus bulb production stood at 260 million, sixth in size after tulips, gladioli, irises, crocuses and lilies, and in 2012 it was ranked third. About two-thirds of the area under cultivation is dedicated to about 20 of the most popular varieties. In the 2009/2010 season, 470 cultivars were produced on 1,578 ha. By far the largest area cultivated is for the miniature 'Tête-à-Tête', followed at some distance by 'Carlton'. The largest production cultivars are shown in Table II. "Carlton" and "Ice Follies" (Division 2: Large cup) have a long history of cultivation, together with "Dutch Master" and "Golden Harvest" (1: yellow). "Carlton" and "Golden Harvest" were introduced in 1927, and "Ice Follies" in 1953. "Carlton", with over 9 billion bulbs (350,000 tons), is among the more numerous individual plants produced in the world. The other major areas of production are the United States, Israel, which exported 25 million N. tazetta cultivar bulbs in 2003, and the United Kingdom. 
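Taking the Dutch production figures quoted above at face value, a short back-of-the-envelope calculation puts them in proportion. The snippet is illustrative only: it simply restates the article's numbers, and it treats the quoted "tons" as metric tonnes, which is an assumption.

```python
# Back-of-the-envelope arithmetic on the production figures quoted above.
total_bulb_area_ha = 16_700        # Dutch area under flower-bulb cultivation
narcissus_area_ha = 1_800          # of which narcissi
print(f"Narcissi share of Dutch bulb area: {narcissus_area_ha / total_bulb_area_ha:.1%}")

carlton_bulbs = 9e9                # "over 9 billion bulbs" of 'Carlton'
carlton_mass_kg = 350_000 * 1_000  # "350,000 tons", read here as metric tonnes
print(f"Implied average 'Carlton' bulb mass: {carlton_mass_kg / carlton_bulbs * 1000:.0f} g")
```

On these figures narcissi occupy roughly a tenth of the Dutch bulb area, and the implied average 'Carlton' bulb mass is on the order of 40 g.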
In the United Kingdom a total of 4,100 ha was planted with bulbs, of which 3,800 ha were narcissi, the UK's most important bulb crop. Much of this is for export, making the UK the largest global production centre, with about half of the total production area. While some of the production is for forcing, most is for dry bulb production. Bulb production and forcing occur in the east, while production in the south west is mainly for outdoor flower production. The farm gate value was estimated at £10m in 2007. Production of both bulbs and cut flowers takes place in open fields in beds or ridges, often in the same field, allowing adaptation to changing market conditions. Narcissi grow best in mild maritime climates. Compared to the United Kingdom, the harsher winters in the Netherlands require covering the fields with straw for protection. Areas with higher rainfall and temperatures are more susceptible to diseases that attack crops. Production is based on a one-year (UK) or two-year (Netherlands) cycle. Optimal soil pH is 6.0–7.5. Prior to planting, disinfection by hot water takes place, such as immersion at 44.4 °C for three hours. Bulbs are harvested for market in the summer, sorted, stored for 2–3 weeks, and then further disinfected by a hot (43.5 °C) bath. This eliminates infestations by narcissus fly and nematodes. The bulbs are then dried at a high temperature and stored at 15.5 °C. The initiation of new flower development in the bulb takes place in late spring before the bulbs are lifted, and is completed by midsummer while the bulbs are in storage. The optimal temperature for initiation is 20 °C, followed by cooling to 13 °C. Traditionally, sales took place in the daffodil fields prior to harvesting the bulbs, but today sales are handled by marketing boards, although still before harvesting. In the Netherlands there are special exhibition gardens for major buyers to view flowers and order bulbs; some larger ones may have more than a thousand narcissus varieties on display. While individuals can visit these gardens, they cannot buy bulbs at retail, which are only available at wholesale, usually at a minimum of several hundredweight. The most famous display is at Keukenhof, although only about 100 narcissus varieties are on display there. Forcing There is also a market for forced blooms, both as cut flowers and as potted flowers, through the winter from Christmas to Easter, the long season requiring special preparation by growers. Cut flowers For cut flowers, bulbs larger than 12 cm in size are preferred. To bloom in December, bulbs are harvested in June to July, dried, stored for four days at 34 °C, two weeks at 30 °C and two weeks at 17–20 °C, and then placed in cold storage for precooling at 9 °C for about 15–16 weeks. The bulbs are then planted in light compost in crates in a greenhouse for forcing at 13 °C–15 °C, and the blooms appear in 19–30 days. Potted flowers For potted flowers a lower temperature is used for precooling (5 °C for 15 weeks), followed by 16 °C–18 °C in a greenhouse. For later blooming (mid- and late-forcing), bulbs are harvested in July to August and the higher temperatures are omitted, the bulbs being stored at 17–20 °C after harvesting and placed in cold storage at 9 °C in September for 17–18 (cut flowers) or 14–16 (potted flowers) weeks. The bulbs can then be planted in cold frames, and then forced in a greenhouse according to requirements. N. tazetta and its cultivars are an exception to this rule, requiring no cold period. 
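Before turning to the separate N. tazetta regime, the standard schedule just described for December cut-flower forcing can be summarised as a simple list of temperature stages. The sketch below is illustrative only: the stage names and the data structure are assumptions for demonstration, while the temperatures and durations are those quoted in the text.

```python
# Illustrative summary of the December cut-flower forcing regime described
# above, as (stage, temperature in °C, duration) entries.
cut_flower_schedule = [
    ("warm storage",         "34",    "4 days"),
    ("warm storage",         "30",    "2 weeks"),
    ("intermediate storage", "17-20", "2 weeks"),
    ("precooling",           "9",     "15-16 weeks"),
    ("greenhouse forcing",   "13-15", "19-30 days to bloom"),
]

for stage, temp_c, duration in cut_flower_schedule:
    print(f"{stage:<22} {temp_c:>6} °C  {duration}")
```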
For N. tazetta and its cultivars, blooms are often harvested in October; the bulbs are lifted in May, dried and heated to 30 °C for three weeks, then stored at 25 °C for 12 weeks and planted. Flowering can be delayed by storing at 5 °C–10 °C. Culture Symbols The daffodil is the national flower of Wales, associated with Saint David's Day (March 1). The narcissus is also a national flower symbolising the new year, or Nowruz, in Kurdish culture. In the West the narcissus is perceived as a symbol of vanity, in the East as a symbol of wealth and good fortune, while in Persian literature the narcissus is a symbol of beautiful eyes. In western countries the daffodil is also associated with spring festivals such as Lent and its successor Easter. In Germany the wild narcissus, N. pseudonarcissus, is known as the Osterglocke or "Easter bell." In the United Kingdom the daffodil is sometimes referred to as the Lenten lily. Although prized as ornamental flowers, narcissi are considered unlucky by some people, because they hang their heads, implying misfortune. White narcissi, such as N. triandrus "Thalia", are especially associated with death, and have been called grave flowers. In Ancient Greece narcissi were planted near tombs, and Robert Herrick describes them as portents of death, an association which also appears in the myth of Persephone and the underworld. Art Antiquity The decorative use of narcissi dates as far back as ancient Egyptian tombs and frescoes at Pompeii. They are mentioned in the King James Version of the Bible as the Rose of Sharon and make frequent appearances in classical literature. Greek culture The narcissus appears in two Graeco-Roman myths, that of the youth Narcissus, who was turned into the flower of that name, and that of the goddess Persephone, snatched into the Underworld by the god Hades while picking the flowers. The narcissus is considered sacred to both Hades and Persephone, and grows along the banks of the river Styx in the underworld. The Greek poet Stasinos mentioned them in the Cypria amongst the flowers of Cyprus. The legend of Persephone comes to us mainly in the seventh-century BC Homeric Hymn To Demeter, where the author describes the narcissus and its role as a lure to trap the young Persephone. The flower, she recounts to her mother, was the last flower she reached for before being seized. Other Greek authors making reference to the narcissus include Sophocles and Plutarch. Sophocles, in Oedipus at Colonus, utilises narcissus in a symbolic manner, implying fertility, allying it with the cults of Demeter and her daughter Kore (Persephone), and making it, by extension, a symbol of death. Jebb comments that it is the flower of imminent death, with its narcotic fragrance, emphasised by its pale white colour. Just as Persephone reaching for the flower heralded her doom, the youth Narcissus gazing at his own reflection portended his own death. Plutarch refers to this in his Symposiacs as numbing the nerves and causing a heaviness in the limbs. He refers to Sophocles' "crown of the great Goddesses", which is the source of the English phrase "Chaplet of the infernal Gods" incorrectly attributed to Socrates. A passage by Moschus describes fragrant narcissi. Homer in his Odyssey described the underworld as having Elysian meadows carpeted with flowers, thought to be narcissus, as described by Theophrastus. A similar account is provided by Lucian, describing the flowers in the underworld. The myth of the youth Narcissus is also taken up by Pausanias. 
He believed that the myth of Persephone long antedated that of Narcissus, and hence discounted the idea that the flower was named after the youth. Roman culture Virgil, the first known Roman writer to refer to the narcissus, does so in several places, for instance twice in the Georgics. Virgil refers to the cup-shaped corona of the narcissus flower, allegedly containing the tears of the self-loving youth Narcissus. Milton makes a similar analogy: "And Daffodillies fill their Cups with Tears". Virgil also mentions narcissi three times in the Eclogues. The poet Ovid also dealt with the mythology of the narcissus. In his Metamorphoses, he recounts the story of the youth Narcissus who, after his death, is turned into the flower, and it is also mentioned in Book 5 of his poem Fasti. This theme of metamorphosis was broader than just Narcissus; for instance see crocus, laurel and hyacinth. Western culture Although there is no clear evidence that the flower's name derives directly from the Greek myth, this link between the flower and the myth became firmly part of western culture. The narcissus or daffodil is the most loved of all English plants, and appears frequently in English literature. Many English writers have referred to the cultural and symbolic importance of Narcissus. No flower has received more poetic description except the rose and the lily, with poems by authors including John Gower, Shakespeare, Milton (see Roman culture, above), Wordsworth, Shelley and Keats. Frequently the poems deal with self-love derived from Ovid's account. Gower's reference to the yellow flower of the legend has been assumed to be the daffodil or Narcissus, though as with all references in the older literature to the flower that sprang from the youth's death, there is room for some debate as to the exact species of flower indicated, some preferring Crocus. Spenser announces the coming of the Daffodil in Aprill of his Shepheardes Calender (1579). Shakespeare, who frequently uses flower imagery, refers to daffodils twice in The Winter's Tale and also in The Two Noble Kinsmen. Robert Herrick alludes to their association with death in a number of poems. Among the works of the English Romantic movement, none is better known than William Wordsworth's short 1804 poem I Wandered Lonely as a Cloud, which has become linked in the popular mind with the daffodils that form its main image. Wordsworth also included the daffodil in other poems. Yet the description of daffodils given by his sister Dorothy is just as poetic, if not more so, except that her poetry was prose, and it appears almost an unconscious imitation of the first section of the Homeric Hymn to Demeter (see Greek culture, above). Among their contemporaries, Keats refers to daffodils among those things capable of bringing "joy for ever". More recently A. E. Housman, using one of the daffodil's more symbolic names (see Symbols), wrote The Lent Lily in A Shropshire Lad, describing the traditional Easter death of the daffodil. In Black Narcissus, Rumer Godden describes the disorientation of English nuns in the Indian Himalayas, and gives the plant name an unexpected twist, alluding both to narcissism and to the effect of the perfume Narcisse Noir (Caron) on others. The novel was later adapted into the 1947 British film of the same name. The narcissus also appears in German literature, such as that of Paul Gerhardt. In the visual arts, narcissi are depicted in three different contexts: mythological (Narcissus, Persephone), floral art, and landscapes. 
The Narcissus story has been popular with painters, and the youth is frequently depicted with flowers to indicate this association. The Persephone theme is also typified by Waterhouse in his Narcissus, the floral motif by van Scorel and the landscape by Van Gogh's Undergrowth. Narcissi first started to appear in western art in the late Middle Ages, in panel paintings, particularly those depicting the crucifixion, for instance that of the Westfälischer Meister in Köln in the Wallraf-Richartz-Museum, Cologne, where daffodils symbolise not only death but also hope in the resurrection, because they are perennial and bloom at Easter. Eastern cultures In Chinese culture Narcissus tazetta subsp. chinensis (Chinese sacred lilies), which can be grown indoors, is widely used as an ornamental plant. It was probably introduced to China by Arab traders travelling the Silk Road prior to the Song dynasty, for medicinal use. Spring-flowering, they became associated with Chinese New Year, signifying good fortune, prosperity and good luck, and there are many legends in Chinese culture associated with Narcissus. In contrast to the West, narcissi have not played a significant part in Chinese garden art; however, Zhao Mengjian, in the Southern Song dynasty, was noted for his portrayal of narcissi. Narcissus bulb carving and cultivation have become an art akin to Japanese bonsai. The Japanese novel Narcissu contains many references to the narcissus, in which the main characters set out for the famed narcissus fields on Awaji Island. Islamic culture Narcissi are one of the most popular garden plants in Islamic culture. Prior to the Arab conquest of Persia, the Persian ruler Khosrau I is said to have been unable to tolerate them at feasts because they reminded him of eyes, an association that persists to this day. A Persian phrase meaning literally "a reddish-blue narcissus" is a well-known metonymy for the "eye(s) of a mistress" in the classical poetries of the Persian, Urdu, Ottoman Turkish, Azerbaijani and Chagatai languages; to this day it also survives in the vernacular names of some narcissus cultivars (for example, Shahla-ye Shiraz and Shahla-ye Kazerun). As described by the poet Ghalib (1797–1869), "God has given the eye of the narcissus the power of seeing". The eye imagery is also found in a number of poems by Abu Nuwas. Another poet who refers to narcissi is Rumi. Even the prophet Mohammed is said to have praised the narcissus, though some of the sayings that were cited as proof are considered "weak" records. Popular culture The word "daffodil" has been used widely in popular culture, from Dutch cars (DAF Daffodil) to films (Daffodils) to slurs against homosexuals and cross-dressers (as in the film J. Edgar, when Hoover's mother explains why real-life cross-dresser Barton Pinkus was called "Daffy", short for "Daffodil" and the equivalent of a pansy, and admonishes, "I'd rather have a dead son than a daffodil for a son"). Festivals In some areas where narcissi are prevalent, their blooming in spring is celebrated in festivals. For instance, the slopes around Montreux, Switzerland, and its associated riviera come alive with blooms each May (May Snow) at the annual Narcissi Festival. Festivals are also held in many other countries. Cancer Various cancer charities around the world, such as the American Cancer Society, Cancer Society, Cancer Council, Irish Cancer Society, and Marie Curie in the UK, use the daffodil as a fundraising symbol on "Daffodil Days".
Biology and health sciences
Monocots
null
142867
https://en.wikipedia.org/wiki/Ferret
Ferret
The ferret (Mustela furo) is a small, domesticated species belonging to the family Mustelidae. The ferret is most likely a domesticated form of the wild European polecat (Mustela putorius), as evidenced by the ferret's ability to interbreed with European polecats and produce hybrid offspring. Physically, ferrets resemble other mustelids because of their long, slender bodies. Including their tail, the average length of a ferret is about ; they weigh between ; and their fur can be black, brown, white, or a mixture of those colours. The species is sexually dimorphic, with males being considerably larger than females. Ferrets may have been domesticated since ancient times, but there is widespread disagreement because of the sparseness of written accounts and the inconsistency of those which survive. Contemporary scholarship agrees that ferrets were bred for sport, hunting rabbits in a practice known as rabbiting. In North America, the ferret has become an increasingly prominent choice of household pet, with over five million in the United States alone. The legality of ferret ownership varies by location. In New Zealand and some other countries, restrictions apply due to the damage done to native fauna by feral colonies of polecat–ferret hybrids. The ferret has also served as a fruitful research animal, contributing to research in neuroscience and infectious disease, especially influenza. The domestic ferret is often confused with the black-footed ferret (Mustela nigripes), a species native to North America. Etymology The name "ferret" is derived from the Latin , meaning "little thief", a likely reference to the common ferret penchant for secreting away small items. In Old English (Anglo-Saxon), the animal was called . The word seems to appear in Middle English in the 14th century from the Latin, with the modern spelling of "ferret" by the 16th century. The Greek word íktis, Latinized as ictis occurs in a play written by Aristophanes, The Acharnians, in 425 BC. Whether this was a reference to ferrets, polecats, or the similar Egyptian mongoose is uncertain. A male ferret is called a hob; a female ferret is a jill. A spayed female is a sprite, a neutered male is a gib, and a vasectomised male is known as a hoblet. Ferrets under one year old are known as kits. A group of ferrets is known as a "business", or historically as a "busyness". Other purported collective nouns, including "besyness", "fesynes", "fesnyng" and "feamyng", appear in some dictionaries, but are almost certainly ghost words. Biology Characteristics Ferrets have a typical mustelid body-shape, being long and slender. Their average length is about including a tail. Their pelage has various colorations including brown, black, white or mixed. They weigh between and are sexually dimorphic as the males are substantially larger than females. The average gestation period is 42 days and females may have two or three litters each year. The litter size is usually between three and seven kits which are weaned after three to six weeks and become independent at three months. They become sexually mature at approximately 6 months and the average life span is 7 to 10 years. Ferrets are induced ovulators and can copulate for longer than one hour. Behavior Ferrets spend 14–18 hours a day asleep and are most active around the hours of dawn and dusk, meaning they are crepuscular. If they are caged, they should be taken out daily to exercise and satisfy their curiosity; they need at least an hour and a place to play. 
Unlike their polecat ancestors, which are solitary animals, most ferrets will live happily in social groups. They are territorial, like to burrow, and prefer to sleep in an enclosed area. Like many other mustelids, ferrets have scent glands near their anus, the secretions from which are used in scent marking. Ferrets can recognize individuals from these anal gland secretions, as well as the sex of unfamiliar individuals. Ferrets may also use urine marking for mating and individual recognition. As with skunks, ferrets can release their anal gland secretions when startled or scared, but the smell is much less potent and dissipates rapidly. Most pet ferrets in the US are sold descented (with the anal glands removed). In many other parts of the world, including the UK and other European countries, de-scenting is considered an unnecessary mutilation. If excited, they may perform a behavior called the "weasel war dance", characterized by frenzied sideways hops, leaps and bumping into nearby objects. Despite its common name, it is not aggressive but is a joyful invitation to play. It is often accompanied by a unique soft clucking noise, commonly referred to as "dooking". When scared, ferrets will hiss; when upset, they squeak softly. Diet Ferrets are obligate carnivores. The natural diet of their wild ancestors consisted of whole small prey, including meat, organs, bones, skin, feathers and fur. Ferrets have short digestive systems and a quick metabolism, so they need to eat frequently. Prepared dry foods consisting almost entirely of meat (including high-grade cat food, although specialized ferret food is increasingly available and preferable) provide the most nutritional value. Some ferret owners feed pre-killed or live prey (such as mice and rabbits) to their ferrets to more closely mimic their natural diet. Ferret digestive tracts lack a cecum and the animal is largely unable to digest plant matter. Before much was known about ferret physiology, many breeders and pet stores recommended food like fruit in the ferret diet, but it is now known that such foods are inappropriate, and may in fact have negative consequences for ferret health. Ferrets imprint on their food at around six months old. This can make introducing new foods to an older ferret a challenge, and even simply changing brands of kibble may meet with resistance from a ferret that has never eaten the food as a kit. It is therefore advisable to expose young ferrets to as many different types and flavors of appropriate food as possible. Dentition Ferrets have four types of teeth (the number includes maxillary (upper) and mandibular (lower) teeth), with a dental formula of 3.1.3.1 in the upper jaw and 3.1.3.2 in the lower, giving 34 teeth in total: Twelve small incisor teeth (only long) located between the canines in the front of the mouth. These are used for grooming. Four canines used for killing prey. Twelve premolar teeth that the ferret uses to chew food—located at the sides of the mouth, directly behind the canines. The ferret uses these teeth to cut through flesh, using them in a scissors action to cut the meat into digestible chunks. Six molars (two on top and four on the bottom) at the far back of the mouth are used to crush food. Health Ferrets are known to suffer from several distinct health problems. Among the most common are cancers affecting the adrenal glands, pancreas and lymphatic system. 
Adrenal disease, a growth of the adrenal glands that can be either hyperplasia or cancer, is most often diagnosed by signs like unusual hair loss, increased aggression, and difficulty urinating or defecating. Treatment options include surgery to excise the affected glands, melatonin or deslorelin implants, and hormone therapy. The causes of adrenal disease are speculated to include unnatural light cycles, diets based around processed ferret foods, and prepubescent neutering. It has also been suggested that there may be a hereditary component to adrenal disease. Insulinoma, a type of cancer of the islet cells of the pancreas, is the most common form of cancer in ferrets. It is most common in ferrets between the ages of 4 and 5 years old. Lymphoma is the most common malignancy in ferrets. Ferret lymphosarcoma occurs in two forms—juvenile lymphosarcoma, a fast-growing type that affects ferrets younger than two years, and adult lymphosarcoma, a slower-growing form that affects ferrets four to seven years old. Viral diseases include canine distemper, influenza and ferret systemic coronavirus. A high proportion of ferrets with white markings forming coat patterns known as blaze, badger, or panda coats, such as a stripe extending from the face down the back of the head to the shoulder blades, or a fully white head, have congenital deafness (partial or total) which is similar to Waardenburg syndrome in humans. Ferrets without white markings, but with premature graying of the coat, are also more likely to have some deafness than ferrets with solid coat colors which do not show this trait. Most albino ferrets are not deaf; if deafness does occur in an albino ferret, this may be due to an underlying white coat pattern which is obscured by the albinism. Health problems can occur in unspayed females when they are not being used for breeding. Similar to domestic cats, ferrets can also suffer from hairballs and dental problems. Ferrets will also often chew on and swallow foreign objects, which can lead to bowel obstruction. History of domestication As with most domestic animals, the original reason for ferrets being domesticated by human beings is uncertain, but it may have involved hunting. According to phylogenetic studies, the ferret was domesticated from the European polecat (Mustela putorius), and likely descends from a North African lineage of the species. Analysis of mitochondrial DNA suggests that ferrets were domesticated around 2,500 years ago. It has been claimed that the ancient Egyptians were the first to domesticate ferrets, but as no mummified remains of a ferret have yet been found, nor any hieroglyph of a ferret, and no polecat now occurs wild in the area, that idea seems unlikely. The American Society of Mammalogists classifies M. furo as a distinct species. Ferrets were probably used by the Romans for hunting. Genghis Khan, ruler of the Mongol Empire, is recorded as using ferrets in a gigantic hunt in 1221 that aimed to purge an entire region of wild animals. Colonies of feral ferrets have established themselves in areas where there is no competition from similarly sized predators, such as in the Shetland Islands and in remote regions in New Zealand. Where ferrets coexist with polecats, hybridization is common. It has been claimed that New Zealand has the world's largest feral population of ferret–polecat hybrids. In 1877, farmers in New Zealand demanded that ferrets be introduced into the country to control the rabbit population, which had also been introduced by humans. 
Five ferrets were imported in 1879, and in 1882–1883, 32 shipments of ferrets were made from London, totaling 1,217 animals. Only 678 landed, and 198 were sent from Melbourne, Australia. On the voyage, the ferrets were mated with the European polecat, creating a number of hybrids that were capable of surviving in the wild. In 1884 and 1886, close to 4,000 ferrets and ferret hybrids, 3,099 weasels and 137 stoats were turned loose. Concern was raised that these animals would eventually prey on indigenous wildlife once rabbit populations dropped, and this is exactly what happened to New Zealand's bird species, which previously had had no mammalian predators. Ferreting For millennia, the main use of ferrets was for hunting, or "ferreting". With their long, lean build and inquisitive nature, ferrets are very well equipped for getting down holes and chasing rodents, rabbits and moles out of their burrows. The Roman historians Pliny and Strabo record that Caesar Augustus sent "" from Libya to the Balearic Islands to control rabbit plagues there in 6 BC; it is speculated that "" could refer to ferrets, mongooses, or polecats. In England, in 1390, a law was enacted restricting the use of ferrets for hunting to the relatively wealthy. Ferrets were first introduced into the American continents in the 17th century, and were used extensively from 1860 until the start of World War II to protect grain stores in the American West from rodents. They are still used for hunting in some countries, including the United Kingdom, where rabbits are considered a pest by farmers. The practice is illegal in several countries where it is feared that ferrets could unbalance the ecology. In 2009 in Finland, where ferreting was previously unknown, the city of Helsinki began to use ferrets to restrict the city's rabbit population to a manageable level. Ferreting was chosen because in populated areas it is considered to be safer and less ecologically damaging than shooting the rabbits. As pets In the United States, ferrets were relatively rare pets until the 1980s. A government study by the California State Bird and Mammal Conservation Program estimated that by 1996 about 800,000 domestic ferrets were being kept as pets in the United States. Regulation Australia: It is illegal to keep ferrets as pets in Queensland and the Northern Territory; in the Australian Capital Territory a licence is required. Brazil: Ferrets are allowed only if they are given a microchip identification tag and sterilized. New Zealand: It has been illegal to sell, distribute or breed ferrets in New Zealand since 2002 unless certain conditions are met. United States: Ferrets were once banned in many US states, but most of these laws were rescinded in the 1980s and 1990s as they became popular pets. Illegal: Ferrets are illegal in California under Fish and Game Code Section 2118 and the California Code of Regulations, although it is not illegal for veterinarians in the state to treat ferrets kept as pets. "Ferrets are strictly prohibited as pets under Hawaii law because they are potential carriers of the rabies virus"; the territory of Puerto Rico has a similar law. Ferrets are restricted by some municipalities, such as New York City, which renewed its ban in 2015. They are also prohibited on many military bases. A permit to own a ferret is needed in other areas, including Rhode Island. Illinois and Georgia do not require a permit to merely possess a ferret, but a permit is required to breed ferrets. 
It was once illegal to own ferrets in Dallas, Texas, but the current Dallas City Code for Animals includes regulations for the vaccination of ferrets. Pet ferrets are legal in Wisconsin; however, legality varies by municipality. The city of Oshkosh, Wisconsin, for example, classifies ferrets as wild animals and consequently prohibits them from being kept within the city limits. Also, an import permit from the state department of agriculture is required to bring one into the state. Under common law, ferrets are deemed "wild animals" subject to strict liability for injuries they cause, but in several states statutory law has overruled the common law, deeming ferrets "domestic". Japan: In Hokkaido prefecture, ferrets must be registered with the local government. In other prefectures, no restrictions apply. South Africa: In the Western Cape province, a permit is required to buy, sell, or possess a ferret. Other uses Because they share many anatomical and physiological features with humans, ferrets are extensively used as experimental subjects in biomedical research. Fields such as virology, reproductive physiology, anatomy, endocrinology and neuroscience all rely on ferrets for studies into cardiovascular disease, nutrition, respiratory diseases such as SARS and human influenza, airway physiology, cystic fibrosis and gastrointestinal disease. Ferrets are a particularly important animal model for human influenza, and have been used to study the 2009 H1N1 (swine flu) virus. Ferrets inoculated intra-nasally with human naso-pharyngeal washes develop an influenza transmissible to other cage mates and human investigators. A very small experimental study of ferrets found that a nasal spray effectively blocked the transmission of the SARS-CoV-2 coronavirus that causes COVID-19. In the UK, ferret racing is often a feature of rural fairs or festivals, with people placing small bets on ferrets that run set routes through pipes and wire mesh. Although financial bets are placed, the event is primarily for entertainment purposes as opposed to 'serious' betting sports such as horse or greyhound racing. Terminology and coloring Most ferrets are either albinos, with white fur and pink eyes, or display the typical dark masked sable coloration of their wild polecat ancestors. In recent years fancy breeders have produced a wide variety of colors and patterns. Color refers to the color of the ferret's guard hairs, undercoat, eyes and nose; pattern refers to the concentration and distribution of color on the body, mask and nose, as well as white markings on the head or feet when present. Some national organizations, such as the American Ferret Association, have attempted to classify these variations in their showing standards. There are four basic colors: sable (including chocolate and dark brown), albino, dark-eyed white (DEW, also known as black-eyed white or BEW) and silver. All the other colors of a ferret are variations on one of these four categories. Waardenburg-like coloring Ferrets with a white stripe on their face or a fully white head, primarily blazes, badgers and pandas, almost certainly carry a congenital defect which shares some similarities to Waardenburg syndrome. This causes, among other things, a cranial deformation in the womb which broadens the skull, white face markings, and also partial or total deafness. It is estimated that as many as 75 percent of ferrets with these Waardenburg-like colorings are deaf. 
White ferrets were favored in the Middle Ages for the ease of seeing them in thick undergrowth. Leonardo da Vinci's painting Lady with an Ermine is likely mislabelled; the animal is probably a ferret, not a stoat ("ermine" being an alternative name for the stoat in its white winter coat). Similarly, the ermine portrait of Queen Elizabeth I shows her with her pet ferret, which has been decorated with painted-on heraldic ermine spots. The Ferreter's Tapestry is a 15th-century tapestry from Burgundy, France, now part of the Burrell Collection housed in the Glasgow Museum and Art Galleries. It shows a group of peasants hunting rabbits with nets and white ferrets. This image was reproduced in Renaissance Dress in Italy 1400–1500, by Jacqueline Herald, Bell & Hyman. Gaston Phoebus' Book of the Hunt was written in approximately 1389 to explain how to hunt different kinds of animals, including how to use ferrets to hunt rabbits. Illustrations show how multicolored ferrets that were fitted with muzzles were used to chase rabbits out of their warrens and into waiting nets. Import restrictions Australia – Ferrets cannot be imported into Australia. A report drafted in August 2000 seems to be the only effort made to date to change the situation. Canada – Ferrets brought from anywhere except the US require a Permit to Import from the Canadian Food Inspection Agency Animal Health Office. Ferrets from the US require only a vaccination certificate signed by a veterinarian. Ferrets under three months old are not subject to any import restrictions. European Union – Dogs, cats and ferrets can travel freely within the European Union under the pet passport scheme. To cross a border within the EU, ferrets require at minimum an EU PETS passport and an identification microchip (though some countries will accept a tattoo instead). Vaccinations are required; most countries require a rabies vaccine, and some require a distemper vaccine and treatment for ticks and fleas 24 to 48 hours before entry. Ferrets occasionally need to be quarantined before entering the country. PETS travel information is available from any EU veterinarian or on government websites. New Zealand – New Zealand has banned the import of ferrets into the country. United Kingdom – The UK accepts ferrets under the EU's PETS travel scheme. Ferrets must be microchipped, vaccinated against rabies, and documented. They must be treated for ticks and tapeworms 24 to 48 hours before entry. They must also arrive via an authorized route. Ferrets arriving from outside the EU may be subject to a six-month quarantine.
Biology and health sciences
Mustelidae
Animals
142905
https://en.wikipedia.org/wiki/Sea%20turtle
Sea turtle
Sea turtles (superfamily Chelonioidea), sometimes called marine turtles, are reptiles of the order Testudines and of the suborder Cryptodira. The seven existing species of sea turtles are the flatback, green, hawksbill, leatherback, loggerhead, Kemp's ridley, and olive ridley. Six of the seven sea turtle species, all but the flatback, are present in U.S. waters, and are listed as endangered and/or threatened under the Endangered Species Act. All but the flatback turtle are listed as threatened with extinction globally on the IUCN Red List of Threatened Species. The flatback turtle is found only in the waters of Australia, Papua New Guinea, and Indonesia. Sea turtles can be categorized as hard-shelled (cheloniid) or leathery-shelled (dermochelyid). The only dermochelyid species of sea turtle is the leatherback. Description For each of the seven species of sea turtles, females and males are the same size. As adults, males can be distinguished from females by their longer tails, which have a cloacal opening near the tip. Adult female sea turtles have shorter tails, with a cloacal opening near the base. Hatchling and sub-adult turtles do not exhibit sexual dimorphism; it is not possible to determine their sex by looking at them. In general, sea turtles have a more fusiform body plan than their terrestrial or freshwater counterparts. This tapering at both ends reduces volume and means that sea turtles cannot retract their head and limbs into their shells for protection, unlike many other turtles and tortoises. However, the streamlined body plan reduces friction and drag in the water and allows sea turtles to swim more easily and swiftly. The leatherback sea turtle is the largest sea turtle, reaching 1.4 to more than 1.8 m (4.6 to 5.9 ft) in length and weighing between 300 and 640 kg (661 to 1,411 lbs). Other sea turtle species are smaller, ranging from as little as 60 cm (2 ft) long in the case of the Kemp's ridley, which is the smallest sea turtle species, to 120 cm (3.9 ft) long in the case of the green turtle, the second largest. The skulls of sea turtles have cheek regions that are enclosed in bone. Although this condition appears to resemble that found in the earliest known fossil reptiles (anapsids), it is possible it is a more recently evolved trait in sea turtles, placing them outside the anapsids. Taxonomy and evolution Sea turtles, along with other turtles and tortoises, are part of the order Testudines. All species except the leatherback sea turtle are in the family Cheloniidae. The superfamily name Chelonioidea and family name Cheloniidae are based on the Ancient Greek word for tortoise. The leatherback sea turtle is the only extant member of the family Dermochelyidae. Fossil evidence of marine turtles goes back to the Late Jurassic (150 million years ago) with genera such as Plesiochelys, from Europe. In Africa, the first marine turtle is Angolachelys, from the Turonian of Angola. A lineage of unrelated marine testudines, the pleurodire (side-necked) bothremydids, also survived well into the Cenozoic. Other pleurodires are also thought to have lived at sea, such as Araripemys and extinct pelomedusids. Modern sea turtles are not descended from more than one of the groups of sea-going turtles that have existed in the past; they instead constitute a single radiation that became distinct from all other turtles at least 110 million years ago. 
Their closest extant relatives are in fact the snapping turtles (Chelydridae), musk turtles (Kinosternidae), and hickatee (Dermatemyidae) of the Americas, which alongside the sea turtles constitute the clade Americhelydia. The oldest possible representative of the lineage (Panchelonioidea) leading to modern sea turtles was possibly Desmatochelys padillai from the Early Cretaceous. Desmatochelys was a protostegid, a lineage that would later give rise to some very large species but went extinct at the end of the Cretaceous. Presently thought to be outside the crown group that contains modern sea turtles (Chelonioidea), the exact relationships of protostegids to modern sea turtles are still debated due to their primitive morphology; they may be the sister group to the Chelonoidea, or an unrelated turtle lineage that convergently evolved similar adaptations. The earliest "true" sea turtle that is known from fossils is Nichollsemys from the Early Cretaceous (Albian) of Canada. In 2022, the giant fossil species Leviathanochelys was described from Spain. This species inhabited the oceans covering Europe in the Late Cretaceous and rivaled the concurrent giant protostegids such as Archelon and Protostega as one of the largest turtles to ever exist. Unlike the protostegids, which have an uncertain relationship to modern sea turtles, Leviathanochelys is thought to be a true sea turtle of the superfamily Chelonioidea. Sea turtles' limbs and brains have evolved to adapt to their diets. Their limbs originally evolved for locomotion, but more recently evolved to aid them in feeding. They use their limbs to hold, swipe, and forage their food. This helps them eat more efficiently. Cladogram Below is a cladogram showing the phylogenetic relationships of living and extinct sea turtles in the Chelonioidea based on Evers et al. (2019): An alternate phylogeny was proposed by Castillo-Visa et al. (2022): Distribution and habitat Sea turtles can be found in all oceans except for the polar regions. The flatback sea turtle is found solely on the northern coast of Australia. The Kemp's ridley sea turtle is found solely in the Gulf of Mexico and along the East Coast of the United States. Sea turtles are generally found in the waters over continental shelves. During the first three to five years of life, sea turtles spend most of their time in the pelagic zone floating in seaweed mats. Green sea turtles in particular are often found in Sargassum mats, in which they find food, shelter and water. Once the sea turtle has reached adulthood it moves closer to the shore. Females will come ashore to lay their eggs on sandy beaches during the nesting season. Sea turtles migrate to reach their spawning beaches, which are limited in numbers. Living in the ocean therefore means they usually migrate over large distances. All sea turtles have large body sizes, which is helpful for moving large distances. Large body sizes also offer good protection against the large predators (notably sharks) found in the ocean. In 2020, diminished human activity resulting from the COVID-19 virus caused an increase in sea turtle nesting. Some areas in Thailand saw an abnormally high number of nests, and Florida experienced a similar phenomenon. Less plastic and light pollution could explain these observations. Life cycle Sea turtles are thought to reach sexual maturity from about 10−20 years old depending on species and methodology. However, reliable estimates are difficult to ascertain. 
Mature sea turtles may migrate thousands of miles to reach breeding sites. After mating at sea, adult female sea turtles return to land to lay their eggs. Different species of sea turtles exhibit various levels of philopatry. In the extreme case, females return to the same beach where they hatched. This typically takes place every two to four years once the female reaches maturity. The mature nesting female hauls herself onto the beach, nearly always at night, and finds suitable sand in which to create a nest. Using her hind flippers, she digs a deep circular hole. After the hole is dug, the female then starts filling the nest with her clutch of soft-shelled eggs. Depending on the species, a typical clutch may contain 50–350 eggs. After laying, she re-fills the nest with sand, re-sculpting and smoothing the surface, and then camouflaging the nest with vegetation until it is relatively undetectable visually. She may also dig decoy nests. The whole process takes 30 to 60 minutes. She then returns to the ocean, leaving the eggs untended. Females may lay 1–8 clutches in a single season. Female sea turtles alternate between mating in the water and laying their eggs on land. Most sea turtle species nest individually, but ridley sea turtles come ashore en masse in an event known as an arribada (arrival). With the Kemp's ridley sea turtle this occurs during the day. Sea turtles have temperature-dependent sex determination. Warmer temperatures produce female hatchlings, while cooler temperatures produce male hatchlings. The eggs will incubate for 50–60 days. The eggs in one nest hatch together over a short period of time. The baby sea turtles break free of the egg shell, dig through the sand, and crawl into the sea. Most species of sea turtles hatch at night. However, the Kemp's ridley sea turtle commonly hatches during the day. Sea turtle nests that hatch during the day are more vulnerable to predators, and may encounter more human activity on the beach. Larger hatchlings have a higher probability of survival than smaller individuals, which can be explained by the fact that larger offspring are faster and thus less exposed to predation. Predators can only handle prey up to a certain size, so larger individuals are not targeted as often. A study conducted on this topic shows that body size is positively correlated with speed, so larger baby sea turtles are exposed to predators for a shorter amount of time. This size-dependent predation on chelonians is thought to have driven the evolution of large body sizes. In 1987, Carr discovered that the young of green and loggerhead sea turtles spent a great deal of their pelagic lives in floating sargassum mats. Within these mats, they found ample shelter and food. In the absence of sargassum, young sea turtles feed in the vicinity of upwelling "fronts". In 2007, Reich determined that green sea turtle hatchlings spend the first three to five years of their lives in pelagic waters. In the open ocean, pre-juveniles of this particular species were found to feed on zooplankton and smaller nekton before being recruited into inshore seagrass meadows as obligate herbivores. Physiology Osmoregulation Sea turtles maintain an internal environment that is hypotonic to the ocean. To maintain hypotonicity they must excrete excess salt ions. Like other marine reptiles, sea turtles rely on a specialized gland to rid the body of excess salt, because reptilian kidneys cannot produce urine with a higher ion concentration than sea water.
All species of sea turtles have a lachrymal gland in the orbital cavity, capable of producing tears with a higher salt concentration than sea water. Leatherback sea turtles face an increased osmotic challenge compared to other species of sea turtle, since their primary prey are jellyfish and other gelatinous plankton, whose fluids have the same concentration of salts as sea water. The much larger lachrymal gland found in leatherback sea turtles may have evolved to cope with the higher intake of salts from their prey. A constant output of concentrated salty tears may be required to balance the input of salts from regular feeding, even though leatherback sea turtle tears can have a salt ion concentration almost twice that of other sea turtle species. Hatchlings depend on drinking sea water immediately upon entering the ocean to replenish water lost during the hatching process. Salt gland functioning begins quickly after hatching, so that the young sea turtles can establish ion and water balance soon after entering the ocean. Survival and physiological performance hinge on immediate and efficient hydration following emergence from the nest. Thermoregulation All sea turtles are poikilotherms. However, leatherback sea turtles (family Dermochelyidae) are able to maintain a body temperature warmer than the ambient water by thermoregulation through the trait of gigantothermy. Green sea turtles in the relatively cooler Pacific are known to haul themselves out of the water on remote islands to bask in the sun. This behavior has only been observed in a few locations, including the Galapagos, Hawaii, Europa Island, and parts of Australia. Diving physiology Sea turtles are air-breathing reptiles that have lungs, so they regularly surface to breathe. Sea turtles spend a majority of their time underwater, so they must be able to hold their breath for long periods. Dive duration largely depends on activity. A foraging sea turtle may typically spend 5–40 minutes underwater while a sleeping sea turtle can remain underwater for 4–7 hours. Remarkably, sea turtle respiration remains aerobic for the vast majority of voluntary dive time. When a sea turtle is forcibly submerged (e.g. entangled in a trawl net) its diving endurance is substantially reduced, so it is more susceptible to drowning. When surfacing to breathe, a sea turtle can quickly refill its lungs with a single explosive exhalation and rapid inhalation. Their large lungs permit rapid exchange of oxygen and avoid trapping gases during deep dives. Cold-stunning is a phenomenon that occurs when sea turtles enter cold ocean water, which causes the turtles to float to the surface and therefore makes it impossible for them to swim. Fluorescence Gruber and Sparks (2015) observed the first known fluorescence in a marine tetrapod (four-limbed vertebrate). Sea turtles were the first biofluorescent reptiles found in the wild. According to Gruber and Sparks (2015), fluorescence is observed in an increasing number of marine creatures (cnidarians, ctenophores, annelids, arthropods, and chordates) and is now also considered to be widespread in cartilaginous and ray-finned fishes. The two marine biologists accidentally made the observation in the Solomon Islands on a hawksbill sea turtle, one of the rarest and most endangered sea turtle species in the ocean, during a night dive aimed at filming the biofluorescence emitted by small sharks and coral reefs.
The role of biofluorescence in marine organisms is often attributed to a strategy for attracting prey or perhaps a way to communicate. It could also serve as a way of defense or camouflage for the sea turtle hiding during night amongst other fluorescent organisms like corals. Fluorescent corals and sea creatures are best observed during night dives with a blue LED light and with a camera equipped with an orange optical filter to capture only the fluorescence light. Sensory modalities Navigation Below the surface, the sensory cues available for navigation change dramatically. Light availability decreases quickly with depth, and is refracted by the movement of water when present, celestial cues are often obscured, and ocean currents cause continuous drift. Most sea turtle species migrate over significant distances to nesting or foraging grounds, some even crossing entire ocean basins. Passive drifting within major current systems, such as those in the North Atlantic Gyre, can result in ejection well outside of the temperature tolerance range of a given species, causing heat stress, hypothermia, or death. In order to reliably navigate within strong gyre currents in the open ocean, migrating sea turtles possess both a bicoordinate magnetic map and magnetic compass sense, using a form of navigation termed Magnetoreception. Specific migratory routes have been shown to vary between individuals, making the possession of both a magnetic map and compass sense advantageous for sea turtles. A bicoordinate magnetic map gives sea turtles the ability to determine their position relative to a goal with both latitudinal and longitudinal information, and requires the detection and interpretation of more than one magnetic parameter going in opposite directions to generate, such as Magnetic field intensity and Inclination angle. A magnetic compass sense allows sea turtles to determine and maintain a specific magnetic heading or orientation. These magnetic senses are thought to be inherited, as hatchling sea turtles swim in directions that would keep them on course when exposed to the magnetic field signatures of various locations along their species' migratory routes. Natal homing behavior is well described in sea turtles, and genetic testing of turtle populations at different nesting sites has shown that magnetic field is a more reliable indicator of genetic similarity than physical distance between sites. Additionally, nesting sites have been recorded to "drift" along with isoline shifts in the magnetic field. Magnetoreception is thought to be the primary navigation tool used by nesting sea turtles in returning to natal beaches. There are three major theories explaining natal site learning: inherited magnetic information, socially facilitated migration, and geomagnetic imprinting. Some support has been found for geomagnetic imprinting, including successful experiments transplanting populations of sea turtles by relocating them prior to hatching, but the exact mechanism is still not known. Ecology Diet The loggerhead, Kemp's ridley, olive ridley, and hawksbill sea turtles are omnivorous their entire life. Omnivorous turtles may eat a wide variety of plant and animal life including decapods, seagrasses, seaweed, sponges, mollusks, cnidarians, Echinoderms, worms and fish. However, some species specialize on certain prey. The diet of green sea turtles changes with age. Juveniles are omnivorous, but as they mature they become exclusively herbivorous. 
This diet shift has an effect on the green sea turtle's morphology. Green sea turtles have a serrated jaw that is used to eat sea grass and algae. Leatherback sea turtles feed almost exclusively on jellyfish and help control jellyfish populations. Hawksbill sea turtles principally eat sponges, which constitute 70–95% of their diets in the Caribbean. Larynx mechanisms There was little information regarding the sea turtle's larynx. Sea turtles, like other turtle species, lack an epiglottis to cover the larynx entrance. Key findings from an experiment reveal the following in regards to the larynx morphology: a close apposition between the linguolaryngeal cleft's smooth mucosal walls and the laryngeal folds, a dorsal part of the glottis, the glottal mucosa attached to the arytenoid cartilage, and the way the hyoid sling is arranged and the relationship between the compressor laryngis muscle and cricoid cartilage. The glottal opening and closing mechanisms have been examined. During the opening stage, two abductor artytenoideae muscles swing arytenoid cartilages and the glottis walls. As a result, the glottis profile is transformed from a slit to a triangle. In the closing stage, the tongue is drawn posteriorly due to the close apposition of the glottis walls and linguolaryngeal cleft walls and hyoglossal sling contractions. Relationship with humans Sea turtles are caught worldwide, although it is illegal to hunt most species in many countries. A great deal of intentional sea turtle harvests worldwide are for food. Many parts of the world have long considered sea turtles to be fine dining. In England during the 1700s, sea turtles were consumed as a delicacy to near extinction, often as turtle soup. Ancient Chinese texts dating to the 5th century B.C.E. describe sea turtles as exotic delicacies. Many coastal communities around the world depend on sea turtles as a source of protein, often harvesting several sea turtles at once and keeping them alive on their backs until needed. Coastal peoples gather sea turtle eggs for consumption. To a much lesser extent, some species are targeted for their shells. Tortoiseshell, a traditional decorative ornamental material used in Japan and China, comes from the carapace scutes of the hawksbill sea turtle. Ancient Greeks and ancient Romans processed sea turtle scutes (primarily from the hawksbill sea turtle) for various articles and ornaments used by their elites, such as combs and brushes. The skin of the flippers is prized for use as shoes and assorted leather goods. In various West African countries, sea turtles are harvested for traditional medicinal use. The Moche people of ancient Peru worshipped the sea and its animals. They often depicted sea turtles in their art. J. R. R. Tolkien's poem "Fastitocalon" echoes a second-century Latin tale in the Physiologus of the Aspidochelone ("round-shielded turtle"); it is so large that sailors mistakenly land and light a fire on its back, and are drowned when it dives. Beach towns, such as Tortuguero, Costa Rica, have transitioned from a tourism industry that made profits from selling sea turtle meat and shells to an ecotourism-based economy. Tortuguero is considered to be the founding location of sea turtle conservation. In the 1960s the cultural demand for sea turtle meat, shells, and eggs was quickly killing the once-abundant sea turtle populations that nested on the beach. The Caribbean Conservation Corporation began working with villagers to promote ecotourism as a permanent substitute to sea turtle hunting. 
Sea turtle nesting grounds became sustainable. Tourist visits to the nesting grounds can, however, stress the sea turtles, and eggs can be damaged in the process. Since the creation of a sea turtle ecotourism-based economy, Tortuguero annually hosts thousands of tourists who visit the protected beach that hosts sea turtle walks and nesting grounds. Walks to observe the nesting sea turtles require a certified guide and this controls and minimises disturbance of the beaches. It also gives the locals a financial interest in conservation and the guides now defend the sea turtles from threats such as poaching; efforts in Costa Rica's Pacific Coast are facilitated by a nonprofit organization, Sea Turtles Forever. Thousands of people are involved in sea turtle walks, and substantial revenues accrue from the fees paid for the privilege. In other parts of the world where sea turtle breeding sites are threatened by human activity, volunteers often patrol beaches as a part of conservation activities, which may include relocating sea turtle eggs to hatcheries, or assisting hatching sea turtles in reaching the ocean. Locations in which such efforts exist include the east coast of India, São Tomé and Príncipe, Sham Wan in Hong Kong, and the coast of Florida. Importance to ecosystems Sea turtles play key roles in two habitat types: oceans and beaches/dunes. In the oceans, sea turtles, especially green sea turtles, are among the very few creatures (manatees are another) that eat sea grass. Sea grass needs to be constantly cut short to help it grow across the sea floor. Sea turtle grazing helps maintain the health of the sea grass beds. Sea grass beds provide breeding and developmental grounds for numerous marine animals. Without them, many marine species humans harvest would be lost, as would the lower levels of the food chain. The resulting chain reaction could see many more marine species eventually becoming endangered or extinct. Sea turtles use beaches and sand dunes as nesting sites on which to lay their eggs. Such coastal environments are nutrient-poor and depend on vegetation to protect against erosion. Eggs, hatched or unhatched, and hatchlings that fail to make it into the ocean are nutrient sources for dune vegetation, so protecting these nesting habitats for sea turtles also sustains the vegetation, forming a positive feedback loop. Sea turtles also maintain a symbiotic relationship with yellow tang, in which the fish will eat algae growing on the shell of a sea turtle. Conservation status and threats The IUCN Red List classifies three species of sea turtle as either "endangered" or "critically endangered". An additional three species are classified as "vulnerable". The flatback sea turtle is considered "data deficient", meaning that its conservation status is unclear due to lack of data. All species of sea turtle are listed in CITES Appendix I, restricting international trade of sea turtles and sea turtle products. However, the usefulness of global assessments for sea turtles has been questioned, particularly due to the presence of distinct genetic stocks and spatially separated regional management units (RMUs). Each RMU is subject to a unique set of threats that generally cross jurisdictional boundaries, resulting in some sub-populations of the same species showing recovery while others continue to decline. This has triggered the IUCN to conduct threat assessments at the sub-population level for some species recently.
These new assessments have highlighted an unexpected mismatch between where conservation-relevant science has been conducted on sea turtles, and where there is the greatest need for conservation. For example, as of August 2017, about 69% of studies using stable isotope analysis to understand the foraging distribution of sea turtles have been conducted in RMUs listed as "least concern" by the IUCN. Additionally, all populations of sea turtles that occur in United States waters are listed as threatened or endangered by the US Endangered Species Act (ESA). The US listing status of the loggerhead sea turtle is under review as of 2012. The ESA manages sea turtles by population and not by species. Management In the Caribbean, researchers are having some success in assisting a comeback. In September 2007, Corpus Christi, Texas, wildlife officials found 128 Kemp's ridley sea turtle nests on Texas beaches, a record number, including 81 on North Padre Island (Padre Island National Seashore) and four on Mustang Island. Wildlife officials released 10,594 Kemp's ridley sea turtle hatchlings along the Texas coast in recent years. The Philippines has had several initiatives dealing with the issue of sea turtle conservation. In 2007, the province of Batangas declared the catching and eating of sea turtles (locally referred to as Pawikans) illegal. However, the law seems to have had little effect as sea turtle eggs are still in demand in Batangan markets. In September 2007, several Chinese poachers were apprehended off the Turtle Islands in the country's southernmost province of Tawi-Tawi. The poachers had collected more than a hundred sea turtles, along with 10,000 sea turtle eggs. Evaluating the progress of conservation programs is difficult, because many sea turtle populations have not been assessed adequately. Most information on sea turtle populations comes from counting nests on beaches, but this does not provide an accurate picture of the whole sea turtle population. A 2010 United States National Research Council report concluded that more detailed information on sea turtles' life cycles, such as birth rates and mortality, is needed. Nest relocation may not be a useful conservation technique for sea turtles. In one study on the freshwater Arrau turtle (Podocnemis expansa), researchers examined the effects of nest relocation. They discovered that clutches of this freshwater turtle that were transplanted to a new location had higher mortality rates and more morphological abnormalities compared to non-transplanted clutches. However, in a study of loggerhead sea turtles (Caretta caretta), Dellert et al. found that relocating nests at risk of inundation increased the success of eggs and hatchlings and decreased the risk of inundation. Predators and disease Most sea turtle mortality happens early in life. Sea turtles usually lay around 100 eggs at a time, but on average only one of the eggs from the nest will survive to adulthood. Raccoons, foxes, and seabirds may raid nests, and hatchlings may be eaten within minutes of hatching as they make their initial run for the ocean. Once in the water, they are susceptible to seabirds, large fish and even other sea turtles. Adult sea turtles have few predators. Large aquatic carnivores such as sharks and crocodiles are their biggest threats; however, reports of terrestrial predators attacking nesting females are not uncommon. Jaguars have been reported to smash into sea turtle shells with their paws and scoop out the flesh.
Fibropapillomatosis disease causes tumors in sea turtles. While many of the things that endanger sea turtles are natural predators, increasingly many threats have arrived with the ever-growing presence of humans. Bycatch One of the most significant contemporary threats to sea turtles comes from bycatch due to imprecise fishing methods. Long-lining has been identified as a major cause of accidental sea turtle deaths. There is also a black-market demand for tortoiseshell for both decoration and supposed health benefits. Sea turtles must surface to breathe. Caught in a fisherman's net, they are unable to surface and thus drown. In early 2007, almost a thousand sea turtles were killed inadvertently in the Bay of Bengal over the course of a few months after becoming entangled in nets. However, some relatively inexpensive changes to fishing techniques, such as slightly larger hooks and traps from which sea turtles can escape, can dramatically cut the mortality rate. Turtle excluder devices (TEDs) have reduced sea turtle bycatch in shrimp nets by 97 percent. Beach development Light pollution from beach development is a threat to baby sea turtles; the glow from city sources can cause them to head into traffic instead of the ocean. There has been some movement to protect these areas. On the east coast of Florida, parts of the beach known to harbor sea turtle nests are protected by fences. Conservationists have monitored hatchings, relocating lost baby sea turtles to the beach. Hatchlings find their way to the ocean by crawling towards the brightest horizon and can become disoriented along the coastline. Lighting restrictions can prevent lights from shining on the beach and confusing hatchlings. Sea turtle-safe lighting uses red or amber LED light, invisible to sea turtles, in place of white light. Poaching Another major threat to sea turtles is the black-market trade in eggs and meat. This is a problem throughout the world, but especially a concern in China, the Philippines, India, Indonesia and the coastal nations of Latin America. Estimates reach as high as 35,000 sea turtles killed a year in Mexico and the same number in Nicaragua. Conservationists in Mexico and the United States have launched "Don't Eat Sea Turtle" campaigns in order to reduce this trade in sea turtle products. These campaigns have involved figures such as Dorismar, Los Tigres del Norte and Maná. Sea turtles are often consumed during the Catholic season of Lent, even though they are reptiles, not fish. Consequently, conservation organizations have written letters to the Pope asking that he declare sea turtles to be meat, so that they could not be eaten during Lent. Marine debris Another danger to sea turtles comes from marine debris, especially plastics, such as those in the Great Pacific Garbage Patch, which may be mistaken for jellyfish, and abandoned fishing nets in which they can become entangled. Sea turtles of all species are endangered by the way humans use and dispose of plastic. Although recycling is well known and widely practiced, not everyone recycles, and the amount of plastic in the oceans and on beaches grows every day; littered plastic accounts for about 80% of that amount. Hatchlings are exposed to plastic from the moment they leave the nest: on their journey from land to sea they encounter a great deal of plastic debris, and some become trapped in it and die from lack of food and water or from overheating in the sun. Sea turtles eat plastic bags because they confuse them with their actual diet of jellyfish, algae and other soft-bodied prey.
The consumption of plastic differs among sea turtle species, but when they ingest plastic it can clog their intestines and cause internal bleeding that eventually kills them. In 2015, an olive ridley sea turtle was found with a plastic drinking straw lodged inside its nose. The video of Nathan J. Robinson has helped raise considerable awareness about the threat posed by plastic pollution to sea turtles. Research into turtles' consumption of plastic is growing. Researchers from the University of Exeter and Plymouth Marine Laboratory tested 102 turtles and found plastic in every one of their stomachs; more than 800 pieces of plastic were recovered from those 102 turtles, roughly 20 times more than had been found in previous research. The most common items found were cigarette butts, tyre fragments, plastic in many forms, and fishing material. The chemicals in ingested plastic can damage internal organs and clog airways, and are a leading cause of death among affected turtles. If a female is close to laying eggs, chemicals she has ingested from plastic can pass into her eggs and affect her offspring, which are then unlikely to survive. There is a large quantity of plastic in the ocean, 80% of which comes from landfills; the ratio of plankton to plastic in the ocean is one to six. The Great Pacific Garbage Patch is a deep swirl of garbage in the Pacific Ocean that contains 3.5 million tons of garbage. It is also known as the "plastic island". Climate change Climate change may also pose a threat to sea turtles. Since sand temperature at nesting beaches determines the sex of a sea turtle while it develops in the egg, there is concern that rising temperatures may produce too many females. However, more research is needed to understand how climate change might affect sea turtle sex distribution and what other possible threats it may pose. Studies have shown that climate change is skewing sea turtle sex ratios. A study published in January 2018 in Current Biology, "Environmental Warming and Feminization of One of the Largest Sea Turtle Populations in the World", showed that far more hatchlings were being born female than male. Scientists took blood samples from many young sea turtles near the Great Barrier Reef. Before this warming, the ratio of males to females had been roughly balanced, with slightly more females than males but still enough males to sustain normal reproduction and life cycles. The study found that about 99% of the young turtles sampled were female. The temperature of the sand has a major influence on the sex of a sea turtle; unlike most animals, whose sex is determined genetically, a sea turtle's sex depends on the temperature at which the egg incubates. Warmer or hot sand usually produces females, while cooler sand usually produces males. Climate change has pushed nesting-beach temperatures much higher than they should be, and the sand tends to be hotter each time sea turtles return to lay their eggs. Adaptation to these temperatures could occur in principle, but it would take generations, and it is made harder by the fact that the sand temperature is always changing. Sand temperature is not the only climate-related pressure on sea turtles. Rising sea levels interfere with their navigational memory: females carry an imprinted map of where they hatch, nest, and travel afterwards.
As sea levels rise, that map becomes unreliable, making it hard for turtles to return to where they started, and the beaches on which they lay their eggs are washed away. Climate change also affects the number and severity of storms, which can wipe out sea turtle nesting grounds and destroy eggs that have already been laid; rising water levels are another way nesting grounds disappear. The destruction of nesting grounds and the disruption of the turtles' maps are doubly harmful: unable to lay eggs where they usually do, and accustomed to a regular nesting schedule, the turtles struggle to find suitable new places to nest. The temperature of the ocean is also rising, which affects what sea turtles can eat. Coral reefs are heavily impacted by rising temperatures, and much of what many sea turtles eat lives on or around coral reefs. Most animals that live in coral reefs need the reefs to survive, so as reefs die, the sea life around them declines as well, affecting many species. Oil spills Sea turtles are very vulnerable to oil pollution, both because of the oil's tendency to linger on the water's surface, and because oil can affect them at every stage of their life cycle. Oil can poison sea turtles upon entering their digestive system. Sea turtles follow a life cycle from birth, which varies somewhat between the sexes but is followed throughout life: they hatch on the beach, reach the water, and move out to find food; they then begin their breeding migration and mate; females make their way back to a beach to nest and start the cycle over, while males return to feeding after mating and repeat the process. Oil spills can disrupt this cycle at any stage. If a female ingests oil before laying her eggs, chemicals from the oil can be passed on to the offspring, which will then struggle to survive. The diet of sea turtles can also be affected by oil: if the things they eat are coated with oil or have ingested it, the oil can enter the turtle's system and damage its internal organs. Rehabilitation Injured sea turtles are rescued and rehabilitated (and, if possible, released back to the ocean) by professional organizations, such as the Gumbo Limbo Nature Center in Boca Raton, Florida, the Karen Beasley Sea Turtle Rescue and Rehabilitation Center in Surf City, North Carolina, and Sea Turtles 911 in Hainan, China. One rescued sea turtle, named Nickel for the coin that was found lodged in her throat, lives at the Shedd Aquarium in Chicago. Symbiosis with barnacles Sea turtles are believed to have a commensal relationship with some barnacles, in which the barnacles benefit from growing on sea turtles without harming them. Barnacles are small, hard-shelled crustaceans found attached to many different substrates below or just above the ocean surface. The adult barnacle is a sessile organism; however, in its larval stage it is planktonic and can move about the water column. The larval stage chooses where to settle and ultimately the habitat for its full adult life, which is typically between 5 and 10 years. However, estimates of age for a common sea turtle barnacle species, Chelonibia testudinaria, suggest that this species lives for at least 21 months, with individuals older than this uncommon.
Chelonibia barnacles have also been used to distinguish between the foraging areas of sea turtle hosts. By analyzing stable isotope ratios in barnacle shell material, scientist can identify differences in the water (temperature and salinity) that different hosts have been swimming through, and thus differentiate between the home areas of host sea turtles. A favorite settlement for barnacle larvae is the shell or skin around the neck of sea turtles. The larvae glue themselves to the chosen spot, a thin layer of flesh is wrapped around them and a shell is secreted. Many species of barnacles can settle on any substrate; however, some species of barnacles have an obligatory commensal relationship with specific animals, which makes finding a suitable location harder. Around 29 species of "turtle barnacles" have been recorded. However, it is not solely on sea turtles that barnacles can be found; other organisms also serve as a barnacle's settlements. These organisms include mollusks, whales, decapod crustaceans, manatees and several other groups related to these species. Sea turtle shells are an ideal habitat for adult barnacles for three reasons. Sea turtles tend to live long lives, greater than 70 years, so barnacles do not have to worry about host death. However, mortality in sea turtle barnacles is often driven by their host shedding the scutes on which the barnacle is attached, rather than the death of the sea turtle itself. Secondly, barnacles are suspension feeders. Sea turtles spend most of their lives swimming and following ocean currents and as water runs along the back of the sea turtle's shell it passes over the barnacles, providing an almost constant water flow and influx of food particles. Lastly, the long distances and inter-ocean travel these sea turtles swim throughout their lifetime offers the perfect mechanism for dispersal of barnacle larvae. Allowing the barnacle species to distribute themselves throughout global waters is a high fitness advantage of this commensalism. This relationship, however, is not truly commensal. While the barnacles are not directly parasitic to their hosts, they have negative effects to the sea turtles on which they choose to reside. The barnacles add extra weight and drag to the sea turtle, increasing the energy it needs for swimming and affecting its ability to capture prey, with the effect increasing with the quantity of barnacles affixed to its back.
Biology and health sciences
Reptiles
null
142981
https://en.wikipedia.org/wiki/UNIVAC%20I
UNIVAC I
The UNIVAC I (Universal Automatic Computer I) was the first general-purpose electronic digital computer design for business application produced in the United States. It was designed principally by J. Presper Eckert and John Mauchly, the inventors of the ENIAC. Design work was started by their company, Eckert–Mauchly Computer Corporation (EMCC), and was completed after the company had been acquired by Remington Rand (which later became part of Sperry, now Unisys). In the years before successor models of the UNIVAC I appeared, the machine was simply known as "the UNIVAC". The first UNIVAC was accepted by the United States Census Bureau on March 31, 1951, and was dedicated on June 14 that year. The fifth machine (built for the U.S. Atomic Energy Commission) was used by CBS to predict the result of the 1952 presidential election. With a sample of a mere 5.5% of the voter turnout, it famously predicted an Eisenhower landslide. History Development and design In early 1946, months after the completion of ENIAC, the University of Pennsylvania adopted a new patent policy, which would have required Eckert and Mauchly to assign all their patents to the university if they stayed beyond spring of that year. Unable to reach an agreement with the university, the duo left the Moore School of Electrical Engineering in March 1946, along with much of the senior engineering staff. Simultaneously, the duo founded the Electronic Control Company (later renamed the Eckert-Mauchly Computer Corporation) in Philadelphia. The conception of the UNIVAC I began in April 1946, a month after the company was founded, when the duo received a $300,000 research deposit from the United States Census Bureau. Later in August of that year, during the last of the Moore School Lectures, the Moore School team members were proposing new technological designs for the EDVAC computer (which was also in development at the time) and its stored program concept. They were also simultaneously conceiving ideas for a potential successor model to the EDVAC, which were under the working titles of "Parallel-Type EDVAC," "Statistical EDVAC," and simply, "EDVAC II." In April 1947, Eckert and Mauchly created the tentative instruction code, C-1, for their potential successor model to the EDVAC, which was the earliest document on the programming of an electronic digital computer intended for commercial use. A month later, they renamed their next project "the UNIVAC." Later in October of that year, the duo drafted a patent application for a mercury acoustic delay-line electronic memory system. The patent was eventually accepted in February 1953 as the "first device to gain widespread acceptance as a reliable computer memory system." Meanwhile, in November 1947, the Electronic Control Company began advertising the UNIVAC I (which was not shown, as it was not yet fully conceptualized at that point). In 1948, the company, renamed the Eckert-Mauchly Computer Corporation, secured a contract with the United States Census Bureau to begin construction on the UNIVAC I. At the same time, Harry Straus, impressed with the development of the duo's next invention, convinced the directors of American Totalisator to invest $500,000 to shore up the financially troubled Eckert-Mauchly Computer Corporation. In early 1949, Betty Holberton, one of the developers of the project, created the UNIVAC Instruction Code C-10, the first software to allow a computer to be operated by keyboarded commands rather than dials and switches.
At the same time, Grace Hopper left the Harvard Computation Laboratory to join the EMCC as a senior mathematician and programmer to help develop the UNIVAC I. Later in June of that year, Mauchly conceived Short Code, the first high-level programming language for an electronic computer, to be used with the BINAC. The Short Code was later tested on the UNIVAC I in early 1950. Meanwhile, by the time the BINAC was delivered to Northrop Aircraft in September 1949, Eckert and Mauchly had received six new orders for the UNIVAC I, so they decided to focus on finishing it. A month later, however, Harry Straus was killed when his twin-engine airplane crashed, and American Totalisator withdrew its promise of financial support. The setback was short-lived: Remington Rand bought the duo's company in February 1950 to help finish construction of the UNIVAC I. The company then became Remington Rand's "Eckert-Mauchly Division." Construction of the UNIVAC I was completed by December 1950, and it was formally accepted by the United States Census Bureau in March 1951 so data could be processed more quickly and accurately. Market positioning The UNIVAC I was the first American computer designed at the outset for business and administrative use with fast execution of relatively simple arithmetic and data transport operations, as opposed to the complex numerical calculations required of scientific computers. As such, the UNIVAC competed directly against punch-card machines, though the UNIVAC originally could neither read nor punch cards. That shortcoming hindered sales to companies concerned about the high cost of manually converting large quantities of existing data stored on cards. This was corrected by adding offline card processing equipment, the UNIVAC Tape to Card converter, to transfer data between cards and UNIVAC magnetic tapes. However, the early market share of the UNIVAC I was lower than the Remington Rand Company wished. To promote sales, the company partnered with CBS to have UNIVAC I predict the result of the 1952 United States presidential election live on television. The machine predicted, with 100-to-1 odds, that Dwight D. Eisenhower would win in a landslide over Adlai Stevenson, receiving 32,915,949 votes and winning the Electoral College 438–93. This contradicted the final Gallup poll, which had predicted that Eisenhower would win only a close contest. The CBS crew was so certain that UNIVAC was wrong that they believed it was not working, so they changed a "national trend factor" from 40% to 4% to obtain a seemingly more plausible 268–263 result, and broadcast that instead. It was soon noticed that the prediction assuming 40% was closer to the truth, so they changed it back. On election night, Eisenhower received 34,075,029 votes in a 442–89 Electoral College victory. UNIVAC had a margin of error of 3.5% of Eisenhower's popular vote tally and was within four votes of his electoral vote total. The prediction and its use in CBS's election coverage gave rise to a greater public awareness of computing technology, while computerized predictions became a widely used part of election night broadcasts. Installations The first contracts were with government agencies such as the Census Bureau, the U.S. Air Force, and the U.S. Army Map Service. Contracts were also signed by the ACNielsen Company and the Prudential Insurance Company.
Following the sale of the Eckert–Mauchly Computer Corporation to Remington Rand in 1950, Remington Rand, facing cost overruns on the project, convinced Nielsen and Prudential to cancel their contracts. The first sale, to the Census Bureau, was marked with a formal ceremony on March 31, 1951, at the Eckert–Mauchly Division's factory at 3747 Ridge Avenue, Philadelphia. The machine was not actually shipped until the following December, because, as the sole fully set-up model, it was needed for demonstration purposes, and the company was apprehensive about the difficulties of dismantling, transporting, and reassembling the delicate machine. As a result, the first installation was with the second computer, delivered to the Pentagon in June 1952. UNIVAC installations, 1951–1954 Originally priced at US$159,000, the UNIVAC I rose in price until systems sold for between $1,250,000 and $1,500,000. A total of 46 systems were eventually built and delivered. The UNIVAC I was too expensive for most universities, and Sperry Rand, unlike companies such as IBM, was not strong enough financially to afford to give many away. However, Sperry Rand donated UNIVAC I systems to Harvard University (1956), the University of Pennsylvania (1957), and Case Institute of Technology in Cleveland, Ohio (1957). The UNIVAC I at Case was still operable in 1965 but had been supplanted by a UNIVAC 1107. A few UNIVAC I systems stayed in service long after they were made obsolete by advancing technology. The Census Bureau used its two systems until 1963, amounting to 12 and 9 years of service, respectively. Sperry Rand itself used two systems in Buffalo, New York, until 1968. The insurance company Life and Casualty of Tennessee used its system until 1970, totalling over 13 years of service. Technical description Major physical features The UNIVAC I used 6,103 vacuum tubes, consumed 125 kW, and could perform about 1,905 operations per second running on a 2.25 MHz clock. The Central Complex alone (i.e. the processor and memory unit) was 4.3 m by 2.4 m by 2.6 m high. The complete system occupied more than 35.5 m2 (382 ft2) of floor space. Main memory details The main memory consisted of 1000 words of 12 characters each. When representing numbers, they were written as 11 decimal digits plus sign. The 1000 words of memory consisted of 100 channels of 10-word mercury delay-line registers. The input/output buffers were 60 words each, consisting of 12 channels of 10-word mercury delay-line registers. There were six channels of 10-word mercury delay-line registers as spares. With modified circuitry, seven more channels controlled the temperature of the seven mercury tanks, and one more channel was used for the 10-word "Y" register. The total of 126 mercury channels was contained in the seven mercury tanks mounted on the backs of sections MT, MV, MX, NT, NV, NX, and GV. Each mercury tank was divided into 18 mercury channels. Each 10-word mercury delay-line channel was made up of three sections: a channel in a column of mercury, with receiving and transmitting quartz piezo-electric crystals mounted at opposite ends; an intermediate-frequency chassis, connected to the receiving crystal, containing amplifiers, detector, and compensating delay, mounted on the shell of the mercury tank; and a recirculation chassis, containing the cathode follower, pulse former and retimer, and modulator (which drove the transmitting crystal), plus the input, clear, and memory-switch gates, mounted in the sections adjacent to the mercury tanks.
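One way to picture a 10-word mercury delay-line channel as described above is as a recirculating queue: a word launched at the transmitting crystal re-emerges at the receiving crystal a fixed number of word-times later and is re-inserted unless new data replaces it. The following Python sketch is a highly simplified software analogy; the channel size and word length follow the figures above, while everything else (the class, the `tick` interface, the sample word) is illustrative and not a description of the real circuitry.

```python
from collections import deque

WORDS_PER_CHANNEL = 10   # each mercury channel held 10 words (from the text)
WORD_LENGTH = 12         # 12 characters per word (from the text)

class DelayLineChannel:
    """Toy model of a recirculating 10-word mercury delay-line channel."""
    def __init__(self):
        blank = " " * WORD_LENGTH
        self.line = deque([blank] * WORDS_PER_CHANNEL)

    def tick(self, write_word=None):
        """Advance one word-time: the word at the receiving crystal emerges,
        then either recirculates or is replaced by newly written data."""
        emerging = self.line.popleft()
        self.line.append(write_word if write_word is not None else emerging)
        return emerging

# A 1000-word main memory would use 100 such channels (100 x 10 = 1000 words).
channel = DelayLineChannel()
channel.tick(write_word="HELLO WORLD ")     # write one 12-character word into the stream
for _ in range(WORDS_PER_CHANNEL - 1):      # let it circulate back around
    channel.tick()
print(channel.tick())                        # the written word re-emerges
```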
Instructions and data Instructions were six alphanumeric characters, packed two instructions per word. The addition time was 525 microseconds and the multiplication time was 2150 microseconds. A non-standard modification called "Overdrive" existed that allowed three four-character instructions per word under some circumstances. (Ingerman's simulator for the UNIVAC, referenced below, also makes this modification available.) Digits were represented internally using excess-3 ("XS3") binary-coded decimal (BCD) arithmetic with six bits per digit, using the same value as the digits of the alphanumeric character set (and one parity bit per digit for error checking), allowing 11-digit signed magnitude numbers. But with the exception of one or two machine instructions, UNIVAC was considered by programmers to be a decimal machine, not a binary machine, and the binary representation of the characters was irrelevant. If a non-digit character was encountered in a position during an arithmetic operation, the machine passed it unchanged to the output, and any carry into the non-digit was lost. (Note, however, that a peculiarity of UNIVAC I's addition/subtraction circuitry was that the "ignore", space, and minus characters were occasionally treated as numeric, with values of –3, –2, and –1, respectively, and the apostrophe, ampersand, and left parenthesis were occasionally treated as numeric, with values 10, 11, and 12.) Input/output Besides the operator's console, the only I/O devices connected to the UNIVAC I were up to 10 UNISERVO tape drives, a Remington Standard electric typewriter and a Tektronix oscilloscope. The UNISERVO was the first commercially sold computer tape drive. It used a data density of 128 bits per inch (with a real transfer rate of 7,200 characters per second) on magnetically plated phosphor bronze tapes. The UNISERVO could also read and write UNITYPER-created tapes at 20 bits per inch. The UNITYPER was an offline typewriter-to-tape device, used by programmers and for minor data editing. Backward and forward tape read and write operations were possible on the UNIVAC and were fully overlapped with instruction execution, permitting high system throughput in typical sort/merge data processing applications. Large volumes of data could be submitted as input via magnetic tapes created on an offline card-to-tape system and produced as output via a separate offline tape-to-printer system. The operator's console had three columns of decimal-coded switches that allowed any of the 1000 memory locations to be displayed on the oscilloscope. Since the mercury delay-line memory stored bits in a serial format, a programmer or operator could monitor any memory location continuously and, with sufficient patience, decode its contents as displayed on the scope. The on-line typewriter was typically used for announcing program breakpoints, checkpoints, and for memory dumps. Operations A typical UNIVAC I installation had several ancillary devices: the UNIPRINTER, which read metal UNIVAC magnetic tape using a tape reader and typed the data at 10 characters per second using a modified Remington typewriter; the UNIVAC Card to Tape converter, which read punched cards at 240 cards per minute and wrote their data on metal UNIVAC magnetic tape using a UNISERVO tape drive; and a tape-to-card converter, which read a magnetic tape and produced punched cards. The UNIVAC did not provide an operating system. Operators mounted a program tape on a UNISERVO, from which it could be loaded automatically by processor logic.
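Returning to the excess-3 digit representation described under "Instructions and data" above: each decimal digit is stored as its value plus three, here placed in a six-bit field with a parity bit appended. The exact UNIVAC character-code bit assignments and parity convention are not given in the text, so the layout below (high zone bits set to zero, odd parity) is an assumption for illustration only; only the "value + 3" rule itself comes from the description above.

```python
def xs3_encode(digit: int) -> str:
    """Encode a decimal digit as excess-3 in a 6-bit field plus an odd parity bit.
    The 6-bit layout and odd parity are illustrative assumptions; only the
    'value + 3' excess-3 rule is taken from the description above."""
    if not 0 <= digit <= 9:
        raise ValueError("only decimal digits 0-9 can be encoded")
    bits = format(digit + 3, "06b")                     # excess-3: store value + 3
    parity = "1" if bits.count("1") % 2 == 0 else "0"   # make the total number of 1s odd
    return bits + parity

def xs3_decode(code: str) -> int:
    """Decode a 7-bit (6 data + 1 parity) excess-3 digit, checking parity."""
    bits, parity = code[:6], code[6]
    if (bits.count("1") + int(parity)) % 2 != 1:
        raise ValueError("parity error")
    return int(bits, 2) - 3

# Round-trip every digit as a quick self-check.
for d in range(10):
    assert xs3_decode(xs3_encode(d)) == d
print([xs3_encode(d) for d in range(10)])
```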
The appropriate source and output data tapes would be mounted and the program started. Results tapes then went to the offline printer or, as was typical in data processing, into short-term storage to be updated with the next set of data produced on the offline card-to-tape unit. The mercury delay-line memory tank temperature was very closely controlled, as the speed of sound in mercury varies with temperature. In the event of a power failure, many hours could elapse before the temperature stabilized. Reliability Eckert and Mauchly were uncertain about the reliability of digital logic circuits, about which little was known at the time. The UNIVAC had been designed with parallel computation circuits and a statistical comparison of the results. In practice, however, only failing components, i.e., the vacuum tubes, yielded comparison faults, as the circuit designs as such proved very reliable. A regimen was established to ensure the reliability of the fragile vacuum tubes, the choke point of the entire operation. Prior to use, large lots of the predominant tube type, the 25L6, were burned in and thoroughly tested. (Often half of any given production lot would be thrown away.) Technicians would then install a tested and burned-in tube in an easily diagnosed location such as the memory recirculate amplifiers. Then, once further aged and proven reliable, this "golden" tube was sent to stock to be pulled out for difficult-to-diagnose logic positions. Furthermore, it took approximately 30 minutes to turn on the computer: all cathode heater power was stepped up gradually in order to reduce the in-rush current and the concomitant thermal stress on the tubes. As a result of these measures, uptimes (mean times between failures, MTBF) of many days to weeks were eventually obtained on the processor. (The UNISERVO did not have vacuum columns but rather springs and strings to buffer the tape from the reels to the capstan. These mechanical components then became the most frequent source of failures.)
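The payoff of the burn-in regimen described above can be made concrete with a simple series-reliability estimate: if every tube must work for the processor to work, and failures are independent, the system MTBF is roughly the per-tube MTBF divided by the number of tubes. The tube count below comes from the text; the two per-tube MTBF figures are invented purely to illustrate the effect of screening and are not figures from the text.

```python
# Series-system MTBF estimate: system_mtbf ~= tube_mtbf / number_of_tubes,
# assuming independent, exponentially distributed tube failures.
NUM_TUBES = 6103   # vacuum-tube count quoted in the text above

def system_mtbf_hours(tube_mtbf_hours: float, n_tubes: int = NUM_TUBES) -> float:
    return tube_mtbf_hours / n_tubes

# Hypothetical per-tube MTBFs, before and after burn-in and selective placement.
for label, tube_mtbf in [("un-screened tubes", 500_000.0),
                         ("burned-in 'golden' tubes", 5_000_000.0)]:
    mtbf = system_mtbf_hours(tube_mtbf)
    print(f"{label:26s} -> system MTBF ~ {mtbf:6.0f} h (~ {mtbf / 24:.1f} days)")
```

With these assumed numbers the estimate moves from a few days to roughly a month between failures, which is consistent in order of magnitude with the "many days to weeks" quoted above.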
Technology
Early computers
null
143023
https://en.wikipedia.org/wiki/Equatorial%20bulge
Equatorial bulge
An equatorial bulge is a difference between the equatorial and polar diameters of a planet, due to the centrifugal force exerted by the rotation about the body's axis. A rotating body tends to form an oblate spheroid rather than a sphere. On Earth The planet Earth has a rather slight equatorial bulge; its equatorial diameter is about 43 km greater than its polar diameter, a difference of about 0.3% of the equatorial diameter. If Earth were scaled down to a globe with an equatorial diameter of 1 metre, that difference would be only about 3 millimetres. While too small to notice visually, that difference is still more than twice the largest deviations of the actual surface from the ellipsoid, including the tallest mountains and deepest oceanic trenches. Earth's rotation also affects the sea level, the imaginary surface used as a reference frame from which to measure altitudes. This surface coincides with the mean water surface level in oceans, and is extrapolated over land by taking into account the local gravitational potential and the centrifugal force. The difference of the radii is thus about 21 km. An observer standing at sea level on either pole, therefore, is closer to Earth's center than if standing at sea level on the Equator. As a result, the highest point on Earth, measured from the center and outwards, is the peak of Mount Chimborazo in Ecuador rather than Mount Everest. But since the ocean also bulges, like Earth and its atmosphere, Chimborazo is not as high above sea level as Everest is. Similarly the lowest point on Earth, measured from the center and outwards, is the Litke Deep in the Arctic Ocean rather than Challenger Deep in the Pacific Ocean. But since the ocean also flattens, like Earth and its atmosphere, Litke Deep is not as low below sea level as Challenger Deep is. More precisely, Earth's surface is usually approximated by an ideal oblate ellipsoid, for the purposes of defining precisely the latitude and longitude grid for cartography, as well as the "center of the Earth". In the WGS-84 standard Earth ellipsoid, widely used for map-making and the GPS system, Earth's radius is assumed to be 6,378.137 km to the Equator and 6,356.752 km to either pole, meaning a difference of about 21.4 km between the radii or about 42.8 km between the diameters, and a relative flattening of 1/298.257223563. The ocean surface is much closer to this standard ellipsoid than the solid surface of Earth is. The equilibrium as a balance of energies Gravity tends to contract a celestial body into a sphere, the shape for which all the mass is as close to the center of gravity as possible. Rotation causes a distortion from this spherical shape; a common measure of the distortion is the flattening (sometimes called ellipticity or oblateness), which can depend on a variety of factors including the size, angular velocity, density, and elasticity. A way for one to get a feel for the type of equilibrium involved is to imagine someone seated in a spinning swivel chair and holding a weight in each hand; if the individual pulls the weights inward towards them, work is being done and their rotational kinetic energy increases. The increase in rotation rate is so strong that, at the faster rotation rate, the required centripetal force is larger than it was at the starting rotation rate. Something analogous to this occurs in planet formation. Matter first coalesces into a slowly rotating disk-shaped distribution, and collisions and friction convert kinetic energy to heat, which allows the disk to self-gravitate into a very oblate spheroid.
As long as the proto-planet is still too oblate to be in equilibrium, the release of gravitational potential energy on contraction keeps driving the increase in rotational kinetic energy. As the contraction proceeds, the rotation rate keeps going up, hence the required force for further contraction keeps going up. There is a point where the increase of rotational kinetic energy on further contraction would be larger than the release of gravitational potential energy. The contraction process can only proceed up to that point, so it halts there. As long as there is no equilibrium there can be violent convection, and as long as there is violent convection friction can convert kinetic energy to heat, draining rotational kinetic energy from the system. When the equilibrium state has been reached, large-scale conversion of kinetic energy to heat ceases. In that sense the equilibrium state is the lowest state of energy that can be reached. The Earth's rotation rate is still slowing down, though gradually, by about two thousandths of a second per rotation every 100 years. Estimates of how fast the Earth was rotating in the past vary, because it is not known exactly how the Moon was formed. Estimates of the Earth's rotation 500 million years ago are around 20 modern hours per "day". The Earth's rate of rotation is slowing down mainly because of tidal interactions with the Moon and the Sun. Since the solid parts of the Earth are ductile, the Earth's equatorial bulge has been decreasing in step with the decrease in the rate of rotation. Effect on gravitational acceleration Because of a planet's rotation around its own axis, the effective gravitational acceleration is less at the equator than at the poles. In the 17th century, following the invention of the pendulum clock, French scientists found that clocks sent to French Guiana, on the northern coast of South America, ran slower than their exact counterparts in Paris. Measurements of the acceleration due to gravity at the equator must also take into account the planet's rotation. Any object that is stationary with respect to the surface of the Earth is actually following a circular trajectory, circumnavigating the Earth's axis. Pulling an object into such a circular trajectory requires a force. The acceleration that is required to circumnavigate the Earth's axis along the equator at one revolution per sidereal day is 0.0339 m/s². Providing this acceleration decreases the effective gravitational acceleration. At the Equator, the effective gravitational acceleration is 9.7805 m/s². This means that the true gravitational acceleration at the Equator must be 9.8144 m/s² (9.7805 + 0.0339 = 9.8144). At the poles, the gravitational acceleration is 9.8322 m/s². The difference of 0.0178 m/s² between the gravitational acceleration at the poles and the true gravitational acceleration at the Equator arises because objects on the Equator are about 21 km further from the center of mass of the Earth than objects at the poles, which corresponds to a smaller gravitational acceleration. In summary, there are two contributions to the fact that the effective gravitational acceleration is less strong at the equator than at the poles. About 70% of the difference is contributed by the fact that objects circumnavigate the Earth's axis, and about 30% is due to the non-spherical shape of the Earth.
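As a quick numerical cross-check of the figures above, the following sketch (Python; it uses the WGS-84 equatorial radius and the length of the sidereal day, and the variable names are ours) recovers the 0.0339 m/s² centripetal term and approximately reproduces the quoted 70%/30% split between the rotation effect and the shape effect.

```python
import math

# Reference values: WGS-84 equatorial radius and the sidereal rotation period.
sidereal_day = 86164.1            # seconds
r_equator = 6_378_137.0           # metres
omega = 2 * math.pi / sidereal_day

# Centripetal acceleration needed to follow Earth's rotation at the Equator (~0.0339 m/s^2).
a_centripetal = omega**2 * r_equator

g_eff_equator = 9.7805            # effective (measured) acceleration at the Equator, m/s^2
g_pole = 9.8322                   # acceleration at the poles, m/s^2

g_true_equator = g_eff_equator + a_centripetal   # ~9.8144 m/s^2
total_difference = g_pole - g_eff_equator        # ~0.0517 m/s^2
shape_effect = g_pole - g_true_equator           # ~0.0178 m/s^2, from the larger equatorial radius

print(f"centripetal term: {a_centripetal:.4f} m/s^2")
print(f"rotation share of the difference:   {a_centripetal / total_difference:.0%}")
print(f"oblateness share of the difference: {shape_effect / total_difference:.0%}")
```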
At all latitudes the effective gravitational acceleration is decreased by the requirement of providing a centripetal force; the decrease is strongest on the Equator. Effect on satellite orbits The fact that the Earth's gravitational field slightly deviates from being spherically symmetrical also affects the orbits of satellites through secular orbital precessions. They depend on the orientation of the Earth's symmetry axis in inertial space, and, in the general case, affect all the Keplerian orbital elements with the exception of the semimajor axis. If the reference z axis of the coordinate system adopted is aligned along the Earth's symmetry axis, then only the longitude of the ascending node Ω, the argument of pericenter ω and the mean anomaly M undergo secular precessions. Such perturbations, which were earlier used to map the Earth's gravitational field from space, may play a relevant disturbing role when satellites are used to make tests of general relativity because the much smaller relativistic effects are qualitatively indistinguishable from the oblateness-driven disturbances. Formulation The flattening for the equilibrium configuration of a self-gravitating spheroid, composed of uniform density incompressible fluid, rotating steadily about some fixed axis, for a small amount of flattening, is approximated by: f = (a_e − a_p)/a ≈ (5/4)·ω²a³/(GM) = 15π/(4GρT²), where G is the universal gravitational constant, a is the mean radius, a_e and a_p are respectively the equatorial and polar radius, T is the rotation period, ω = 2π/T is the angular velocity, ρ is the body density and M is the total body mass. A related quantity is the body's second dynamic form factor, J2: J2 = (2f)/3 − (a_e³ω²)/(3GM), with J2 ≈ 1.08263 × 10⁻³ for Earth, where f is the central body's oblateness (flattening), a_e is the central body's equatorial radius (6378.137 km for Earth), ω is the central body's rotation rate (7.292115 × 10⁻⁵ rad/s for Earth), and GM is the product of the universal constant of gravitation and the central body's mass (3.986004 × 10¹⁴ m³/s² for Earth). Typical values Real flattening is smaller due to mass concentration in the center of celestial bodies.
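As an illustration of the uniform-density estimate just given, a short sketch (Python, with standard reference values for Earth; a simplification, not a precise geodetic calculation) gives a flattening near 1/230, noticeably larger than the observed 1/298.257, in line with the remark that real flattening is smaller because mass is concentrated toward the center.

```python
import math

# Standard reference values for Earth (SI units).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24         # mass, kg
R = 6.371e6          # mean radius, m
T = 86164.1          # sidereal rotation period, s
omega = 2 * math.pi / T

# Uniform-density, small-flattening approximation: f = (5/4) * omega^2 R^3 / (G M).
f_uniform = 5 * omega**2 * R**3 / (4 * G * M)

print(f"uniform-density estimate: f = 1/{1 / f_uniform:.0f}")   # about 1/230
print("observed value:           f = 1/298.257")
```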
Physical sciences
Earth science basics: General
Earth science
143115
https://en.wikipedia.org/wiki/Astrophotography
Astrophotography
Astrophotography, also known as astronomical imaging, is the photography or imaging of astronomical objects, celestial events, or areas of the night sky. The first photograph of an astronomical object (the Moon) was taken in 1840, but it was not until the late 19th century that advances in technology allowed for detailed stellar photography. Besides being able to record the details of extended objects such as the Moon, Sun, and planets, modern astrophotography has the ability to image objects outside of the visible spectrum of the human eye such as dim stars, nebulae, and galaxies. This is accomplished through long time exposure as both film and digital cameras can accumulate and sum photons over long periods of time or using specialized optical filters which limit the photons to a certain wavelength. Photography using extended exposure-times revolutionized the field of professional astronomical research, recording hundreds of thousands of new stars, and nebulae invisible to the human eye. Specialized and ever-larger optical telescopes were constructed as essentially big cameras to record images on photographic plates. Astrophotography had an early role in sky surveys and star classification but over time it has used ever more sophisticated image sensors and other equipment and techniques designed for specific fields. Since almost all observational astronomy today uses photography, the term "astrophotography" usually refers to its use in amateur astronomy, seeking aesthetically pleasing images rather than scientific data. Amateurs use a wide range of special equipment and techniques. Methods With a few exceptions, astronomical photography employs long exposures since both film and digital imaging devices can accumulate light photons over long periods of time. The amount of light hitting the film or detector is also increased by increasing the diameter of the primary optics (the objective) being used. Urban areas produce light pollution so equipment and observatories doing astronomical imaging are often located in remote locations to allow long exposures without the film or detectors being swamped with stray light. Since the Earth is constantly rotating, telescopes and equipment are rotated in the opposite direction to follow the apparent motion of the stars overhead (called diurnal motion). This is accomplished by using either equatorial or computer-controlled altazimuth telescope mounts to keep celestial objects centered while Earth rotates. All telescope mount systems suffer from induced tracking errors due to imperfect motor drives, the mechanical sag of the telescope, and atmospheric refraction. Tracking errors are corrected by keeping a selected aiming point, usually a guide star, centered during the entire exposure. Sometimes (as in the case of comets) the object to be imaged is moving, so the telescope has to be kept constantly centered on that object. This guiding is done through a second co-mounted telescope called a "guide scope" or via some type of "off-axis guider", a device with a prism or optical beam splitter that allows the observer to view the same image in the telescope that is taking the picture. Guiding was formerly done manually throughout the exposure with an observer standing at (or riding inside) the telescope making corrections to keep a cross hair on the guide star. Since the advent of computer-controlled systems, this is accomplished by an automated system in professional and even amateur equipment. 
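To get a feel for why tracking and guiding matter, note that the sky appears to turn by roughly 15 arcseconds per second of time at the celestial equator. A minimal sketch (Python; the focal length and pixel size are illustrative values, not recommendations) converts that sidereal rate into pixel drift on an untracked sensor.

```python
import math

SIDEREAL_RATE_ARCSEC_PER_S = 15.04   # apparent motion at the celestial equator

def plate_scale_arcsec_per_pixel(focal_length_mm: float, pixel_size_um: float) -> float:
    """Small-angle plate scale: 206.265 * pixel size (micrometres) / focal length (mm)."""
    return 206.265 * pixel_size_um / focal_length_mm

def drift_pixels(exposure_s: float, focal_length_mm: float, pixel_size_um: float,
                 declination_deg: float = 0.0) -> float:
    """Star drift in pixels during an untracked exposure; worst case is on the celestial equator."""
    rate = SIDEREAL_RATE_ARCSEC_PER_S * math.cos(math.radians(declination_deg))
    return rate * exposure_s / plate_scale_arcsec_per_pixel(focal_length_mm, pixel_size_um)

# Example: with a 1000 mm focal length and 4 micrometre pixels, a star drifts ~18 pixels
# in a single second unless the mount tracks (and, for long exposures, is guided).
print(round(drift_pixels(exposure_s=1.0, focal_length_mm=1000.0, pixel_size_um=4.0), 1))
```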
Astronomical photography was one of the earliest types of scientific photography and almost from its inception it diversified into subdisciplines that each have a specific goal including star cartography, astrometry, stellar classification, photometry, spectroscopy, polarimetry, and the discovery of astronomical objects such as asteroids, meteors, comets, variable stars, novae, and even unknown planets. These often require specialized equipment such as telescopes designed for precise imaging, for wide field of view (such as Schmidt cameras), or for work at specific wavelengths of light. Astronomical CCD cameras may cool the sensor to reduce thermal noise and to allow the detector to record images in other spectra such as in infrared astronomy. Specialized filters are also used to record images in specific wavelengths. History The development of astrophotography as a scientific tool was pioneered in the mid-19th century for the most part by experimenters and amateur astronomers, or so-called "gentleman scientists" (although, as in other scientific fields, these were not always men). Because of the very long exposures needed to capture relatively faint astronomical objects, many technological problems had to be overcome. These included making telescopes rigid enough so they would not sag out of focus during the exposure, building clock drives that could rotate the telescope mount at a constant rate, and developing ways to accurately keep a telescope aimed at a fixed point over a long period of time. Early photographic processes also had limitations. The daguerreotype process was far too slow to record anything but the brightest objects, and the wet plate collodion process limited exposures to the time the plate could stay wet. The first known attempt at astronomical photography was by Louis Jacques Mandé Daguerre, inventor of the daguerreotype process which bears his name, who attempted in 1839 to photograph the Moon. Tracking errors in guiding the telescope during the long exposure meant the photograph came out as an indistinct fuzzy spot. John William Draper, New York University Professor of Chemistry, physician and scientific experimenter managed to make the first successful photograph of the Moon a year later on March 23, 1840, taking a 20-minute-long daguerreotype image using a reflecting telescope. The Sun may have been first photographed in an 1845 daguerreotype by the French physicists Léon Foucault and Hippolyte Fizeau. A failed attempt to obtain a photograph of a Total Eclipse of the Sun was made by the Italian physicist, Gian Alessandro Majocchi during an eclipse of the Sun that took place in his home city of Milan, on July 8, 1842. He later gave an account of his attempt and the Daguerreotype photographs he obtained, in which he wrote: The Sun's solar corona was first successfully imaged during the Solar eclipse of July 28, 1851. Dr. August Ludwig Busch, the Director of the Königsberg Observatory gave instructions for a local daguerreotypist named Johann Julius Friedrich Berkowski to image the eclipse. Busch himself was not present at Königsberg (now Kaliningrad, Russia), but preferred to observe the eclipse from nearby Rixhoft. The telescope used by Berkowski was attached to Königsberg heliometer and had an aperture of only , and a focal length of . Commencing immediately after the beginning of totality, Berkowski exposed a daguerreotype plate for 84 seconds in the focus of the telescope, and on developing an image of the corona was obtained. 
He also exposed a second plate for about 40 to 45 seconds but was spoiled when the Sun broke out from behind the Moon. More detailed photographic studies of the Sun were made by the British astronomer Warren De la Rue starting in 1861. The first photograph of a star other than the Sun was a daguerreotype of the star Vega by astronomer William Cranch Bond and daguerreotype photographer and experimenter John Adams Whipple, on July 16 and 17, 1850 with Harvard College Observatory's 15 inch Great refractor. In 1863 the English chemist William Allen Miller and English amateur astronomer Sir William Huggins used the wet collodion plate process to obtain the first ever photographic spectrogram of a star, Sirius and Capella. In 1872 American physician Henry Draper, the son of John William Draper, recorded the first spectrogram of a star (Vega) to show absorption lines. Astronomical photography did not become a serious research tool until the late 19th century, with the introduction of dry plate photography. It was first used by Sir William Huggins and his wife Margaret Lindsay Huggins, in 1876, in their work to record the spectra of astronomical objects. In 1880, Henry Draper used the new dry plate process with photographically corrected refracting telescope made by Alvan Clark to make a 51-minute exposure of the Orion Nebula, the first photograph of a nebula ever made. A breakthrough in astronomical photography came in 1883, when amateur astronomer Andrew Ainslie Common used the dry plate process to record several images of the same nebula in exposures up to 60 minutes with a reflecting telescope that he constructed in the backyard of his home in Ealing, outside London. These images for the first time showed stars too faint to be seen by the human eye. The first all-sky photographic astrometry project, Astrographic Catalogue and Carte du Ciel, was started in 1887. It was conducted by 20 observatories all using special photographic telescopes with a uniform design called normal astrographs, all with an aperture of around and a focal length of , designed to create images with a uniform scale on the photographic plate of approximately 60 arcsecs/mm while covering a 2° × 2° field of view. The attempt was to accurately map the sky down to the 14th magnitude but it was never completed. The beginning of the 20th century saw the worldwide construction of refracting telescopes and sophisticated large reflecting telescopes specifically designed for photographic imaging. Towards the middle of the century, giant telescopes such as the Hale Telescope and the Samuel Oschin telescope at Palomar Observatory were pushing the limits of film photography. Some progress was made in the field of photographic emulsions and in the techniques of forming gas hypersensitization, cryogenic cooling, and light amplification, but starting in the 1970s after the invention of the CCD, photographic plates were gradually replaced by electronic imaging in professional and amateur observatories. CCD's are far more light sensitive, do not drop off in sensitivity over long exposures the way film does ("reciprocity failure"), have the ability to record in a much wider spectral range, and simplify storage of information. Telescopes now use many configurations of CCD sensors including linear arrays and large mosaics of CCD elements equivalent to 100 million pixels, designed to cover the focal plane of telescopes that formerly used photographic plates. 
The late 20th century saw advances in astronomical imaging take place in the form of new hardware, with the construction of giant multi-mirror and segmented mirror telescopes. It would also see the introduction of space-based telescopes, such as the Hubble Space Telescope. Operating outside the atmosphere's turbulence, scattered ambient light and the vagaries of weather allows the Hubble Space Telescope, with a mirror diameter of , to record stars down to the 30th magnitude, some 100 times dimmer than what the 5-meter Mount Palomar Hale Telescope could record in 1949. Amateur astrophotography Astrophotography is a popular hobby among photographers and amateur astronomers. Techniques ranges from basic film and digital cameras on tripods up to methods and equipment geared toward advanced imaging. Amateur astronomers and amateur telescope makers also use homemade equipment and modified devices. Media Images are recorded on many types of media and imaging devices including single-lens reflex cameras, 35 mm film, 120 film, digital single-lens reflex cameras, simple amateur-level, and professional-level commercially manufactured astronomical CCD and CMOS cameras, video cameras, and even off-the-shelf webcams used for Lucky imaging. The conventional over-the-counter film has long been used for astrophotography. Film exposures range from seconds to over an hour. Commercially available color film stock is subject to reciprocity failure over long exposures, in which sensitivity to light of different wavelengths appears to drop off at different rates as the exposure time increases, leading to a color shift in the image and reduced sensitivity over all as a function of time. This is compensated for, or at least reduced, by cooling the film (see Cold camera photography). This can also be compensated for by using the same technique used in professional astronomy of taking photographs at different wavelengths that are then combined to create a correct color image. Since the film is much slower than digital sensors, tiny errors in tracking can be corrected without much noticeable effect on the final image. Film astrophotography is becoming less popular due to the lower ongoing costs, greater sensitivity, and the convenience of digital photography. Since the late 1990s amateurs have been following the professional observatories in the switch from film to digital CCDs for astronomical imaging. CCDs are more sensitive than film, allowing much shorter exposure times, and have a linear response to light. Images can be captured in many short exposures to create a synthetic long exposure. Digital cameras also have minimal or no moving parts and the ability to be operated remotely via an infrared remote or computer tethering, limiting vibration. Simple digital devices such as webcams can be modified to allow access to the focal plane and even (after the cutting of a few wires), for long exposure photography. Digital video cameras are also used. There are many techniques and pieces of commercially manufactured equipment for attaching digital single-lens reflex (DSLR) cameras and even basic point and shoot cameras to telescopes. Consumer-level digital cameras suffer from image noise over long exposures, so there are many techniques for cooling the camera, including cryogenic cooling. Astronomical equipment companies also now offer a wide range of purpose-built astronomical CCD cameras complete with hardware and processing software. 
Many commercially available DSLR cameras have the ability to take long time exposures combined with sequential (time-lapse) images allowing the photographer to create a motion picture of the night sky. CMOS cameras are increasingly replacing CCD cameras in the amateur sector. Modern CMOS sensors offer higher quantum efficiency, lower thermal and read noise and faster readout speeds than commercially available CCD sensors. Post-processing Both digital camera images and scanned film images are usually adjusted in image processing software to improve the image in some way. Images can be brightened and manipulated in a computer to adjust color and increase the contrast. More sophisticated techniques involve capturing multiple images (sometimes thousands) to composite together in an additive process to sharpen images to overcome atmospheric seeing, negating tracking issues, bringing out faint objects with a poor signal-to-noise ratio, and filtering out light pollution. Digital camera images may also need further processing to reduce the image noise from long exposures, including subtracting a “dark frame” and a processing called image stacking or "Shift-and-add". Commercial, freeware and free software packages are available specifically for astronomical photographic image manipulation. "Lucky imaging" is a secondary technique that involves taking a video of an object rather than standard long exposure photos. Software can then select the highest quality images which can then be stacked. Color and brightness Astronomical pictures, like observational astronomy and photography from space exploration, show astronomical objects and phenomena in different colors and brightness, and often as composite images. This is done to highlight different features or reflect different conditions, and makes the note of these conditions necessary. Images attempting to reproduce the true color and appearance of an astronomical object or phenomenon need to consider many factors, including how the human eye works. Particularly under different atmospheric conditions images need to evaluate several factors to produce analyzable or representative images, like images of space missions from the surface of Mars, Venus or Titan. Hardware Astrophotographic hardware among non-professional astronomers varies widely since the photographers themselves range from general photographers shooting some form of aesthetically pleasing images to very serious amateur astronomers collecting data for scientific research. As a hobby, astrophotography has many challenges that have to be overcome that differ from conventional photography and from what is normally encountered in professional astronomy. Since most people live in urban areas, equipment often needs to be portable so that it can be taken far away from the lights of major cities or towns to avoid urban light pollution. Urban astrophotographers may use special light-pollution or narrow-band filters and advanced computer processing techniques to reduce ambient urban light in the background of their images. They may also stick to imaging bright targets like the Sun, Moon and planets. Another method used by amateurs to avoid light pollution is to set up, or rent time, on a remotely operated telescope at a dark sky location. 
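Returning to the stacking idea described under Post-processing above, a minimal sketch (Python with NumPy, on synthetic data; real workflows also align frames and subtract dark frames) shows why averaging many short exposures brings out a source that is invisible in any single frame.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "sky": a faint source adding 5 counts on top of a 100-count background.
truth = np.full((64, 64), 100.0)
truth[32, 32] += 5.0

def take_exposure():
    """One short exposure: photon (shot) noise plus camera read noise."""
    return rng.poisson(truth) + rng.normal(0.0, 10.0, truth.shape)

single = take_exposure()
stack = np.mean([take_exposure() for _ in range(100)], axis=0)

# Averaging N frames leaves the signal unchanged but shrinks the noise roughly as 1/sqrt(N).
print("noise in one frame:        ", round(np.std(single[:16, :16]), 1))   # about 14 counts
print("noise in a 100-frame stack:", round(np.std(stack[:16, :16]), 1))    # about 1.4 counts
print("source pixel in the stack: ", round(stack[32, 32], 1))              # about 105 vs ~100 background
```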
Other challenges include setup and alignment of portable telescopes for accurate tracking, working within the limitations of “off the shelf” equipment, the endurance of monitoring equipment, and sometimes manually tracking astronomical objects over long exposures in a wide range of weather conditions. Some camera manufacturers modify their products to be used as astrophotography cameras, such as Canon's EOS 60Da, based on the EOS 60D but with a modified infrared filter and a low-noise sensor with heightened hydrogen-alpha sensitivity for improved capture of red hydrogen emission nebulae. There are also cameras specifically designed for amateur astrophotography based on commercially available imaging sensors. They may also allow the sensor to be cooled to reduce thermal noise in long exposures, provide raw image readout, and be controlled from a computer for automated imaging. Raw image readout allows better image processing later by retaining all the original image data, which, along with stacking, can assist in imaging faint deep-sky objects. With very low light capability, a few specific models of webcams are popular for solar, lunar, and planetary imaging. Mostly, these are manually focused cameras containing a CCD sensor instead of the more common CMOS. The lenses of these cameras are removed and then these are attached to telescopes to record images, videos, or both. In newer techniques, videos of very faint objects are taken and the sharpest frames of the video are 'stacked' together to obtain a still image of respectable contrast. The Philips PCVC 740K and SPC 900 are among the few webcams liked by astrophotographers. Any smartphone that allows long exposures can be used for this purpose, but some phones have a specific mode for astrophotography that will stitch together multiple exposures. Equipment setups Fixed or tripod The most basic types of astronomical photographs are made with standard cameras and photographic lenses mounted in a fixed position or on a tripod. Foreground objects or landscapes are sometimes composed in the shot. Objects imaged are constellations, interesting planetary configurations, meteors, and bright comets. Exposure times must be short (under a minute) to avoid having the stars' point images become elongated lines due to the Earth's rotation. Camera lens focal lengths are usually short, as longer lenses will show image trailing in a matter of seconds. A rule of thumb called the 500 rule states that, to keep stars point-like, the maximum exposure time in seconds is 500 divided by (crop factor × focal length in mm), regardless of aperture or ISO setting. For example, with a 35 mm lens on an APS-C sensor (crop factor 1.5), the maximum time is 500 / (1.5 × 35) ≈ 9.5 s. A more accurate calculation takes into account pixel pitch and declination. Allowing the stars to intentionally become elongated lines in exposures lasting several minutes or even hours, called “star trails”, is an artistic technique sometimes used. Tracking mounts Telescope mounts that compensate for the Earth's rotation are used for longer exposures without objects being blurred. They include commercial equatorial mounts and homemade equatorial devices such as barn door trackers and equatorial platforms. Mounts can suffer from inaccuracies due to backlash in the gears, wind, and imperfect balance, and so a technique called auto guiding is used as a closed feedback system to correct for these inaccuracies. Tracking mounts come in two forms: single axis and dual axis. Single axis mounts are often known as star trackers.
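A minimal sketch of the 500 rule quoted under “Fixed or tripod” above (Python; the lens and sensor figures in the examples are illustrative only):

```python
def max_exposure_500_rule(focal_length_mm: float, crop_factor: float = 1.0) -> float:
    """500 rule: longest untracked exposure (seconds) before stars visibly trail."""
    return 500.0 / (crop_factor * focal_length_mm)

# A 14 mm ultra-wide on a full-frame body allows roughly half a minute,
# while a 35 mm lens on an APS-C (1.5x crop) body allows only about 9.5 s, as in the text.
print(round(max_exposure_500_rule(14.0), 1))        # ~35.7 s
print(round(max_exposure_500_rule(35.0, 1.5), 1))   # ~9.5 s
```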
Star trackers have a single motor which drives the right ascension axis. This allows the mount to compensate for the Earth's rotation. Star trackers rely on the user ensuring the mount is polar aligned with high accuracy, as it is unable correct in the secondary declination axis, limiting exposure times. Dual axis mounts use two motors to drive both the right ascension and the declination axis together. This mount will compensate for the Earth's rotation by driving the right ascension axis, similar to a star tracker. However using an auto-guiding system, the secondary declination axis can also be driven, compensating for errors in polar alignment, allowing for significantly longer exposure times. "Piggyback" photography Piggyback astronomical photography is a method where a camera/lens is mounted on an equatorially mounted astronomical telescope. The telescope is used as a guide scope to keep the field of view centered during the exposure. This allows the camera to use a longer exposure and/or a longer focal length lens or even be attached to some form of photographic telescope co-axial with the main telescope. Telescope focal plane photography In this type of photography, the telescope itself is used as the "lens" collecting light for the film or CCD of the camera. Although this allows for the magnification and light-gathering power of the telescope to be used, it is one of the most difficult astrophotography methods. This is because of the difficulties in centering and focusing sometimes very dim objects in the narrow field of view, contending with magnified vibration and tracking errors, and the added expense of equipment (such as sufficiently sturdy telescope mounts, camera mounts, camera couplers, off-axis guiders, guide scopes, illuminated cross-hairs, or auto-guiders mounted on primary telescope or the guide-scope.) There are several different ways cameras (with removable lenses) are attached to amateur astronomical telescopes including: Prime focus – In this method the image produced by the telescope falls directly on the film or CCD with no intervening optics or telescope eyepiece. Positive projection – A method in which the telescope eyepiece (eyepiece projection) or a positive lens (placed after the focal plane of the telescope objective) is used to project a much more magnified image directly onto the film or CCD. Since the image is magnified with a narrow field of view this method is generally used for lunar and planetary photography. Negative projection – This method, like positive projection, produces a magnified image. A negative lens, usually a Barlow or a photographic teleconverter, is placed in the light cone before the focal plane of the telescope objective. Compression – Compression uses a positive lens (also called a focal reducer), placed in the converging cone of light before the focal plane of the telescope objective, to reduce overall image magnification. It is used on very long focal length telescopes, such as Maksutovs and Schmidt–Cassegrains, to obtain a wider field of view, or to reduce the focal ratio of the setup thereby increasing the speed of the system. When the camera lens is not removed (or cannot be removed) a common method used is afocal photography, also called afocal projection. In this method, both the camera lens and the telescope eyepiece are attached. When both are focused at infinity the light path between them is parallel (afocal), allowing the camera to basically photograph anything the observer can see. 
This method works well for capturing images of the moon and brighter planets, as well as narrow field images of stars and nebulae. Afocal photography was common with early 20th-century consumer-level cameras since many models had non-removable lenses. It has grown in popularity with the introduction of point and shoot digital cameras since most models also have non-removable lenses. Filters Filters can be categorised into two classes: broadband and narrowband. Broadband filters allow a wide range of wavelengths to pass through, removing small amounts of light pollution. Narrowband filters only allow light from very specific wavelengths to pass through, blocking out the vast majority of the spectrum. Astronomical filters usually come as sets and are manufactured to specific standards, in order to allow different observatories to make observations to a common standard. A common filter standard in the astronomy community is the Johnson–Morgan UBV system, designed to match a CCD's color response to that of photographic film. However, there are over 200 standards available. Remote Telescope Fast Internet access in the last part of the 20th century, and advances in computer-controlled telescope mounts and CCD cameras, allow the use of 'remote telescopes' for amateur astronomers not aligned with major telescope facilities to partake in research and deep-sky imaging. This enables the imager to control a telescope far away in a dark location. The observers can image through the telescopes using CCD cameras. Imaging can be done regardless of the location of the user or the telescopes they wish to use. The digital data collected by the telescope is then transmitted and displayed to the user by means of the Internet. An example of a digital remote telescope operation for public use via the Internet is The Bareket Observatory. Gallery
Physical sciences
Astronomy basics
Astronomy
143129
https://en.wikipedia.org/wiki/Ethylene%20glycol
Ethylene glycol
Ethylene glycol (IUPAC name: ethane-1,2-diol) is an organic compound (a vicinal diol) with the formula . It is mainly used for two purposes: as a raw material in the manufacture of polyester fibers and for antifreeze formulations. It is an odorless, colorless, flammable, viscous liquid. It has a sweet taste, but is toxic in high concentrations. This molecule has been observed in outer space. Production Industrial routes Ethylene glycol is produced from ethylene (ethene), via the intermediate ethylene oxide. Ethylene oxide reacts with water to produce ethylene glycol according to the chemical equation This reaction can be catalyzed by either acids or bases, or can occur at neutral pH under elevated temperatures. The highest yields of ethylene glycol occur at acidic or neutral pH with a large excess of water. Under these conditions, ethylene glycol yields of 90% can be achieved. The major byproducts are the oligomers diethylene glycol, triethylene glycol, and tetraethylene glycol. The separation of these oligomers and water is energy-intensive. World production of ethylene glycol was ~20 Mt in 2010. A higher selectivity is achieved by the use of Shell's OMEGA process. In the OMEGA process, the ethylene oxide is first converted with carbon dioxide () to ethylene carbonate. This ring is then hydrolyzed with a base catalyst in a second step to produce mono-ethylene glycol in 98% selectivity. The carbon dioxide is released in this step again and can be fed back into the process circuit. The carbon dioxide comes in part from ethylene oxide production, where a part of the ethylene is completely oxidized. Ethylene glycol is produced from carbon monoxide in countries with large coal reserves and less stringent environmental regulations. The oxidative carbonylation of methanol to dimethyl oxalate provides a promising approach to the production of -based ethylene glycol. Dimethyl oxalate can be converted into ethylene glycol in high yields (94.7%) by hydrogenation with a copper catalyst: Because the methanol is recycled, only carbon monoxide, hydrogen, and oxygen are consumed. One plant with a production capacity of of ethylene glycol per year is in Inner Mongolia, and a second plant in the Chinese province of Henan with a capacity of was scheduled for 2012. , four plants in China with a capacity of each were operating, with at least 17 more to follow. Biological routes Ethylene glycol can be produced by recycling its polymeric derivatives such a polyethylene terephthalate. Historical routes According to most sources, French chemist Charles-Adolphe Wurtz (1817–1884) first prepared ethylene glycol in 1856. He first treated "ethylene iodide" (1,2-Diiodoethane) with silver acetate and then hydrolyzed the resultant "ethylene diacetate" with potassium hydroxide. Wurtz named his new compound "glycol" because it shared qualities with both ethyl alcohol (with one hydroxyl group) and glycerin (with three hydroxyl groups). In 1859, Wurtz prepared ethylene glycol via the hydration of ethylene oxide. There appears to have been no commercial manufacture or application of ethylene glycol prior to World War I, when it was synthesized from ethylene dichloride in Germany and used as a substitute for glycerol in the explosives industry. In the United States, semicommercial production of ethylene glycol via ethylene chlorohydrin started in 1917. The first large-scale commercial glycol plant was erected in 1925 at South Charleston, West Virginia, by Carbide and Carbon Chemicals Co. (now Union Carbide Corp.). 
By 1929, ethylene glycol was being used by almost all dynamite manufacturers. In 1937, Carbide started up the first plant based on Lefort's process for vapor-phase oxidation of ethylene to ethylene oxide. Carbide maintained a monopoly on the direct oxidation process until 1953 when the Scientific Design process was commercialized and offered for licensing. Uses Coolant and heat-transfer agent The major use of ethylene glycol is as an antifreeze agent in the coolant in for example, automobiles and air-conditioning systems that either place the chiller or air handlers outside or must cool below the freezing temperature of water. In geothermal heating/cooling systems, ethylene glycol is the fluid that transports heat through the use of a geothermal heat pump. The ethylene glycol either gains energy from the source (lake, ocean, water well) or dissipates heat to the sink, depending on whether the system is being used for heating or cooling. Pure ethylene glycol has a specific heat capacity about one half that of water. So, while providing freeze protection and an increased boiling point, ethylene glycol lowers the specific heat capacity of water mixtures relative to pure water. A 1:1 mix by mass has a specific heat capacity of about 3140 J/(kg·°C) (0.75 BTU/(lb·°F)), three quarters that of pure water, thus requiring increased flow rates in same-system comparisons with water. The mixture of ethylene glycol with water provides additional benefits to coolant and antifreeze solutions, such as preventing corrosion and acid degradation, as well as inhibiting the growth of most microbes and fungi. Mixtures of ethylene glycol and water are sometimes informally referred to in industry as glycol concentrates, compounds, mixtures, or solutions. Table of thermal and physical properties of saturated liquid ethylene glycol: Anti-freeze Pure ethylene glycol freezes at about −12 °C (10.4 °F) but, when mixed with water, the mixture freezes at a lower temperature. For example, a mixture of 60% ethylene glycol and 40% water freezes at −45 °C (−49 °F). Diethylene glycol behaves similarly. The freezing point depression of some mixtures can be explained as a colligative property of solutions but, in highly concentrated mixtures such as the example, deviations from ideal solution behavior are expected due to the influence of intermolecular forces. It's important to note that though pure and distilled water will have a greater specific heat capacity than any mixture of antifreeze and water, commercial antifreezes also typically contain an anti-corrosive additive to prevent pure water from corroding coolant passages in the engine block, cylinder head(s), water pump and radiator. There is a difference in the mixing ratio, depending on whether it is ethylene glycol or propylene glycol. For ethylene glycol, the mixing ratios are typically 30/70 and 35/65, whereas the propylene glycol mixing ratios are typically 35/65 and 40/60. It is important that the mixture be frost-proof at the lowest operating temperature. Because of the depressed freezing temperatures, ethylene glycol is used as a de-icing fluid for windshields and aircraft, as an antifreeze in automobile engines, and as a component of vitrification (anticrystallization) mixtures for low-temperature preservation of biological tissues and organs. The use of ethylene glycol not only depresses the freezing point of aqueous mixtures, but also elevates their boiling point. 
This results in the operating temperature range for heat-transfer fluids being broadened on both ends of the temperature scale. The increase in boiling temperature is due to pure ethylene glycol having a much higher boiling point and lower vapor pressure than pure water. Precursor to polymers In the plastic industry, ethylene glycol is an important precursor to polyester fibers and resins. Polyethylene terephthalate, used to make plastic bottles for soft drinks, is prepared from ethylene glycol. Other uses Dehydrating agent Ethylene glycol is used in the natural gas industry to remove water vapor from natural gas before further processing, in much the same manner as triethylene glycol (TEG). Hydrate inhibition Because of its high boiling point and affinity for water, ethylene glycol is a useful desiccant. Ethylene glycol is widely used to inhibit the formation of natural gas clathrates (hydrates) in long multiphase pipelines that convey natural gas from remote gas fields to a gas processing facility. Ethylene glycol can be recovered from the natural gas and reused as an inhibitor after purification treatment that removes water and inorganic salts. Natural gas is dehydrated by ethylene glycol. In this application, ethylene glycol flows down from the top of a tower and meets a rising mixture of water vapor and hydrocarbon gases. Dry gas exits from the top of the tower. The glycol and water are separated, and the glycol recycled. Instead of removing water, ethylene glycol can also be used to depress the temperature at which hydrates are formed. The purity of glycol used for hydrate suppression (monoethylene glycol) is typically around 80%, whereas the purity of glycol used for dehydration (triethylene glycol) is typically 95 to more than 99%. Moreover, the injection rate for hydrate suppression is much lower than the circulation rate in a glycol dehydration tower. Precursor to other chemicals Minor uses of ethylene glycol include the manufacture of capacitors, as a chemical intermediate in the manufacture of 1,4-dioxane, as an additive to prevent corrosion in liquid cooling systems for personal computers, and inside the lens devices of cathode-ray tube type of rear projection televisions. Ethylene glycol is also used in the manufacture of some vaccines, but it is not itself present in these injections. It is used as a minor (1–2%) ingredient in shoe polish and also in some inks and dyes. Ethylene glycol has seen some use as a rot and fungal treatment for wood, both as a preventative and a treatment after the fact. It has been used in a few cases to treat partially rotted wooden objects to be displayed in museums. It is one of only a few treatments that are successful in dealing with rot in wooden boats, and is relatively cheap. Ethylene glycol may also be one of the minor ingredients in screen cleaning solutions, along with the main ingredient isopropyl alcohol. Ethylene glycol is commonly used as a preservative for biological specimens, especially in secondary schools during dissection as a safer alternative to formaldehyde. It is also used as part of the water-based hydraulic fluid used to control subsea oil and gas production equipment. Organic building block Although dwarfed by its use as a precursor to polyesters, ethylene glycol is useful in more specialized areas of organic chemistry. It serves as a protecting group in organic synthesis for manipulation of ketones and aldehydes. 
In one example, isophorone was protected using ethylene glycol. The glycol-derived dioxolane of ethyl acetoacetate is the commercial fragrance fructone. Miscellaneous chemical reactions Silicon dioxide dissolves slowly in hot ethylene glycol in the presence of an alkali metal base to produce silicates. Toxicity Ethylene glycol has relatively high mammalian toxicity when ingested, roughly on par with methanol, with an oral LDLo = 786 mg/kg for humans. The major danger is due to its sweet taste, which can attract children and animals. Upon ingestion, ethylene glycol is oxidized to glycolic acid, which is, in turn, oxidized to oxalic acid, which is toxic. It and its toxic byproducts first affect the central nervous system, then the heart, and finally the kidneys. Ingestion of sufficient amounts is fatal if untreated. Several deaths are recorded annually in the U.S. alone. Antifreeze products for automotive use containing propylene glycol in place of ethylene glycol are available. They are generally considered safer to use, as propylene glycol is not as palatable and is converted in the body to lactic acid, a normal product of metabolism and exercise. Australia, the UK, and seventeen US states (as of 2012) require the addition of a bitter flavoring (denatonium benzoate) to antifreeze. In December 2012, US antifreeze manufacturers agreed voluntarily to add a bitter flavoring to all antifreeze that is sold in the consumer market of the US. In 2022, several hundred children died of acute kidney failure in Indonesia and The Gambia after being given paracetamol syrups contaminated with ethylene glycol and diethylene glycol; the syrups linked to the deaths in The Gambia were made by New Delhi-based Maiden Pharmaceuticals. In December 2022, Uzbekistan's health ministry said that children had died as a result of ethylene glycol in cough syrup made by Marion Biotech, which is based at Noida, near New Delhi. Environmental effects Ethylene glycol is a high-production-volume chemical. It breaks down in air in about 10 days and in water or soil in a few weeks. It enters the environment through the dispersal of ethylene glycol-containing products, especially at airports, where it is used in de-icing agents for runways and airplanes. While prolonged low doses of ethylene glycol show no toxicity, at near lethal doses (≥ 1000 mg/kg per day) ethylene glycol acts as a teratogen. "Based on a rather extensive database, it induces skeletal variations and malformations in rats and mice by all routes of exposure."
Physical sciences
Alcohols
Chemistry
143135
https://en.wikipedia.org/wiki/Parity%20%28mathematics%29
Parity (mathematics)
In mathematics, parity is the property of an integer of whether it is even or odd. An integer is even if it is divisible by 2, and odd if it is not. For example, −4, 0, and 82 are even numbers, while −3, 5, 7, and 21 are odd numbers. The above definition of parity applies only to integers, hence it cannot be applied to numbers like 1/2 or 4.201. See the section "Higher mathematics" below for some extensions of the notion of parity to a larger class of "numbers" or in other more general settings. Even and odd numbers have opposite parities, e.g., 22 (even number) and 13 (odd number) have opposite parities. In particular, the parity of zero is even. Any two consecutive integers have opposite parity. A number (i.e., integer) expressed in the decimal numeral system is even or odd according to whether its last digit is even or odd. That is, if the last digit is 1, 3, 5, 7, or 9, then it is odd; otherwise it is even—as the last digit of any even number is 0, 2, 4, 6, or 8. The same idea will work using any even base. In particular, a number expressed in the binary numeral system is odd if its last digit is 1; and it is even if its last digit is 0. In an odd base, the number is even according to the sum of its digits—it is even if and only if the sum of its digits is even. Definition An even number is an integer of the form n = 2k, where k is an integer; an odd number is an integer of the form n = 2k + 1. An equivalent definition is that an even number is divisible by 2 (that is, 2 | n) and an odd number is not (2 ∤ n). The sets of even and odd numbers can be defined as follows: even = {2k : k ∈ ℤ} and odd = {2k + 1 : k ∈ ℤ}. The set of even numbers is a prime ideal of ℤ, and the quotient ring ℤ/2ℤ is the field with two elements. Parity can then be defined as the unique ring homomorphism from ℤ to ℤ/2ℤ where odd numbers are mapped to 1 and even numbers to 0. The consequences of this homomorphism are covered below. Properties The following laws can be verified using the properties of divisibility. They are a special case of rules in modular arithmetic, and are commonly used to check if an equality is likely to be correct by testing the parity of each side. As with ordinary arithmetic, multiplication and addition are commutative and associative in modulo 2 arithmetic, and multiplication is distributive over addition. However, subtraction in modulo 2 is identical to addition, so subtraction also possesses these properties, which is not true for normal integer arithmetic. Addition and subtraction even ± even = even; even ± odd = odd; odd ± odd = even; Multiplication even × even = even; even × odd = even; odd × odd = odd; By construction in the previous section, the structure ({even, odd}, +, ×) is in fact the field with two elements. Division The division of two whole numbers does not necessarily result in a whole number. For example, 1 divided by 4 equals 1/4, which is neither even nor odd, since the concepts of even and odd apply only to integers. But when the quotient is an integer, it will be even if and only if the dividend has more factors of two than the divisor. History The ancient Greeks considered 1, the monad, to be neither fully odd nor fully even.
Some of this sentiment survived into the 19th century: Friedrich Wilhelm August Fröbel's 1826 The Education of Man instructs the teacher to drill students with the claim that 1 is neither even nor odd, to which Fröbel attaches the philosophical afterthought, Higher mathematics Higher dimensions and more general classes of numbers Integer coordinates of points in Euclidean spaces of two or more dimensions also have a parity, usually defined as the parity of the sum of the coordinates. For instance, the face-centered cubic lattice and its higher-dimensional generalizations (the Dn lattices) consist of all of the integer points whose coordinates have an even sum. This feature also manifests itself in chess, where the parity of a square is indicated by its color: bishops are constrained to moving between squares of the same parity, whereas knights alternate parity between moves. This form of parity was famously used to solve the mutilated chessboard problem: if two opposite corner squares are removed from a chessboard, then the remaining board cannot be covered by dominoes, because each domino covers one square of each parity and there are two more squares of one parity than of the other. The parity of an ordinal number may be defined to be even if the number is a limit ordinal, or a limit ordinal plus a finite even number, and odd otherwise. Let R be a commutative ring and let I be an ideal of R whose index is 2. Elements of the coset may be called even, while elements of the coset may be called odd. As an example, let be the localization of Z at the prime ideal (2). Then an element of R is even or odd if and only if its numerator is so in Z. Number theory The even numbers form an ideal in the ring of integers, but the odd numbers do not—this is clear from the fact that the identity element for addition, zero, is an element of the even numbers only. An integer is even if it is congruent to 0 modulo this ideal, in other words if it is congruent to 0 modulo 2, and odd if it is congruent to 1 modulo 2. All prime numbers are odd, with one exception: the prime number 2. All known perfect numbers are even; it is unknown whether any odd perfect numbers exist. Goldbach's conjecture states that every even integer greater than 2 can be represented as a sum of two prime numbers. Modern computer calculations have shown this conjecture to be true for integers up to at least 4 × 1018, but still no general proof has been found. Group theory The parity of a permutation (as defined in abstract algebra) is the parity of the number of transpositions into which the permutation can be decomposed. For example (ABC) to (BCA) is even because it can be done by swapping A and B then C and A (two transpositions). It can be shown that no permutation can be decomposed both in an even and in an odd number of transpositions. Hence the above is a suitable definition. In Rubik's Cube, Megaminx, and other twisting puzzles, the moves of the puzzle allow only even permutations of the puzzle pieces, so parity is important in understanding the configuration space of these puzzles. The Feit–Thompson theorem states that a finite group is always solvable if its order is an odd number. This is an example of odd numbers playing a role in an advanced mathematical theorem where the method of application of the simple hypothesis of "odd order" is far from obvious. Analysis The parity of a function describes how its values change when its arguments are exchanged with their negations. 
An even function, such as an even power of a variable, gives the same result for any argument as for its negation. An odd function, such as an odd power of a variable, gives for any argument the negation of its result when given the negation of that argument. It is possible for a function to be neither odd nor even, and for the case f(x) = 0, to be both odd and even. The Taylor series of an even function contains only terms whose exponent is an even number, and the Taylor series of an odd function contains only terms whose exponent is an odd number. Combinatorial game theory In combinatorial game theory, an evil number is a number that has an even number of 1's in its binary representation, and an odious number is a number that has an odd number of 1's in its binary representation; these numbers play an important role in the strategy for the game Kayles. The parity function maps a number to the number of 1's in its binary representation, modulo 2, so its value is zero for evil numbers and one for odious numbers. The Thue–Morse sequence, an infinite sequence of 0's and 1's, has a 0 in position i when i is evil, and a 1 in that position when i is odious. Additional applications In information theory, a parity bit appended to a binary number provides the simplest form of error detecting code. If a single bit in the resulting value is changed, then it will no longer have the correct parity: changing a bit in the original number gives it a different parity than the recorded one, and changing the parity bit while not changing the number it was derived from again produces an incorrect result. In this way, all single-bit transmission errors may be reliably detected. Some more sophisticated error detecting codes are also based on the use of multiple parity bits for subsets of the bits of the original encoded value. In wind instruments with a cylindrical bore and in effect closed at one end, such as the clarinet at the mouthpiece, the harmonics produced are odd multiples of the fundamental frequency. (With cylindrical pipes open at both ends, used for example in some organ stops such as the open diapason, the harmonics are even multiples of the same frequency for the given bore length, but this has the effect of the fundamental frequency being doubled and all multiples of this fundamental frequency being produced.) See harmonic series (music). In some countries, house numberings are chosen so that the houses on one side of a street have even numbers and the houses on the other side have odd numbers. Similarly, among United States numbered highways, even numbers primarily indicate east–west highways while odd numbers primarily indicate north–south highways. Among airline flight numbers, even numbers typically identify eastbound or northbound flights, and odd numbers typically identify westbound or southbound flights.
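The parity-bit scheme described under "Additional applications" is easy to state concretely. A minimal sketch (Python; the seven-bit word is an arbitrary example) also exposes the parity function used to define evil and odious numbers:

```python
def parity(n: int) -> int:
    """Number of 1 bits modulo 2: 0 for evil numbers, 1 for odious numbers."""
    return bin(n).count("1") % 2

def add_parity_bit(data: int) -> tuple[int, int]:
    """Append an even-parity bit so the data plus the bit contain an even number of 1s."""
    return data, parity(data)

def check(data: int, parity_bit: int) -> bool:
    """Any single flipped bit (in the data or in the parity bit itself) makes the check fail."""
    return parity(data) == parity_bit

word, p = add_parity_bit(0b1011001)
assert check(word, p)                     # transmitted intact
assert not check(word ^ 0b0000100, p)     # a one-bit error in the data is detected
assert not check(word, p ^ 1)             # a corrupted parity bit is detected too
```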
Mathematics
Basics
null
24079760
https://en.wikipedia.org/wiki/Virtual%20state
Virtual state
In quantum physics, a virtual state is a very short-lived, unobservable quantum state. In many quantum processes a virtual state is an intermediate state, sometimes described as "imaginary" in a multi-step process that mediates otherwise forbidden transitions. Since virtual states are not eigenfunctions of any operator, normal parameters such as occupation, energy and lifetime need to be qualified. No measurement of a system will show one to be occupied, but they still have lifetimes derived from uncertainty relations. While each virtual state has an associated energy, no direct measurement of its energy is possible but various approaches have been used to make some measurements (for example see and related work on virtual state spectroscopy) or extract other parameters using measurement techniques that depend upon the virtual state's lifetime. The concept is quite general and can be used to predict and describe experimental results in many areas including Raman spectroscopy, non-linear optics generally, various types of photochemistry, and nuclear processes.
Physical sciences
Quantum mechanics
Physics
24082423
https://en.wikipedia.org/wiki/Mass%E2%80%93luminosity%20relation
Mass–luminosity relation
In astrophysics, the mass–luminosity relation is an equation giving the relationship between a star's mass and its luminosity, first noted by Jakob Karl Ernst Halm. The relationship is represented by the equation: L/L⊙ = (M/M⊙)^a, where L⊙ and M⊙ are the luminosity and mass of the Sun and a is an exponent that depends on the mass range. The value a = 3.5 is commonly used for main-sequence stars. This equation and the usual value of a = 3.5 only apply to main-sequence stars of intermediate mass and do not apply to red giants or white dwarfs. As a star approaches the Eddington luminosity then a → 1. In summary, the relations for stars with different ranges of mass are, to a good approximation, as follows:
L/L⊙ ≈ 0.23 (M/M⊙)^2.3 for M < 0.43M⊙
L/L⊙ = (M/M⊙)^4 for 0.43M⊙ < M < 2M⊙
L/L⊙ ≈ 1.4 (M/M⊙)^3.5 for 2M⊙ < M < 55M⊙
L/L⊙ ≈ 32000 (M/M⊙) for M > 55M⊙
For stars with masses less than 0.43M⊙, convection is the sole energy transport process, so the relation changes significantly. For stars with masses M > 55M⊙ the relationship flattens out and becomes L ∝ M, but in fact those stars do not last long because they are unstable and quickly lose matter through intense stellar winds. It can be shown this change is due to an increase in radiation pressure in massive stars. These equations are determined empirically by determining the mass of stars in binary systems to which the distance is known via standard parallax measurements or other techniques. After enough stars are plotted, they will form a line on a logarithmic plot, and the slope of the line gives the proper value of a. Another form, valid for K-type main-sequence stars, that avoids the discontinuity in the exponent has been given by Cuntz & Wang; it reads: with (M in M⊙). This relation is based on data by Mann and collaborators, who used moderate-resolution spectra of nearby late-K and M dwarfs with known parallaxes and interferometrically determined radii to refine their effective temperatures and luminosities. Those stars have also been used as a calibration sample for Kepler candidate objects. Besides avoiding the discontinuity in the exponent at M = 0.43M⊙, the relation also recovers a = 4.0 for M ≃ 0.85M⊙. The mass/luminosity relation is important because it can be used to find the distance to binary systems which are too far for normal parallax measurements, using a technique called "dynamical parallax". In this technique, the masses of the two stars in a binary system are estimated, usually in terms of the mass of the Sun. Then, using Kepler's laws of celestial mechanics, the separation between the stars is calculated. Once this separation is found, the distance to the system can be found via the arc it subtends in the sky, giving a preliminary distance measurement. From this measurement and the apparent magnitudes of both stars, the luminosities can be found, and by using the mass–luminosity relationship, the masses of each star. These masses are used to re-calculate the separation, and the process is repeated. The process is iterated many times, and accuracies as high as 5% can be achieved. The mass/luminosity relationship can also be used to determine the lifetime of stars by noting that lifetime is approximately proportional to M/L, although one finds that more massive stars have shorter lifetimes than the M/L relationship predicts. A more sophisticated calculation factors in a star's loss of mass over time. Derivation Deriving a theoretically exact mass/luminosity relation requires finding the energy generation equation and building a thermodynamic model of the inside of a star. However, the basic relation L ∝ M³ can be derived using some basic physics and simplifying assumptions. The first such derivation was performed by astrophysicist Arthur Eddington in 1924.
The derivation showed that stars can be approximately modelled as ideal gases, which was a new, somewhat radical idea at the time. What follows is a somewhat more modern approach based on the same principles. An important factor controlling the luminosity of a star (energy emitted per unit time) is the rate of energy dissipation through its bulk. Where there is no heat convection, this dissipation happens mainly by photons diffusing. By integrating Fick's first law over the surface of some radius r in the radiation zone (where there is negligible convection), we get the total outgoing energy flux, which is equal to the luminosity by conservation of energy: L = −4πr²D ∂u/∂r, where D is the photon diffusion coefficient and u is the energy density. Note that this assumes that the star is not fully convective, and that all heat-creating processes (nucleosynthesis) happen in the core, below the radiation zone. These two assumptions are not correct in red giants, which do not obey the usual mass–luminosity relation. Stars of low mass are also fully convective, hence do not obey the law. Approximating the star by a black body, the energy density is related to the temperature by the Stefan–Boltzmann law: u = (4σ/c)T⁴ = π²(kB T)⁴ / (15 ħ³c³), where σ is the Stefan–Boltzmann constant, c is the speed of light, kB is the Boltzmann constant and ħ is the reduced Planck constant. As in the theory of the diffusion coefficient in gases, the diffusion coefficient D approximately satisfies D = (1/3)λc, where λ is the photon mean free path. Since matter is fully ionized in the star core (as well as where the temperature is of the same order of magnitude as inside the core), photons collide mainly with electrons, and so λ satisfies λ = 1/(ne σT). Here ne is the electron density and σT = (8π/3)(αħ/(me c))² is the cross section for electron–photon scattering, equal to the Thomson cross-section; α is the fine-structure constant and me the electron mass. The average stellar electron density is related to the star mass M and radius R by ne ≈ 3M/(4π mn R³), assuming roughly one electron per nucleon. Finally, by the virial theorem, the total kinetic energy is equal to half the gravitational potential energy EG, so if the average nuclear mass is mn, then the average kinetic energy per nucleus satisfies (3/2)kB T ≈ C·G M mn/R, where the temperature T is averaged over the star and C is a factor of order one related to the stellar structure and can be estimated from the star's approximate polytropic index. Note that this does not hold for large enough stars, where the radiation pressure is larger than the gas pressure in the radiation zone, hence the relation between temperature, mass and radius is different, as elaborated below. Wrapping up everything, we also take r to be equal to R up to a factor, and ne at r is replaced by its stellar average up to a factor. The combined factor is approximately 1/15 for the Sun, and we get L ~ (C G mn)⁴ mn c M³ / (σT (ħc)³), that is, L ∝ M³. The added factor is actually dependent on M, therefore the M³ power law is only approximate. Distinguishing between small and large stellar masses One may distinguish between the cases of small and large stellar masses by deriving the above results using radiation pressure. In this case, it is easier to use the optical opacity κ and to consider the internal temperature TI directly; more precisely, one can consider the average temperature in the radiation zone. The consideration begins by noting the relation between the radiation pressure Prad and the luminosity. The gradient of radiation pressure is equal to the momentum transfer absorbed from the radiation, giving dPrad/dr = −(1/λ)·L/(4πr²c), where c is the velocity of light and λ = 1/(κρ) is the photon mean free path.
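As a sanity check on the derivation, the combined expression can be evaluated numerically. The structure factor C and the use of the proton mass for the average nuclear mass are assumptions made for this sketch; the point is only that the combination reproduces the order of magnitude of the solar luminosity and scales as the cube of the mass.

```python
# Rough numerical check of the L ~ (C G m_n)^4 m_n c M^3 / (sigma_T (hbar c)^3) scaling.
G        = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c        = 2.998e8        # speed of light, m/s
hbar     = 1.055e-34      # reduced Planck constant, J s
sigma_T  = 6.652e-29      # Thomson cross-section, m^2
m_n      = 1.673e-27      # average nuclear mass taken as the proton mass, kg
M_sun    = 1.989e30       # solar mass, kg
C_struct = 0.2            # assumed order-one virial/structure factor

def lum_estimate(M):
    """Order-of-magnitude luminosity (watts) for a star of mass M (kg)."""
    return (C_struct * G * m_n) ** 4 * m_n * c * M ** 3 / (sigma_T * (hbar * c) ** 3)

print(f"{lum_estimate(M_sun):.2e} W")     # a few times 1e26 W, near L_sun = 3.8e26 W
print(round(lum_estimate(2 * M_sun) / lum_estimate(M_sun), 6))   # 8, the M^3 scaling
```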
The radiation pressure is related to the temperature by Prad = (1/3)u = (4σ/(3c))T⁴, therefore dPrad/dr = (16σ/(3c))T³ dT/dr, from which it follows directly that L ≈ (16πσ/3)·R TI⁴/(κρ) ≈ σ R⁴ TI⁴/(κ M), up to factors of order one. In the radiation zone gravity is balanced by the pressure on the gas coming from both itself (approximated by ideal gas pressure) and from the radiation. For a small enough stellar mass the latter is negligible and one arrives at kB TI ≈ G M mn/R, as before. More precisely, since the integration was done from 0 to R, the left side is proportional to TI⁴ − TE⁴, but the surface temperature TE can be neglected with respect to the internal temperature TI. From this it follows directly that L ≈ (σ/κ)·(G mn/kB)⁴·M³, i.e. L ∝ M³. For a large enough stellar mass, the radiation pressure is larger than the gas pressure in the radiation zone. Plugging in the radiation pressure, instead of the ideal gas pressure used above, yields (4σ/(3c))TI⁴ ≈ G M ρ/R, hence L ∝ M, the same linear scaling as the Eddington luminosity. Core and surface temperatures To the first approximation, stars are black body radiators with a surface area of 4πR². Thus, from the Stefan–Boltzmann law, the luminosity is related to the surface temperature TS, and through it to the color of the star, by L = 4πR²σB TS⁴, where σB is the Stefan–Boltzmann constant. The luminosity is equal to the total energy produced by the star per unit time. Since this energy is produced by nucleosynthesis, usually in the star core (this is not true for red giants), the core temperature TC is related to the luminosity by the nucleosynthesis rate per unit volume, approximately ε nA nB ⟨σv⟩, so that L ≈ ε nA nB ⟨σv⟩ V for a burning core of volume V; the thermally averaged rate ⟨σv⟩ is dominated by collision energies near the Gamow peak E₀ = (√EG · kB TC/2)^(2/3) and scales roughly as exp(−3E₀/(kB TC)). Here, ε is the total energy emitted in the chain reaction or reaction cycle, and E₀ is the Gamow peak energy, dependent on EG, the Gamow factor. Additionally, S(E)/E is the reaction cross section, n is number density, mr is the reduced mass for the particle collision, and A, B are the two species participating in the limiting reaction (e.g. both stand for a proton in the proton–proton chain reaction, or A a proton and B a nitrogen-14 nucleus for the CNO cycle). Since the radius R is itself a function of the temperature and the mass, one may solve this equation to get the core temperature.
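The closing relations lend themselves to a short worked example. The snippet below is only an illustration: it recovers the familiar solar surface temperature of about 5,800 K from the Stefan–Boltzmann law and then applies the lifetime ∝ M/L argument with an assumed 10 Gyr normalisation for the Sun.

```python
import math

sigma_B = 5.670e-8            # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun   = 3.828e26            # solar luminosity, W
R_sun   = 6.957e8             # solar radius, m

# Surface temperature from L = 4 pi R^2 sigma_B T_S^4
T_surface = (L_sun / (4 * math.pi * R_sun ** 2 * sigma_B)) ** 0.25
print(round(T_surface))       # about 5770 K

def lifetime_gyr(m):
    """Rough main-sequence lifetime in Gyr from t ~ M/L with L ~ M^3.5."""
    return 10.0 * m / (m ** 3.5)      # i.e. 10 * m**(-2.5), normalised to the Sun

print(round(lifetime_gyr(2.0), 2))    # a 2 Msun star lasts roughly 1.8 Gyr
```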
Physical sciences
Stellar astronomy
Astronomy
24083865
https://en.wikipedia.org/wiki/Coastal%20fish
Coastal fish
Coastal fish, also called inshore fish or neritic fish, inhabit the sea between the shoreline and the edge of the continental shelf. Since the continental shelf is usually less than deep, it follows that pelagic coastal fish are generally epipelagic fish, inhabiting the sunlit epipelagic zone. Coastal fish can be contrasted with oceanic fish or offshore fish, which inhabit the deep seas beyond the continental shelves. Coastal fish are the most abundant in the world. They can be found in tidal pools, fjords and estuaries, near sandy shores and rocky coastlines, around coral reefs and on or above the continental shelf. Coastal fish include forage fish and the predator fish that feed on them. Forage fish thrive in inshore waters where high productivity results from upwelling and shoreline run off of nutrients. Some are partial residents that spawn in streams, estuaries and bays, but most complete their life cycles in the zone. Coastal habitats Coastal fish are found in the waters above the continental shelves that extend from the continental shorelines, and around the coral reefs that surround volcanic islands. The total world shoreline extends for and the continental shelves occupy a total area of 24.3 million km2 (9 376 million sq mi). This is nearly 5% of the world's total area of 510 million km2. Nearshore fish Nearshore fish, sometimes called littoral fish, live close to the shore. They are associated with the intertidal zone, or with estuaries, lagoons, coral reefs, kelp forests, seagrass meadows, or rocky or sandy bottoms, usually in shallow waters less than about deep. Intertidal fish Intertidal fish are fish that move in and out with the tide in the intertidal zone of the seashore, or are found in rock pools or under rocks. The intertidal zone of rocky shores can contain indentations which trap pools of salty water, called rock pools. Living in these habitats are communities of hardy plant and animal species specially adapted for coping with the volatile environment around them. The plants and animals interact with each other and with the rock pool to form miniature ecosystems, easily accessible to students and a source of fascination for young children. Plants such as seaweeds, cnidarians such as sea anemones, arthropods like barnacles, and molluscs such as the common limpet and the common periwinkle can be permanent residents of rock pools. But most rock pool animals, such as crabs, shrimp and fish are just temporary residents, occupying a rock pool only until the next tide takes them to a new location. Some rock pool fish which are temporary residents include the long-spined sea scorpion, the pipefish worm, the rock goby and the common lumpsucker. However some other rock pool fish are territorial in nature, and will stay with the same pool for extended periods. Examples are the common blenny and its near relative the butterfish. The common blenny, also known as the shanny, is found in northern temperate waters. They hide under rocks and in crannies in rock pools when the tide is out. They feed on green seaweed and invertebrates such as barnacles. They can crawl on dry land, using their paired fins. About long, they have smooth skin, without scales, and are covered with soft slime. The slime prevents them drying if they are stranded on a shore between tides. So long as their skin stays moist, they can breathe out of water. They are sometimes called "sea frogs" because they bask in the sun on weeds outside the water, and like frogs, jump to safety when disturbed. 
They can change their colour to match their surroundings. The female lays eggs in crevices or under stones and the male guards them until they hatch. In the winter, when storms can be severe, they move out of their rock pools into the shallows. The common blenny is bold with strong teeth, and will bite humans if it feels threatened. The rock goby is a small fish, about , found in northern temperate waters. It is coloured black with white blotches, and hides under stones and amongst seaweed. It is a temporary resident of rock pools when the tide is out. The female rock goby lays eggs on the underside of rocks and shells and then leaves them. The male guards the eggs until they hatch. First-year rock gobies often visit rock pools in winter when the older fish have left. The long-spined sea scorpion, a small stout fish which grows about long, is another temporary resident of rocky pools. They have large black eyes, a large mouth, and four long spines—two on each side on the gill cover—that stick out when the fish is removed from the water. They also have an organ like a finger on each side of their mouths which helps them catch prey. Because of their broad heads, they are also called "bullheads". They have a variety of effective camouflaged colours ranging from shades of browns with cream blotches, to orange and red with white blotches. They can also change their body colour to match their surroundings. They are found around the coasts of Northern Europe in shallow rocky waters hiding amongst seaweed. They are also found in rock pools and sometimes in waters deep. Long-spined sea scorpions lay eggs amongst seaweed or attached to rock crevices. The young hatch after two or three weeks, and go through several development stages before maturing into adults. Lumpsuckers are found in temperate northern waters. They live on the seafloor, and are temporary residents of rocky pools in late winter and early spring when they spawn. The body of the lumpsucker is scaleless and covered with small lumps. They have a large sucking disc on their underside which they use to cling to surfaces. They are normally a blue to slate-grey colour, and are effectively camouflaged to look like stones. They are portly, nearly spherical, poor swimmers, reaching lengths up to . After the female lumpsucker lays eggs, the male takes over, clamping itself to a rock where it guards the eggs. When they hatch, lumpsuckers look like tiny tadpoles. They remain in shallow water and rock pools, hiding amongst seaweed and rocks, until they grow up. Estuarine fish Estuaries are partly enclosed coastal bodies of water with one or more rivers or streams flowing into them, and with a free connection to the open sea. These brackish water habitats form a transition zone between river environments and ocean environments, and ecological successions can form along the way. Estuaries are subject to both marine influences, such as tides, waves, and the influx of saline water; and riverine influences, such as flows of fresh water and sediment. The inflow of both seawater and freshwater provide high levels of nutrients in both the water column and sediment, making estuaries productive natural habitats. Fishes that spend time in estuaries (or river mouths) need to be euryhaline (tolerant to a range of salinities). Estuaries provide an unstable environment for fish, where the salinity changes and the waters are often muddy and turbulent. In warmer climates, estuaries have mangroves around their edges. 
At times there may be only a few different fish species present in an estuary, but seasonal migrants, including eels, salmonids, and some forage fish such as herrings and sprats increase the diversity in the estuary. River estuaries form important staging points during the migration of anadromous and catadromus fish species, such as salmon and eels, giving them time to form social groups and to adjust to the changes in salinity. Salmon are anadromous, meaning they live in the sea but ascend rivers to spawn; eels are catadromous, living in rivers and streams, but returning to the sea to breed. Besides the species that migrate through estuaries, there are many other fish that use them as "nursery grounds" for spawning or as places young fish can feed and grow before moving elsewhere. For example, herring and plaice are two commercially important species that use the Thames Estuary for this purpose. Mangrove swamps are associated brackish water habitats. Many, though not all, mangrove swamps fringe estuaries and lagoons where the salinity changes with each tide. Among the most specialised residents of mangrove forests are mudskippers, fish that forage for food on land, and archer fish, perch-like fish that "spit" at insects and other small animals living in the trees, knocking them into the water where they can be eaten. Like estuaries, mangrove swamps are important breeding grounds for many fish, with species such as snappers, halfbeaks, and tarpon spawning or maturing among them. Coral reef fish In tropical waters, coral reef fish live amongst or in close relation to coral reefs. Coral reefs form complex ecosystems with tremendous biodiversity. Coral reef fish can be particularly colourful and interesting to watch. Hundreds of species can exist in a small area of a healthy reef, many of them hidden or well camouflaged. Reef fish have developed many ingenious specialisations adapted to survival on the reefs. Coral reefs occupy less than one per cent of the surface area of the world oceans, yet they provide a home for 25 per cent of all marine fish species. Coral reefs often depend on other habitats in the surrounding area for the supply of nutrients, such as seagrass meadows and mangrove forests. Seagrass and mangroves supply dead plants and animals which are rich in nitrogen and also serve to feed fish and animals from the reef by supplying wood and vegetation. Reefs in turn protect mangroves and seagrass from waves and produce sediment for the mangroves and seagrass to root in. Anthias are members of the family Serranidae and make up the subfamily Anthiinae. They are widespread in tropical waters. They have been called the "quintessential reef fish", and make up a sizeable portion of the colourful fishes seen swarming in coral reef photography. Anthias are mostly small, peaceful, beautiful and popular as ornamental fish. They are mainly zooplankton feeders. Anthias shoal and school in large numbers, operating more intimate "harems" within the schools. These harems contain a dominant and colourful male, between 2 and 12 females — who operate a hierarchy among themselves — and one or two "subdominant" males, often less brightly coloured and non-territorial. Within the swarm of females, territorial males perform acrobatic U-swim displays and vigorously defend an area of the reef and its associated harem. Anthias are protogynous hermaphrodites. All anthias are born female; if a dominant male perishes, the largest female of the group will often change into a male to take its place. 
This may lead to squabbling between the next largest male and the transforming female, whose hormones are now surging with testosterone. This can turn quite vicious in the limited confines of captivity. Butterflyfish are group of about 120 species belonging to the family Chaetodontidaeof Perchiformes. They include bannerfish and coralfish. They are widespread on coral reefs. Butterflyfish are mostly between in length. The largest species, the lined butterflyfish and saddle butterflyfish, grow to . Many species are brightly coloured and strikingly patterned, though other species are dull in colour. Many have eyespots on their flanks and dark bands across their eyes, not unlike the patterns seen on butterfly wings. Their deep, laterally narrow bodies are easily noticed through the profusion of reef life. The conspicuous colouration of butterflyfish may be intended for interspecies communication. Butterflyfish have uninterrupted dorsal fins with tail fins that may be rounded or truncated, but are never forked. Generally diurnal and frequenting waters of less than (though some species descend to ), butterflyfish stick to particular home ranges. The corallivores are especially territorial, forming mated pairs and staking claim to a specific coral head. Contrastingly, the zooplankton feeders form large conspecific groups. By night butterflyfish hide in reef crevices and exhibit markedly different colouration. Their colouration also makes butterflyfish popular aquarium fish. However, most species feed on coral polyps and sea anemones, which can result in problems for the hobby aquarists. Clownfish, anemonefish and damselfish are among about 360 species classified in the family Pomacentridae. Most Pomacentrids are associated with coral reefs in the Indo-West Pacific, with a few species occurring in temperate waters. Some species are native to freshwater or brackish estuarine environments. Most live in shallow water, from , although some species are found below . Most species are specialists, living in specific parts of the reef, such as sandy lagoons, steep reef slopes, or areas exposed to strong wave action. In general, the coral is used as shelter, and many species can only survive in its presence. The bottom-dwelling species are territorial, occupying and defending a portion of the reef, often centred around an area of shelter. By keeping away other species of fish, some pomacentrids encourage the growth of thick mats of algae within their territories, leading to the common name farmerfish. Different species display a wide range of colours, although some are relatively drab. Pomacentrids are omnivorous or herbivorous, feeding off algae, plankton, and small bottom-dwelling crustaceans. A small number eat coral. Goatfishes are a family Mullidae of about 55 species of perciform fishes, associated worldwide with tropical reefs. They are typically about 20 cm long, though the dash-and-dot goatfish, grows to 55 cm. Goatfish are tireless benthic feeders, possess a pair of long chemosensory barbels ("whiskers") protruding from their chins resembling a goat's beard. They use these to rifle through the sediments in search of a meal. Like goats, they seek anything edible; worms, crustaceans, molluscs and other small invertebrates are staples. Many species of goatfish are conspicuously coloured and have the ability to change their colouration depending on their current activity. By day, many form large inactive (non-feeding) schools: these aggregates may contain both conspecifics and heterospecifics. 
For example, the yellowfin goatfish school with blue-striped snappers. When they do that, the yellowfins changes its colouration to match that of the snapper. By night the schools disperse and individual goatfish head their separate ways to loot the sands. The diurnal goldsaddle goatfish changes from a lemon-yellow to a pale cream when feeding. Other nocturnal feeders will shadow the active goatfish, waiting patiently for overlooked morsels. Goatfish stay within the shallows, going no deeper than about 110 metres. Most species do not tolerate brackish water, so they do not enter estuaries or the mouths of rivers. Other nearshore fish Other nearshore or shallow water fish live near the shore in depths of less than 10 metres. They occupy the areas over sandy or rocky bottoms, and can be associated with seagrass meadows and kelp forests. They can be divided into demersal fish and pelagic fish. Demersal fish live on or near the sea floor, while pelagic fish live in the water column away the sea floor. Examples of such shallow water demersal fish, found in both tropical and temperate waters around the world, are seahorses, triplefins, wrasse and flounder. As demersal fish, all these fish spend most of their time on or near the sea floor. Flatfish are superbly adapted groundfish, found on muddy and sandy sea floors. In many species both eyes lie on one side of the head, one or the other migrating through and around the head during development. Some species face their "left" side upward, some face their "right" side upward, and others face either side upward. Some flatfish can camouflage themselves on the ocean floor. Wrasse are a large family of mainly small fish, usually less than long. Most wrasse are loners that prefer habitats such as coral reefs and rocky shores. They live close to the substrate, eating small invertebrates and almost anything else that lurks on the bottom. Many are brightly coloured. They have thick lips and use their sharp teeth to pick small creatures off the rocks. Many smaller wrasses follow the feeding trails of larger fish, picking up invertebrates disturbed by their passing. Triplefins are a family of fish. They are usually found around coral reefs and rocks, usually in shallow, clear sunlit waters such as lagoons and seaward reefs. Triplefins have three dorsal fins (hence the name). They are small fish, usually less than six cm long. Brightly coloured, often for reasons of camouflage, they are nervous and retreat to rock crevices at any perceived threat. Seahorses are a genus of fish. They prefer sheltered harbours, estuaries and other shallow coastal waters, where they hunt tiny crustaceans. They bob around in sheltered areas such as coral reefs, mangrove stands and seagrass meadows and estuaries. They are camouflaged with murky patterns that blend into kelp and sea grass backgrounds. During social moments or in unusual surroundings, seahorses can turn on bright colours. Examples of shallow water pelagic fish, found in both tropical and temperate waters around the world, are grey mullet, sprats and garfish. As pelagic fish, all these fish spend most of their time living in the water column away the sea floor. The grey mullet are medium size fish, typically about long. They are often caught with seine nets. The garfish is a long, slender fish, looking like a spear, which feeds on seagrass fragments, shrimps and crab larvae. In turn it is preyed on by larger fish and, since it is often near the surface, cormorants and gannets. 
Coastal pelagic fish Plankton feeding At the base of food chains are the primary producers. In the ocean these primary producers are mainly a type of plankton, microscopic phytoplankton, which drift in the water column. Phytoplankton need sunlight for photosynthesis to power carbon fixation, so they are mainly located in sunlit surface waters. Phytoplankton also need and rapidly use nutrients in the water column. The phytoplankton are eaten by zooplankton, which in turn are eaten by predatory zooplankton. Filter feeders then eat the plankton, and larger predatory fish eat the filter feeders. Most filter-feeding pelagic fish found in coastal waters are small, silvery forage fish. Forage fish include fishes of the family Clupeidae (herrings, shad, sardines and pilchards, hilsa, menhaden and sprats), as well as anchovies, capelin and halfbeaks. They use schooling strategies to avoid predators, and different schools of forage fish often associate with each other in open coastal waters. Forage fish feed near the base of the food chain on plankton and fry (recently hatched fish), often by filter feeding. In turn, they are preyed on by larger predatory fish, seabirds and marine mammals. Worldwide, there are five major coastal currents associated with upwelling areas: the Canary Current (off Northwest Africa), the Benguela Current (off southern Africa), the California Current (off California and Oregon), the Humboldt Current (off Peru and Chile), and the Somali Current (off Somalia). All of these currents support major fisheries. Many forage fish are important commercial species, and the schools can be targeted by spotter planes. The fish are caught by purse seiners (fishing boats that use nets to enclose the fish) and can be overfished. Predatory Predatory pelagic fishes found on continental shelves worldwide in both tropical and temperate waters include porgies, barracuda, amberjacks and cutlassfishes. They tend to be larger fish, and are carnivorous, feeding on the smaller, silvery forage fish that eat plankton (see section above). Some species also feed on crabs and other invertebrates, foraged from the sea floor. Porgies, sometimes called sea breams, are any of about 100 species belonging to the family Sparidae. Porgies usually have high backs and a single dorsal fin, like snapper or grunt fishes (grunts are named for the sound they make grinding their teeth). They are bottom-feeding pelagic fishes, with small mouths equipped with strong teeth adapted for handling small fishes and invertebrates with hard shells. Most do not exceed a size of about , but some may grow to four times that length. They often school, and will migrate between reefs. Larger fish enter estuaries and harbours. Barracuda have long slender bodies typically about long. They have a wicked set of teeth and are ferocious predators. They feed on crustaceans, cephalopods and small fish like anchovy and pilchard. Barracuda often hunt in schools near the bottom or in midwater, and sometimes even near the surface at night. Cutlassfishes are a group of about 40 species belonging to the family Trichiuridae. They are ocean fish which regularly stray into coastal waters around the world. Fish of this family are long, slender, and generally steely blue or silver in colour, giving rise to their name. They have reduced or absent pelvic and caudal fins, giving them an eel-like appearance, and large fang-like teeth.
Jacks, amberjacks, pompanos, horse mackerel, scads, leatherjackets and trevally are fish of the family Carangidae. Found in most coastal waters, they are fast predatory fishes that hunt in the waters above reefs and in the open sea; some dig in the sea floor for invertebrates (some can also filter feed, such as the white trevally). The largest fish in the family, the giant trevally, grows up to 1.7 m in length; most fish in the family reach a maximum length of 25–100 cm. The family contains many important commercial and game fish, notably the Pacific jack mackerel and the other jack mackerels in the genus Trachurus. The type species of this genus is the Atlantic horse mackerel. Jack mackerels are an important inshore commercial species. Amberjacks are a group of nine species belonging to the genus Seriola within the family Carangidae. Mainly open water fish, they can follow small forage fish into estuaries and enclosed waters, where they will also hunt for crustaceans. Amberjacks are fast swimming and aggressive predators that often hunt in schools around offshore reefs. The yellowtail amberjack can reach 1.8 m in length and weigh 60 kilograms. Demersal pelagic fish Fish that live on or in close association with the sea floor are called demersal fish. This section discusses the coastal demersal fish that live on the continental shelf, but are living further from the coast and in deeper water than the nearshore fish discussed above. Demersal fish are white fish. Unlike oily fish, white fish contain oils only in their liver, rather than in the gut, and can therefore be gutted as soon as they are caught, on board the ship. White fish has dry and white flesh. They can be divided into benthopelagic fish (mostly "round" fish) which live near the sea bed, such as cod, and benthic fish (flatfish) such as plaice which live on the sea bed. Benthic fish tend to be "flat", so they can lie on the bottom. Cod-like fishes are a number of round benthopelagic species belonging to the order Gadiformes, such as Atlantic and Pacific cod, morid cod, haddock and pollock, including the highly commercial Alaska pollock. Cod-like fishes are often found in large schools over sandy or muddy bottoms. They have a barbel (fleshy filament) on their lower jaw which they use to detect prey buried in the sand or mud. Some migrate to warm water in winter to spawn. John Dory are fishes of the genus Zeus. They have a widespread distribution and are typically found near the seabed in depths from . The John Dory grows to a maximum length of . Although it is a benthopelagic fish, its body is flat and it can hardly be seen from the front because it is so thin. It is a poor swimmer with long spines on the dorsal fin. It has a large dark eyespot on the side of its body which is used to confuse prey, which are scooped up in its big mouth. Large eyes at the front of the head provide it with bifocal vision and depth perception, which are important for predators. The John Dory usually gets its food by stalking it then shooting out a tube in its mouth to capture its prey. It eats forage fish, and occasionally squid and cuttlefish. In turn, they are preyed on by sharks, like the dusky shark, and other large bony fish. They are normally solitary. Turbot and brill are benthic flatfish, resembling flounder and sole, but found in deeper offshore waters on the continental shelf. They are brownish-green, with dark blotches on the turbot and mottling on the brill. They are fished by coastal trawlers. 
Mail-cheeked fishes belong to a group of about 30 species in the order Scorpaeniformes. Mail-cheeked fishes are named after a plate of bone that runs across each cheek. They are widespread in all the oceans of the world. Mail-cheeked fishes are carnivorous, mostly feeding on crustaceans, such as crabs and shrimp, and on smaller fish. Most species live on the sea bottom in relatively shallow waters, although species are known from mid and deep water, from the mid-water, and even from fresh water. They typically have spiny heads, and rounded pectoral and caudal fins. Most species are less than in length, but the full size range of the order varies from the velvetfishes, which can be just long as adults, to the Lingcod, which can reach in length. Red gurnard are mail-cheeked fish. They use their large pectoral fins to rest on the bottom and to detect food. Stargazers are about 50 species of fishes, belonging to the family Uranoscopidae, and found worldwide in shallow waters. Stargazers are venomous; they have two large poison spines situated behind the opercle and above the pectoral fins. They can also deliver electric shocks. They are ambush predators with eyes on top of their heads (thus the name). Stargazers also have a large upward-facing mouth in a large head. They bury themselves in sand with only their eyes showing, and leap upwards to ambush fish and invertebrates overhead. Some species have a worm-shaped lure growing out of the floor of the mouth, which they wiggle to attract prey's attention. Lengths range from 18 cm up to 90 cm, for the giant stargazer Kathetostoma giganteum. Stargazers are a delicacy in some cultures. The venom is destroyed when it is cooked, and stargazers are sold in some fish markets with their electric organ removed. They have been called "the meanest things in creation" and the "worst pet on earth". Sandperches are a family, Pinguipedidae, containing 63 species of fishes in the order Perciformes. They are benthic carnivores, feeding on small fish and invertebrates. Examples are the redbanded weever, yellow weaver and blue cod. They are often caught in pots like crayfish. Medusa fishes are a family Centrolophidae of 31 species of perciform fishes. They are found in temperate and tropical waters throughout the world, usually feeding on fish, crustaceans and small squid near rough sea floors on continental shelf and slope. Examples are barrelfish, southern driftfish, imperial blackfish, the Japanese and pelagic butterfish, the New Zealand and Tasmanian ruffe, and the common, silver and white warehou. The young of some species associate with jellyfish, which provides them with protection from predators and opportunities to scavenge the remains of the jellyfish's meals. The young of other species associate with large masses of floating kelp. Grouper are fish belonging to a number of genera in the subfamily Epinephelinae of the family Serranidae, in the order Perciformes. Species of grouper include the black, comet, gag, giant, Goliath, Nassau, saddletail, tiger, Warsaw, white and yellowfin grouper. Typical lengths are 80–120 centimetres. They inhabit depths from reefs near the surface down to over 400 metres. They feed on just about any moving animal they encounter. Groper are important inshore commercial fish, usually caught with gill nets (in earlier times longlines were used). Wreckfish are a family Polyprionidae of perciform fishes, found on the floor of the continental shelf and slope where they inhabit caves and shipwrecks (thus their common name). 
The Atlantic wreckfish is at depths between . They are largely a solitary fish, though juveniles school below floating objects. Their diet includes large ocean cephalopods, crustaceans, and other bottom-dwelling fishes.
Biology and health sciences
Fishes by habitat
Animals
8481594
https://en.wikipedia.org/wiki/Co-orbital%20configuration
Co-orbital configuration
In astronomy, a co-orbital configuration is a configuration of two or more astronomical objects (such as asteroids, moons, or planets) orbiting at the same, or very similar, distance from their primary; i.e., they are in a 1:1 mean-motion resonance (or 1:−1 if orbiting in opposite directions). There are several classes of co-orbital objects, depending on their point of libration. The most common and best-known class is the trojan, which librates around one of the two stable Lagrangian points (Trojan points), L4 and L5, 60° ahead of and behind the larger body respectively. Another class is the horseshoe orbit, in which objects librate around 180° from the larger body. Objects librating around 0° are called quasi-satellites. An exchange orbit occurs when two co-orbital objects are of similar masses and thus exert a non-negligible influence on each other. The objects can exchange semi-major axes or eccentricities when they approach each other. Parameters Orbital parameters that are used to describe the relation of co-orbital objects are the longitude of the periapsis difference and the mean longitude difference. The longitude of the periapsis is the sum of the longitude of the ascending node and the argument of periapsis, and the mean longitude is the sum of the longitude of the periapsis and the mean anomaly. Trojans Trojan objects orbit 60° ahead of (L4) or behind (L5) a more massive object, both in orbit around an even more massive central object. The best known examples are the large population of asteroids that orbit ahead of or behind Jupiter around the Sun. Trojan objects do not orbit exactly at either of these Lagrangian points, but do remain relatively close to one, appearing to slowly orbit it. In technical terms, they librate around differences in the longitude of periapsis and mean longitude of (±60°, ±60°). The point around which they librate is the same, irrespective of their mass or orbital eccentricity. Trojan minor planets There are several thousand known trojan minor planets orbiting the Sun. Most of these orbit near Jupiter's Lagrangian points; these are the traditional Jupiter trojans. There are also 13 known Neptune trojans, 7 Mars trojans, 2 Uranus trojans (2011 QF99 and 2014 YX49), and 2 Earth trojans (2010 TK7 and (614689) 2020 XL5). No Saturnian trojans were known until the discovery of 2019 UO14. Trojan moons The Saturnian system contains two sets of trojan moons. Both Tethys and Dione have two trojan moons each: Telesto and Calypso in Tethys's L4 and L5 respectively, and Helene and Polydeuces in Dione's L4 and L5 respectively. Polydeuces is notable for its wide libration: it wanders as far as ±30° from its Lagrangian point and ±2% from its mean orbital radius, along a tadpole orbit in 790 days (288 times its orbital period around Saturn, the same as Dione's). Trojan planets A pair of co-orbital exoplanets was proposed to be orbiting the star Kepler-223, but this was later retracted. The possibility of a trojan planet to Kepler-91b was studied, but the conclusion was that the transit signal was a false positive. In April 2023, a group of amateur astronomers reported two new exoplanet candidates co-orbiting in a horseshoe exchange orbit close to the star GJ 3470 (this star is already known to have a confirmed planet, GJ 3470 b). However, that study is available only as a preprint on arXiv and has not yet been peer reviewed or published in a scientific journal. In July 2023, the possible detection of a cloud of debris co-orbital with the proto-planet PDS 70 b was announced.
This debris cloud could be evidence of a Trojan planetary-mass body or one in the process of forming. One possibility for a planet in the habitable zone is a trojan planet of a giant planet orbiting close to its star. The reason why no trojan planets have been definitively detected could be that tides destabilize their orbits. Formation of the Earth–Moon system According to the giant impact hypothesis, the Moon formed after a collision between two co-orbital objects: Theia, thought to have had about 10% of the mass of Earth (about as massive as Mars), and the proto-Earth. Their orbits were perturbed by other planets, bringing Theia out of its trojan position and causing the collision. Horseshoe orbits Objects in a horseshoe orbit librate around 180° from the primary. Their orbits encompass both equilateral Lagrangian points, i.e. L4 and L5. Co-orbital moons The Saturnian moons Janus and Epimetheus share their orbits, the difference in semi-major axes being less than either's mean diameter. This means the moon with the smaller semi-major axis slowly catches up with the other. As it does this, the moons gravitationally tug at each other, increasing the semi-major axis of the moon that has caught up and decreasing that of the other. This reverses their relative positions proportionally to their masses and causes this process to begin anew with the moons' roles reversed. In other words, they effectively swap orbits, with both ultimately oscillating about their mass-weighted mean orbit. Earth co-orbital asteroids A small number of asteroids have been found which are co-orbital with Earth. The first of these to be discovered, asteroid 3753 Cruithne, orbits the Sun with a period slightly less than one Earth year, resulting in an orbit that (from the point of view of Earth) appears as a bean-shaped orbit centered on a position ahead of the position of Earth. This orbit slowly moves further ahead of Earth's orbital position. When Cruithne's orbit moves to a position where it trails Earth's position, rather than leading it, the gravitational effect of Earth increases the orbital period, and hence the orbit then begins to lag, returning to the original location. The full cycle from leading to trailing Earth takes 770 years, leading to a horseshoe-shaped movement with respect to Earth. More resonant near-Earth objects (NEOs) have since been discovered. These include 54509 YORP and several other asteroids that exist in resonant orbits similar to Cruithne's. 2010 TK7 and (614689) 2020 XL5 are the only two identified Earth trojans. Hungaria asteroids have been identified as one possible source of Earth's co-orbital objects, with lifetimes in the co-orbital state of up to ~58 kyr. Quasi-satellite Quasi-satellites are co-orbital objects that librate around 0° from the primary. Low-eccentricity quasi-satellite orbits are highly unstable, but for moderate to high eccentricities such orbits can be stable. From a co-rotating perspective the quasi-satellite appears to orbit the primary like a retrograde satellite, although at distances so large that it is not gravitationally bound to it. Examples of quasi-satellites of the Earth include 469219 Kamoʻoalewa. Exchange orbits In addition to swapping semi-major axes like Saturn's moons Epimetheus and Janus, another possibility is to share the same semi-major axis but swap eccentricities instead.
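The orbit-swapping behaviour of Janus and Epimetheus can be illustrated with a rough drift estimate based on Kepler's third law. The semi-major axis, separation and period used below are approximate published values quoted here only as assumptions; the sketch suggests close approaches roughly every four years, consistent with the slow catch-up described above.

```python
# Toy estimate of how fast two co-orbital moons drift relative to each other.
a_km       = 151_460.0     # shared mean semi-major axis, km (assumed value)
delta_a_km = 50.0          # difference in semi-major axes, km (assumed value)
period_d   = 0.695         # orbital period around Saturn, days (assumed value)

# Kepler's third law: mean motion n ~ a^(-3/2), so dn/n = -(3/2) * da/a
mean_motion    = 360.0 / period_d                          # degrees per day
relative_drift = 1.5 * (delta_a_km / a_km) * mean_motion   # degrees per day

print(f"relative drift ~ {relative_drift:.3f} deg/day")
print(f"~{360.0 / relative_drift / 365.25:.1f} years between close approaches")
```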
Physical sciences
Orbital mechanics
Astronomy
8482163
https://en.wikipedia.org/wiki/Magnetosphere%20of%20Jupiter
Magnetosphere of Jupiter
The magnetosphere of Jupiter is the cavity created in the solar wind by Jupiter's magnetic field. Extending up to seven million kilometers in the Sun's direction and almost to the orbit of Saturn in the opposite direction, Jupiter's magnetosphere is the largest and most powerful of any planetary magnetosphere in the Solar System, and by volume the largest known continuous structure in the Solar System after the heliosphere. Wider and flatter than the Earth's magnetosphere, Jupiter's is stronger by an order of magnitude, while its magnetic moment is roughly 18,000 times larger. The existence of Jupiter's magnetic field was first inferred from observations of radio emissions at the end of the 1950s and was directly observed by the Pioneer 10 spacecraft in 1973. Jupiter's internal magnetic field is generated by electrical currents in the planet's outer core, which is theorized to be composed of liquid metallic hydrogen. Volcanic eruptions on Jupiter's moon Io eject large amounts of sulfur dioxide gas into space, forming a large torus around the planet. Jupiter's magnetic field forces the torus to rotate with the same angular velocity and direction as the planet. The torus in turn loads the magnetic field with plasma, in the process stretching it into a pancake-like structure called a magnetodisk. In effect, Jupiter's magnetosphere is internally driven, shaped primarily by Io's plasma and its own rotation, rather than by the solar wind as at Earth's magnetosphere. Strong currents in the magnetosphere generate permanent aurorae around the planet's poles and intense variable radio emissions, which means that Jupiter can be thought of as a very weak radio pulsar. Jupiter's aurorae have been observed in almost all parts of the electromagnetic spectrum, including infrared, visible, ultraviolet and soft X-rays. The action of the magnetosphere traps and accelerates particles, producing intense belts of radiation similar to Earth's Van Allen belts, but thousands of times stronger. The interaction of energetic particles with the surfaces of Jupiter's largest moons markedly affects their chemical and physical properties. Those same particles also affect and are affected by the motions of the particles within Jupiter's tenuous planetary ring system. Radiation belts present a significant hazard for spacecraft and potentially to human space travellers. Structure Jupiter's magnetosphere is a complex structure comprising a bow shock, magnetosheath, magnetopause, magnetotail, magnetodisk, and other components. The magnetic field around Jupiter emanates from a number of different sources, including fluid circulation at the planet's core (the internal field), electrical currents in the plasma surrounding Jupiter and the currents flowing at the boundary of the planet's magnetosphere. The magnetosphere is embedded within the plasma of the solar wind, which carries the interplanetary magnetic field. Internal magnetic field The bulk of Jupiter's magnetic field, like Earth's, is generated by an internal dynamo supported by the circulation of a conducting fluid in its outer core. But whereas Earth's core is made of molten iron and nickel, Jupiter's is composed of metallic hydrogen. As with Earth's, Jupiter's magnetic field is mostly a dipole, with north and south magnetic poles at the ends of a single magnetic axis. On Jupiter the north pole of the dipole (where magnetic field lines point radially outward) is located in the planet's northern hemisphere and the south pole of the dipole lies in its southern hemisphere. 
This is opposite from the Earth. Jupiter's field also has quadrupole, octupole and higher components, though they are less than one-tenth as strong as the dipole component. The dipole is tilted roughly 10° from Jupiter's axis of rotation; the tilt is similar to that of the Earth (11.3°). Its equatorial field strength is about 417.0  μT (4.170 G), which corresponds to a dipole magnetic moment of about 2.83 T·m3. This makes Jupiter's magnetic field about 20 times stronger than Earth's, and its magnetic moment ~20,000 times larger. Jupiter's magnetic field rotates at the same speed as the region below its atmosphere, with a period of 9 h 55 m. No changes in its strength or structure had been observed since the first measurements were taken by the Pioneer spacecraft in the mid-1970s, until 2019. Analysis of observations from the Juno spacecraft show a small but measurable change from the planet's magnetic field observed during the Pioneer era. In particular, Jupiter has a region of strongly non-dipolar field, known as the "Great Blue Spot", near the equator. This may be roughly analogous to the Earth's South Atlantic Anomaly. This region shows signs of large secular variations. Size and shape Jupiter's internal magnetic field prevents the solar wind, a stream of ionized particles emitted by the Sun, from interacting directly with its atmosphere, and instead diverts it away from the planet, effectively creating a cavity in the solar wind flow, called a magnetosphere, composed of a plasma different from that of the solar wind. The Jovian magnetosphere is so large that the Sun and its visible corona would fit inside it with room to spare. If one could see it from Earth, it would appear five times larger than the full moon in the sky despite being nearly 1700 times farther away. As with Earth's magnetosphere, the boundary separating the denser and colder solar wind's plasma from the hotter and less dense one within Jupiter's magnetosphere is called the magnetopause. The distance from the magnetopause to the center of the planet is from 45 to 100 RJ (where RJ=71,492 km is the radius of Jupiter) at the subsolar point—the unfixed point on the surface at which the Sun would appear directly overhead to an observer. The position of the magnetopause depends on the pressure exerted by the solar wind, which in turn depends on solar activity. In front of the magnetopause (at a distance from 80 to 130 RJ from the planet's center) lies the bow shock, a wake-like disturbance in the solar wind caused by its collision with the magnetosphere. The region between the bow shock and magnetopause is called the magnetosheath. At the opposite side of the planet, the solar wind stretches Jupiter's magnetic field lines into a long, trailing magnetotail, which sometimes extends well beyond the orbit of Saturn. The structure of Jupiter's magnetotail is similar to Earth's. It consists of two lobes (blue areas in the figure), with the magnetic field in the southern lobe pointing toward Jupiter, and that in the northern lobe pointing away from it. The lobes are separated by a thin layer of plasma called the tail current sheet (orange layer in the middle). 
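For a feel for the numbers, the equatorial surface field quoted above can be scaled outward assuming a perfectly centred dipole, a minimal sketch only: beyond roughly 10 RJ the real field departs strongly from a dipole because of the magnetodisk currents described below, so the outer value is no more than a lower bound.

```python
# Equatorial field of a centred dipole falls off as the inverse cube of distance.
B_EQ_NT = 417_000.0          # equatorial surface field, nanotesla (4.170 G, as quoted)

def dipole_field_nt(r_rj):
    """Equatorial dipole field (nT) at r jovian radii from the planet's centre."""
    return B_EQ_NT / r_rj ** 3

print(round(dipole_field_nt(6)))      # ~1900 nT, near the Io plasma torus
print(round(dipole_field_nt(75), 1))  # ~1 nT at a typical subsolar magnetopause distance
```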
The shape of Jupiter's magnetosphere described above is sustained by the neutral sheet current (also known as the magnetotail current), which flows with Jupiter's rotation through the tail plasma sheet, the tail currents, which flow against Jupiter's rotation at the outer boundary of the magnetotail, and the magnetopause currents (or Chapman–Ferraro currents), which flow against rotation along the dayside magnetopause. These currents create the magnetic field that cancels the internal field outside the magnetosphere. They also interact substantially with the solar wind. Jupiter's magnetosphere is traditionally divided into three parts: the inner, middle and outer magnetosphere. The inner magnetosphere is located at distances closer than 10 RJ from the planet. The magnetic field within it remains approximately dipole, because contributions from the currents flowing in the magnetospheric equatorial plasma sheet are small. In the middle (between 10 and 40 RJ) and outer (further than 40 RJ) magnetospheres, the magnetic field is not a dipole, and is seriously disturbed by its interaction with the plasma sheet (see magnetodisk below). Role of Io Although overall the shape of Jupiter's magnetosphere resembles that of the Earth's, closer to the planet its structure is very different. Jupiter's volcanically active moon Io is a strong source of plasma in its own right, and loads Jupiter's magnetosphere with as much as 1,000 kg of new material every second. Strong volcanic eruptions on Io emit huge amounts of sulfur dioxide, a major part of which is dissociated into atoms and ionized by electron impacts and, to a lesser extent, solar ultraviolet radiation, producing ions of sulfur and oxygen. Further electron impacts produce higher charge state, resulting in a plasma of S+, O+, S2+, O2+ and S3+. They form the Io plasma torus: a thick and relatively cool ring of plasma encircling Jupiter, located near Io's orbit. The plasma temperature within the torus is 10–100 eV (100,000–1,000,000 K), which is much lower than that of the particles in the radiation belts—10 keV (100 million K). The plasma in the torus is forced into co-rotation with Jupiter, meaning both share the same period of rotation. The Io torus fundamentally alters the dynamics of the Jovian magnetosphere. As a result of several processes—diffusion and interchange instability being the main escape mechanisms—the plasma slowly leaks away from Jupiter. As the plasma moves further from the planet, the radial currents flowing within it gradually increase its velocity, maintaining co-rotation. These radial currents are also the source of the magnetic field's azimuthal component, which as a result bends back against the rotation. The particle number density of the plasma decreases from around 2,000 cm−3 in the Io torus to about 0.2 cm−3 at a distance of 35 RJ. In the middle magnetosphere, at distances greater than 10 RJ from Jupiter, co-rotation gradually breaks down and the plasma begins to rotate more slowly than the planet. Eventually at the distances greater than roughly 40 RJ (in the outer magnetosphere) this plasma is no longer confined by the magnetic field and leaves the magnetosphere through the magnetotail. As cold, dense plasma moves outward, it is replaced by hot, low-density plasma, with temperatures of up to 20 keV (200 million K) or higher) moving in from the outer magnetosphere. Some of this plasma, adiabatically heated as it approaches Jupiter, may form the radiation belts in Jupiter's inner magnetosphere. 
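A quick back-of-the-envelope calculation shows why rigid co-rotation becomes hard to maintain far from the planet: the required speed grows linearly with distance. The snippet below uses the rotation period and jovian radius quoted in this article; the 6 RJ and 40 RJ distances stand for the Io torus and for the region where co-rotation breaks down.

```python
import math

R_J      = 7.1492e7                  # jovian radius, m
period_s = 9 * 3600 + 55 * 60        # 9 h 55 m rotation period, in seconds

def corotation_speed_kms(r_rj):
    """Speed (km/s) of plasma rigidly co-rotating at r jovian radii."""
    return 2 * math.pi * r_rj * R_J / period_s / 1000.0

print(round(corotation_speed_kms(6)))    # ~75 km/s at the Io torus
print(round(corotation_speed_kms(40)))   # ~500 km/s if co-rotation were rigid at 40 RJ
```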
Magnetodisk While Earth's magnetic field is roughly teardrop-shaped, Jupiter's is flatter, more closely resembling a disk, and "wobbles" periodically about its axis. The main reasons for this disk-like configuration are the centrifugal force from the co-rotating plasma and thermal pressure of hot plasma, both of which act to stretch Jupiter's magnetic field lines, forming a flattened pancake-like structure, known as the magnetodisk, at the distances greater than 20 RJ from the planet. The magnetodisk has a thin current sheet at the middle plane, approximately near the magnetic equator. The magnetic field lines point away from Jupiter above the sheet and towards Jupiter below it. The load of plasma from Io greatly expands the size of the Jovian magnetosphere, because the magnetodisk creates an additional internal pressure which balances the pressure of the solar wind. In the absence of Io the distance from the planet to the magnetopause at the subsolar point would be no more than 42 RJ, whereas it is actually 75 RJ on average. The configuration of the magnetodisk's field is maintained by the azimuthal ring current (not an analog of Earth's ring current), which flows with rotation through the equatorial plasma sheet. The Lorentz force resulting from the interaction of this current with the planetary magnetic field creates a centripetal force, which keeps the co-rotating plasma from escaping the planet. The total ring current in the equatorial current sheet is estimated at 90–160 million amperes. Dynamics Co-rotation and radial currents The main driver of Jupiter's magnetosphere is the planet's rotation. In this respect Jupiter is similar to a device called a Unipolar generator. When Jupiter rotates, its ionosphere moves relatively to the dipole magnetic field of the planet. Because the dipole magnetic moment points in the direction of the rotation, the Lorentz force, which appears as a result of this motion, drives negatively charged electrons to the poles, while positively charged ions are pushed towards the equator. As a result, the poles become negatively charged and the regions closer to the equator become positively charged. Since the magnetosphere of Jupiter is filled with highly conductive plasma, the electrical circuit is closed through it. A current called the direct current flows along the magnetic field lines from the ionosphere to the equatorial plasma sheet. This current then flows radially away from the planet within the equatorial plasma sheet and finally returns to the planetary ionosphere from the outer reaches of the magnetosphere along the field lines connected to the poles. The currents that flow along the magnetic field lines are generally called field-aligned or Birkeland currents. The radial current interacts with the planetary magnetic field, and the resulting Lorentz force accelerates the magnetospheric plasma in the direction of planetary rotation. This is the main mechanism that maintains co-rotation of the plasma in Jupiter's magnetosphere. The current flowing from the ionosphere to the plasma sheet is especially strong when the corresponding part of the plasma sheet rotates slower than the planet. As mentioned above, co-rotation breaks down in the region located between 20 and 40 RJ from Jupiter. This region corresponds to the magnetodisk, where the magnetic field is highly stretched. The strong direct current flowing into the magnetodisk originates in a very limited latitudinal range of about ° from the Jovian magnetic poles. 
These narrow circular regions correspond to Jupiter's main auroral ovals. (See below.) The return current flowing from the outer magnetosphere beyond 50 RJ enters the Jovian ionosphere near the poles, closing the electrical circuit. The total radial current in the Jovian magnetosphere is estimated at 60 million–140 million amperes. The acceleration of the plasma into the co-rotation leads to the transfer of energy from the Jovian rotation to the kinetic energy of the plasma. In that sense, the Jovian magnetosphere is powered by the planet's rotation, whereas the Earth's magnetosphere is powered mainly by the solar wind. Interchange instability and reconnection The main problem encountered in deciphering the dynamics of the Jovian magnetosphere is the transport of heavy cold plasma from the Io torus at 6 RJ to the outer magnetosphere at distances of more than 50 RJ. The precise mechanism of this process is not known, but it is hypothesized to occur as a result of plasma diffusion due to interchange instability. The process is similar to the Rayleigh-Taylor instability in hydrodynamics. In the case of the Jovian magnetosphere, centrifugal force plays the role of gravity; the heavy liquid is the cold and dense Ionian (i.e. pertaining to Io) plasma, and the light liquid is the hot, much less dense plasma from the outer magnetosphere. The instability leads to an exchange between the outer and inner parts of the magnetosphere of flux tubes filled with plasma. The buoyant empty flux tubes move towards the planet, while pushing the heavy tubes, filled with the Ionian plasma, away from Jupiter. This interchange of flux tubes is a form of magnetospheric turbulence. This highly hypothetical picture of the flux tube exchange was partly confirmed by the Galileo spacecraft, which detected regions of sharply reduced plasma density and increased field strength in the inner magnetosphere. These voids may correspond to the almost empty flux tubes arriving from the outer magnetosphere. In the middle magnetosphere, Galileo detected so-called injection events, which occur when hot plasma from the outer magnetosphere impacts the magnetodisk, leading to increased flux of energetic particles and a strengthened magnetic field. No mechanism is yet known to explain the transport of cold plasma outward. When flux tubes loaded with the cold Ionian plasma reach the outer magnetosphere, they go through a reconnection process, which separates the magnetic field from the plasma. The former returns to the inner magnetosphere in the form of flux tubes filled with hot and less dense plasma, while the latter are probably ejected down the magnetotail in the form of plasmoids—large blobs of plasma. The reconnection processes may correspond to the global reconfiguration events also observed by the Galileo spacecraft, which occurred regularly every 2–3 days. The reconfiguration events usually included rapid and chaotic variation of the magnetic field strength and direction, as well as abrupt changes in the motion of the plasma, which often stopped co-rotating and began flowing outward. They were mainly observed in the dawn sector of the night magnetosphere. The plasma flowing down the tail along the open field lines is called the planetary wind. The reconnection events are analogues to the magnetic substorms in the Earth's magnetosphere. 
The difference seems to be their respective energy sources: terrestrial substorms involve storage of the solar wind's energy in the magnetotail followed by its release through a reconnection event in the tail's neutral current sheet. The latter also creates a plasmoid which moves down the tail. Conversely, in Jupiter's magnetosphere the rotational energy is stored in the magnetodisk and released when a plasmoid separates from it. Influence of the solar wind Whereas the dynamics of the Jovian magnetosphere mainly depend on internal sources of energy, the solar wind probably has a role as well, particularly as a source of high-energy protons. The structure of the outer magnetosphere shows some features of a solar wind-driven magnetosphere, including a significant dawn–dusk asymmetry. In particular, magnetic field lines in the dusk sector are bent in the opposite direction to those in the dawn sector. In addition, the dawn magnetosphere contains open field lines connecting to the magnetotail, whereas in the dusk magnetosphere, the field lines are closed. All these observations indicate that a solar wind-driven reconnection process, known on Earth as the Dungey cycle, may also be taking place in the Jovian magnetosphere. The extent of the solar wind's influence on the dynamics of Jupiter's magnetosphere is currently unknown; however, it could be especially strong at times of elevated solar activity. The auroral radio, optical and X-ray emissions, as well as synchrotron emissions from the radiation belts, all show correlations with solar wind pressure, indicating that the solar wind may drive plasma circulation or modulate internal processes in the magnetosphere. Emissions Aurorae Jupiter exhibits bright, persistent aurorae around both poles. Unlike Earth's aurorae, which are transient and only occur at times of heightened solar activity, Jupiter's aurorae are permanent, though their intensity varies from day to day. They consist of three main components: the main ovals, which are bright, narrow (less than 1000 km in width) circular features located at approximately 16° from the magnetic poles; the satellites' auroral spots, which correspond to the footprints of the magnetic field lines connecting Jupiter's ionosphere with those of its largest moons; and transient polar emissions situated within the main ovals (elliptical field may prove to be a better description). Auroral emissions have been detected in almost all parts of the electromagnetic spectrum from radio waves to X-rays (up to 3 keV); they are most frequently observed in the mid-infrared (wavelength 3–4 μm and 7–14 μm) and far ultraviolet spectral regions (wavelength 120–180 nm). The main ovals are the dominant part of the Jovian aurorae. They have roughly stable shapes and locations, but their intensities are strongly modulated by the solar wind pressure—the stronger the solar wind, the weaker the aurorae. As mentioned above, the main ovals are maintained by the strong influx of electrons accelerated by the electric potential drops between the magnetodisk plasma and the Jovian ionosphere. These electrons carry field-aligned currents, which maintain the plasma's co-rotation in the magnetodisk. The potential drops develop because the sparse plasma outside the equatorial sheet can only carry a current of a limited strength without driving instabilities and producing potential drops.
The precipitating electrons have energy in the range 10–100 keV and penetrate deep into the atmosphere of Jupiter, where they ionize and excite molecular hydrogen, causing ultraviolet emission. The total energy input into the ionosphere is 10–100 TW. In addition, the currents flowing in the ionosphere heat it by the process known as Joule heating. This heating, which produces up to 300 TW of power, is responsible for the strong infrared radiation from the Jovian aurorae and partially for the heating of the thermosphere of Jupiter. Auroral spots were found to correspond to the Galilean moons Io, Europa and Ganymede. They develop because the co-rotating plasma flow interacts with the moons and is slowed in their vicinity. The brightest spot belongs to Io, which is the main source of the plasma in the magnetosphere (see above). The Ionian auroral spot is thought to be related to Alfvén currents flowing from the Jovian to the Ionian ionosphere. Europa's is similar but much dimmer, because it has a more tenuous atmosphere and is a weaker plasma source. Europa's atmosphere is produced by sublimation of water ice from its surface, rather than the volcanic activity which produces Io's atmosphere. Ganymede has an internal magnetic field and a magnetosphere of its own. The interaction between this magnetosphere and that of Jupiter produces currents due to magnetic reconnection. The auroral spot associated with Callisto is probably similar to that of Europa, but had only been seen once as of June 2019. Normally, magnetic field lines connected to Callisto touch Jupiter's atmosphere very close to or along the main auroral oval, making it difficult to detect Callisto's auroral spot. Bright arcs and spots sporadically appear within the main ovals. These transient phenomena are thought to be related to interaction with either the solar wind or the dynamics of the outer magnetosphere. The magnetic field lines in this region are believed to be open or to map onto the magnetotail. Secondary ovals are sometimes observed inside the main oval and may be related to the boundary between open and closed magnetic field lines or to the polar cusps. The polar auroral emissions could be similar to those observed around Earth's poles: appearing when electrons are accelerated towards the planet by potential drops, during reconnection of the solar magnetic field with that of the planet. The regions within the main ovals emit most of the auroral X-rays. The spectrum of the auroral X-ray radiation consists of spectral lines of highly ionized oxygen and sulfur, which probably appear when energetic (hundreds of kiloelectronvolts) S and O ions precipitate into the polar atmosphere of Jupiter. The source of this precipitation remains unknown, but it is inconsistent with the theory that these magnetic field lines are open and connect to the solar wind. Jupiter at radio wavelengths Jupiter is a powerful source of radio waves in the spectral regions stretching from several kilohertz to tens of megahertz. Radio waves with frequencies of less than about 0.3 MHz (and thus wavelengths longer than 1 km) are called the Jovian kilometric radiation or KOM. Those with frequencies in the interval of 0.3–3 MHz (with wavelengths of 100–1000 m) are called the hectometric radiation or HOM, while emissions in the range 3–40 MHz (with wavelengths of 10–100 m) are referred to as the decametric radiation or DAM.
The latter radiation was the first to be observed from Earth, and its approximately 10-hour periodicity helped to identify it as originating from Jupiter. The strongest part of decametric emission, which is related to Io and to the Io–Jupiter current system, is called Io-DAM. The majority of these emissions are thought to be produced by a mechanism called "cyclotron maser instability", which develops close to the auroral regions. Electrons moving parallel to the magnetic field precipitate into the atmosphere, while those with a sufficient perpendicular velocity are reflected by the converging magnetic field. This results in an unstable velocity distribution, which spontaneously generates radio waves at the local electron cyclotron frequency. The electrons involved in the generation of radio waves are probably those carrying currents from the poles of the planet to the magnetodisk. The intensity of Jovian radio emissions usually varies smoothly with time. However, short and powerful bursts (S bursts) of emission are superimposed on the more gradual variations and can outshine all other components. The total emitted power of the DAM component is about 100 GW, while the power of all other HOM/KOM components is about 10 GW. In comparison, the total power of Earth's radio emissions is about 0.1 GW. Jupiter's radio and particle emissions are strongly modulated by its rotation, which makes the planet somewhat similar to a pulsar. This periodic modulation is probably related to asymmetries in the Jovian magnetosphere, which are caused by the tilt of the magnetic moment with respect to the rotational axis as well as by high-latitude magnetic anomalies. The physics governing Jupiter's radio emissions is similar to that of radio pulsars. They differ only in scale, and Jupiter can be considered a very small radio pulsar. In addition, Jupiter's radio emissions strongly depend on solar wind pressure and, hence, on solar activity. In addition to relatively long-wavelength radiation, Jupiter also emits synchrotron radiation (also known as the Jovian decimetric radiation or DIM radiation) with frequencies in the range of 0.1–15 GHz (wavelengths from 3 m to 2 cm). These emissions are from relativistic electrons trapped in the inner radiation belts of the planet. The energy of the electrons that contribute to the DIM emissions is from 0.1 to 100 MeV, while the leading contribution comes from the electrons with energy in the range 1–20 MeV. This radiation is well understood and has been used since the beginning of the 1960s to study the structure of the planet's magnetic field and radiation belts. The particles in the radiation belts originate in the outer magnetosphere and are adiabatically accelerated as they are transported into the inner magnetosphere. However, this requires a source population of moderately high-energy electrons (>> 1 keV), and the origin of this population is not well understood. Jupiter's magnetosphere ejects streams of high-energy electrons and ions (with energies up to tens of megaelectronvolts), which travel as far as Earth's orbit. These streams are highly collimated and vary with the rotational period of the planet like the radio emissions. In this respect as well, Jupiter shows similarity to a pulsar.
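Because the maser emission emerges near the local electron cyclotron frequency, the upper cutoff of the DAM spectrum constrains the magnetic field strength in the source region; this is the same reasoning behind the pre-spacecraft field estimate described in the Discovery section below. A minimal sketch of the relation f = eB/(2πme), using standard physical constants (this is only the frequency-to-field conversion, not a model of the emission process):

```python
import math

E_CHARGE = 1.602176634e-19      # C, elementary charge
M_ELECTRON = 9.1093837015e-31   # kg, electron mass

def field_from_cyclotron_frequency(f_hz):
    """Magnetic field (tesla) whose electron cyclotron frequency equals f_hz."""
    return 2 * math.pi * M_ELECTRON * f_hz / E_CHARGE

b = field_from_cyclotron_frequency(40e6)           # DAM emission extends up to ~40 MHz
print(f"{b * 1e3:.2f} mT = {b * 1e4:.1f} gauss")   # ~1.43 mT, i.e. roughly 14 gauss
```

A cutoff near 40 MHz therefore implies fields somewhat above a millitesla near the emission region, consistent with the early estimate quoted later in this article.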
Interaction with rings and moons Jupiter's extensive magnetosphere envelops its ring system and the orbits of all four Galilean satellites. Orbiting near the magnetic equator, these bodies serve as sources and sinks of magnetospheric plasma, while energetic particles from the magnetosphere alter their surfaces. The particles sputter off material from the surfaces and create chemical changes via radiolysis. The plasma's co-rotation with the planet means that the plasma preferentially interacts with the moons' trailing hemispheres, causing noticeable hemispheric asymmetries. Close to Jupiter, the planet's rings and small moons absorb high-energy particles (energy above 10 keV) from the radiation belts. This creates noticeable gaps in the belts' spatial distribution and affects the decimetric synchrotron radiation. In fact, the existence of Jupiter's rings was first hypothesized on the basis of data from the Pioneer 11 spacecraft, which detected a sharp drop in the number of high-energy ions close to the planet. The planetary magnetic field strongly influences the motion of sub-micrometer ring particles as well, which acquire an electrical charge under the influence of solar ultraviolet radiation. Their behavior is similar to that of co-rotating ions. Resonant interactions between the co-rotation and the particles' orbital motion have been used to explain the creation of Jupiter's innermost halo ring (located between 1.4 and 1.71 RJ). This ring consists of sub-micrometer particles on highly inclined and eccentric orbits. The particles originate in the main ring; however, when they drift toward Jupiter, their orbits are modified by the strong 3:2 Lorentz resonance located at 1.71 RJ, which increases their inclinations and eccentricities. Another 2:1 Lorentz resonance at 1.4 RJ defines the inner boundary of the halo ring. All Galilean moons have thin atmospheres with surface pressures in the range 0.01–1 nbar, which in turn support substantial ionospheres with electron densities in the range of 1,000–10,000 cm−3. The co-rotational flow of cold magnetospheric plasma is partially diverted around them by the currents induced in their ionospheres, creating wedge-shaped structures known as Alfvén wings. The interaction of the large moons with the co-rotational flow is similar to the interaction of the solar wind with non-magnetized planets such as Venus, although the co-rotational speed is usually subsonic (the speeds vary from 74 to 328 km/s), which prevents the formation of a bow shock. The pressure from the co-rotating plasma continuously strips gases from the moons' atmospheres (especially from that of Io), and some of these atoms are ionized and brought into co-rotation. This process creates gas and plasma tori in the vicinity of the moons' orbits, with the Ionian torus being the most prominent. In effect, the Galilean moons (mainly Io) serve as the principal plasma sources in Jupiter's inner and middle magnetosphere. Meanwhile, the energetic particles are largely unaffected by the Alfvén wings and have free access to the moons' surfaces (except Ganymede's). The icy Galilean moons, Europa, Ganymede and Callisto, all generate induced magnetic moments in response to changes in Jupiter's magnetic field. These varying magnetic moments create dipole magnetic fields around them, which act to compensate for changes in the ambient field. The induction is thought to take place in subsurface layers of salty water, which are likely to exist in all of Jupiter's large icy moons. These underground oceans can potentially harbor life, and evidence for their presence was one of the most important discoveries made in the 1990s by spacecraft.
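The 3:2 and 2:1 Lorentz resonances mentioned above for the halo ring correspond to simple commensurabilities between a grain's orbital period and Jupiter's rotation period. A minimal check using a purely Keplerian orbital period (standard values of RJ, GM, and the 9.925 h spin period are assumed; the electromagnetic forces that actually perturb the charged grains are ignored):

```python
import math

R_J = 7.1492e7            # m, Jupiter's equatorial radius (assumed standard value)
GM_J = 1.26687e17         # m^3/s^2, Jupiter's gravitational parameter (assumed)
SPIN_PERIOD_H = 9.925     # h, Jupiter's rotation period (assumed)

def kepler_period_hours(r_rj):
    """Keplerian orbital period (hours) at r_rj Jovian radii."""
    r = r_rj * R_J
    return 2 * math.pi * math.sqrt(r ** 3 / GM_J) / 3600

for r_rj in (1.71, 1.40):
    t_orb = kepler_period_hours(r_rj)
    print(r_rj, round(t_orb, 2), round(SPIN_PERIOD_H / t_orb, 2))
# 1.71 RJ -> ~6.6 h orbit, spin/orbit ratio ~1.5 (the 3:2 resonance)
# 1.40 RJ -> ~4.9 h orbit, spin/orbit ratio ~2.0 (the 2:1 resonance)
```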
The interaction of the Jovian magnetosphere with Ganymede, which has an intrinsic magnetic moment, differs from its interaction with the non-magnetized moons. Ganymede's internal magnetic field carves a cavity inside Jupiter's magnetosphere with a diameter of approximately two Ganymede diameters, creating a mini-magnetosphere within Jupiter's magnetosphere. Ganymede's magnetic field diverts the co-rotating plasma flow around its magnetosphere. It also protects the moon's equatorial regions, where the field lines are closed, from energetic particles. The latter can still freely strike Ganymede's poles, where the field lines are open. Some of the energetic particles are trapped near the equator of Ganymede, creating mini-radiation belts. Energetic electrons entering its thin atmosphere are responsible for the observed Ganymedian polar aurorae. Charged particles have a considerable influence on the surface properties of the Galilean moons. Plasma originating from Io carries sulfur and sodium ions farther from the planet, where they are implanted preferentially on the trailing hemispheres of Europa and Ganymede. On Callisto, however, for unknown reasons, sulfur is concentrated on the leading hemisphere. Plasma may also be responsible for darkening the moons' trailing hemispheres (again, except Callisto's). Energetic electrons and ions, with the flux of the latter being more isotropic, bombard surface ice, sputtering atoms and molecules off and causing radiolysis of water and other chemical compounds. The energetic particles break water into oxygen and hydrogen, maintaining the thin oxygen atmospheres of the icy moons (since the hydrogen escapes more rapidly). The compounds produced radiolytically on the surfaces of the Galilean moons also include ozone and hydrogen peroxide. If organics or carbonates are present, carbon dioxide, methanol and carbonic acid can be produced as well. In the presence of sulfur, likely products include sulfur dioxide, hydrogen disulfide and sulfuric acid. Oxidants produced by radiolysis, like oxygen and ozone, may be trapped inside the ice and carried downward to the oceans over geologic time intervals, thus serving as a possible energy source for life. Discovery The first evidence for the existence of Jupiter's magnetic field came in 1955, with the discovery of the decametric radio emission or DAM. As the DAM's spectrum extended up to 40 MHz, astronomers concluded that Jupiter must possess a magnetic field with a maximum strength of above 1 millitesla (10 gauss). In 1959, observations in the microwave part of the electromagnetic (EM) spectrum (0.1–10 GHz) led to the discovery of the Jovian decimetric radiation (DIM) and the realization that it was synchrotron radiation emitted by relativistic electrons trapped in the planet's radiation belts. These synchrotron emissions were used to estimate the number and energy of the electrons around Jupiter and led to improved estimates of the magnetic moment and its tilt. By 1973 the magnetic moment was known within a factor of two, whereas the tilt was correctly estimated at about 10°. The modulation of Jupiter's DAM by Io (the so-called Io-DAM) was discovered in 1964, and allowed Jupiter's rotation period to be precisely determined. The definitive discovery of the Jovian magnetic field occurred in December 1973, when the Pioneer 10 spacecraft flew near the planet. Exploration after 1970 As of 2009, a total of eight spacecraft had flown around Jupiter, and all have contributed to the present knowledge of the Jovian magnetosphere.
The first space probe to reach Jupiter was Pioneer 10 in December 1973, which passed within 2.9 RJ of the center of the planet. Its twin Pioneer 11 visited Jupiter a year later, traveling along a highly inclined trajectory and approaching the planet as close as 1.6 RJ. Pioneer 10 provided the best coverage available of the inner magnetic field as it passed through the inner radiation belts within 20 RJ, receiving an integrated dose of 200,000 rads from electrons and 56,000 rads from protons (for a human, a whole-body dose of 500 rads would be fatal). The level of radiation at Jupiter was ten times more powerful than Pioneer's designers had predicted, leading to fears that the probe would not survive; however, with a few minor glitches, it managed to pass through the radiation belts, saved in large part by the fact that Jupiter's magnetosphere had "wobbled" slightly upward at that point, moving away from the spacecraft. However, Pioneer 11 did lose most images of Io, as the radiation had caused its imaging photopolarimeter to receive a number of spurious commands. The subsequent and far more technologically advanced Voyager spacecraft had to be redesigned to cope with the massive radiation levels. Voyagers 1 and 2 arrived at Jupiter in 1979–1980 and traveled almost in its equatorial plane. Voyager 1, which passed within 5 RJ of the planet's center, was the first to encounter the Io plasma torus. It received a radiation dose one thousand times the lethal level for humans, the damage resulting in serious degradation of some high-resolution images of Io and Ganymede. Voyager 2 passed within 10 RJ and discovered the current sheet in the equatorial plane. The next probe to approach Jupiter was Ulysses in 1992, which investigated the planet's polar magnetosphere. The Galileo spacecraft, which orbited Jupiter from 1995 to 2003, provided comprehensive coverage of Jupiter's magnetic field near the equatorial plane at distances up to 100 RJ. The regions studied included the magnetotail and the dawn and dusk sectors of the magnetosphere. While Galileo successfully survived in the harsh radiation environment of Jupiter, it still experienced a few technical problems. In particular, the spacecraft's gyroscopes often exhibited increased errors. Several times electrical arcs occurred between rotating and non-rotating parts of the spacecraft, causing it to enter safe mode, which led to total loss of the data from the 16th, 18th and 33rd orbits. The radiation also caused phase shifts in Galileo's ultra-stable quartz oscillator. When the Cassini spacecraft flew by Jupiter in 2000, it conducted coordinated measurements with Galileo. New Horizons passed close to Jupiter in 2007, carrying out a unique investigation of the Jovian magnetotail, traveling as far as 2500 RJ along its length. In July 2016, Juno was inserted into orbit around Jupiter; its scientific objectives include exploration of Jupiter's polar magnetosphere. The coverage of Jupiter's magnetosphere remains much poorer than that of Earth's magnetic field. Further study is important for understanding the Jovian magnetosphere's dynamics. In 2003, NASA conducted a conceptual study called "Human Outer Planets Exploration" (HOPE) regarding the future human exploration of the outer Solar System. The possibility was mooted of building a surface base on Callisto, because of the low radiation levels at the moon's distance from Jupiter and its geological stability.
Callisto is the only one of Jupiter's Galilean satellites for which human exploration is feasible. The levels of ionizing radiation on Io, Europa and Ganymede are inimical to human life, and adequate protective measures have yet to be devised. Exploration after 2010 The Juno New Frontiers mission to Jupiter was launched in 2011 and arrived at Jupiter in 2016. It carries a suite of instruments designed to better understand the magnetosphere, including a magnetometer as well as a detector for plasma and radio waves called Waves. The Jovian Auroral Distributions Experiment (JADE) instrument should also help to understand the magnetosphere. Juno revealed a planetary magnetic field rich in spatial variation, possibly due to a relatively large dynamo radius. The most surprising observation until late 2017 was the absence of the expected magnetic signature of intense field-aligned currents (Birkeland currents) associated with the main aurora. One of the goals of the European Space Agency's Jupiter Icy Moons Explorer (JUICE) mission, launched in April 2023, is to understand the magnetic field of Ganymede and how it affects Jupiter. Tianwen-4 is a proposed Chinese mission that will either explore the moon Callisto or gather more information on Io.
Physical sciences
Solar System
Astronomy
296838
https://en.wikipedia.org/wiki/Universe%20%28mathematics%29
Universe (mathematics)
In mathematics, and particularly in set theory, category theory, type theory, and the foundations of mathematics, a universe is a collection that contains all the entities one wishes to consider in a given situation. In set theory, universes are often classes that contain (as elements) all sets for which one hopes to prove a particular theorem. These classes can serve as inner models for various axiomatic systems such as ZFC or Morse–Kelley set theory. Universes are of critical importance to formalizing concepts in category theory inside set-theoretical foundations. For instance, the canonical motivating example of a category is Set, the category of all sets, which cannot be formalized in a set theory without some notion of a universe. In type theory, a universe is a type whose elements are types. In a specific context Perhaps the simplest version is that any set can be a universe, so long as the object of study is confined to that particular set. If the object of study consists of the real numbers, then the real line R, which is the real number set, could be the universe under consideration. Implicitly, this is the universe that Georg Cantor was using when he first developed modern naive set theory and cardinality in the 1870s and 1880s in applications to real analysis. The only sets that Cantor was originally interested in were subsets of R. This concept of a universe is reflected in the use of Venn diagrams. In a Venn diagram, the action traditionally takes place inside a large rectangle that represents the universe U. One generally says that sets are represented by circles; but these sets can only be subsets of U. The complement of a set A is then given by that portion of the rectangle outside of A's circle. Strictly speaking, this is the relative complement U \ A of A relative to U; but in a context where U is the universe, it can be regarded as the absolute complement AC of A. Similarly, there is a notion of the nullary intersection, that is the intersection of zero sets (meaning no sets, not null sets). Without a universe, the nullary intersection would be the set of absolutely everything, which is generally regarded as impossible; but with the universe in mind, the nullary intersection can be treated as the set of everything under consideration, which is simply U. These conventions are quite useful in the algebraic approach to basic set theory, based on Boolean lattices. Except in some non-standard forms of axiomatic set theory (such as New Foundations), the class of all sets is not a Boolean lattice (it is only a relatively complemented lattice). In contrast, the class of all subsets of U, called the power set of U, is a Boolean lattice. The absolute complement described above is the complement operation in the Boolean lattice; and U, as the nullary intersection, serves as the top element (or nullary meet) in the Boolean lattice. Then De Morgan's laws, which deal with complements of meets and joins (which are unions in set theory), apply, and apply even to the nullary meet and the nullary join (which is the empty set).
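As a concrete illustration of complements taken relative to a universe, the following small Python check works inside a finite universe U; the particular sets are arbitrary examples, not anything from the text:

```python
# Inside a universe U, the "absolute" complement is just the relative complement U \ A,
# and De Morgan's laws can be checked directly on examples.
U = set(range(10))      # the universe under consideration (illustrative choice)
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

def complement(s):
    """Absolute complement with respect to the universe U, i.e. U \\ s."""
    return U - s

assert complement(A | B) == complement(A) & complement(B)   # De Morgan for the join
assert complement(A & B) == complement(A) | complement(B)   # De Morgan for the meet

# The nullary join is the empty set, and its complement is the nullary meet,
# "everything under consideration", which is simply U.
assert complement(set()) == U
```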
In ordinary mathematics However, once subsets of a given set X (in Cantor's case, X = R) are considered, the universe may need to be a set of subsets of X. (For example, a topology on X is a set of subsets of X.) The various sets of subsets of X will not themselves be subsets of X but will instead be subsets of PX, the power set of X. This may be continued; the object of study may next consist of such sets of subsets of X, and so on, in which case the universe will be P(PX). In another direction, the binary relations on X (subsets of the Cartesian product X × X) may be considered, or functions from X to itself, requiring universes like P(X × X) or X^X. Thus, even if the primary interest is X, the universe may need to be considerably larger than X. Following the above ideas, one may want the superstructure over X as the universe. This can be defined by structural recursion as follows: Let S0X be X itself. Let S1X be the union of X and PX. Let S2X be the union of S1X and P(S1X). In general, let Sn+1X be the union of SnX and P(SnX). Then the superstructure over X, written SX, is the union of S0X, S1X, S2X, and so on; that is, SX = S0X ∪ S1X ∪ S2X ∪ ⋯. No matter what set X is the starting point, the empty set {} will belong to S1X. The empty set is the von Neumann ordinal [0]. Then {[0]}, the set whose only element is the empty set, will belong to S2X; this is the von Neumann ordinal [1]. Similarly, {[1]} will belong to S3X, and thus so will {[0],[1]}, as the union of {[0]} and {[1]}; this is the von Neumann ordinal [2]. Continuing this process, every natural number is represented in the superstructure by its von Neumann ordinal. Next, if x and y belong to the superstructure, then so does {{x},{x,y}}, which represents the ordered pair (x,y). Thus the superstructure will contain the various desired Cartesian products. Then the superstructure also contains functions and relations, since these may be represented as subsets of Cartesian products. The process also gives ordered n-tuples, represented as functions whose domain is the von Neumann ordinal [n], and so on. So if the starting point is just X = {}, a great many of the sets needed for mathematics appear as elements of the superstructure over {}. But each of the elements of S{} will be a finite set. Each of the natural numbers belongs to it, but the set N of all natural numbers does not (although it is a subset of S{}). In fact, the superstructure over {} consists of all of the hereditarily finite sets. As such, it can be considered the universe of finitist mathematics. Speaking anachronistically, one could suggest that the 19th-century finitist Leopold Kronecker was working in this universe; he believed that each natural number existed but that the set N (a "completed infinity") did not. However, S{} is unsatisfactory for ordinary mathematicians (who are not finitists), because even though N may be available as a subset of S{}, still the power set of N is not. In particular, arbitrary sets of real numbers are not available. So it may be necessary to start the process all over again and form S(S{}). However, to keep things simple, one can take the set N of natural numbers as given and form SN, the superstructure over N. This is often considered the universe of ordinary mathematics. The idea is that all of the mathematics that is ordinarily studied refers to elements of this universe. For example, any of the usual constructions of the real numbers (say by Dedekind cuts) belongs to SN. Even non-standard analysis can be done in the superstructure over a non-standard model of the natural numbers.
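The finite stages of the superstructure are small enough to compute directly, which makes the appearance of the von Neumann ordinals easy to see. A minimal sketch using Python frozensets (purely illustrative; the helper names are not from the text):

```python
from itertools import combinations

def powerset(s):
    """All subsets of a frozenset, each returned as a frozenset."""
    items = list(s)
    return {frozenset(c) for r in range(len(items) + 1) for c in combinations(items, r)}

def superstructure_stages(X, n):
    """Return [S0X, S1X, ..., SnX], where S(k+1)X is the union of SkX and P(SkX)."""
    stages = [frozenset(X)]
    for _ in range(n):
        stages.append(stages[-1] | powerset(stages[-1]))
    return stages

stages = superstructure_stages(frozenset(), 3)   # superstructure stages over X = {}
zero = frozenset()              # the von Neumann ordinal [0]
one = frozenset({zero})         # [1] = {[0]}
two = frozenset({zero, one})    # [2] = {[0], [1]}
print(zero in stages[1], one in stages[2], two in stages[3])   # True True True
```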
There is a slight shift in philosophy from the previous section, where the universe was any set U of interest. There, the sets being studied were subsets of the universe; now, they are members of the universe. Thus although P(SX) is a Boolean lattice, what is relevant is that SX itself is not. Consequently, it is rare to apply the notions of Boolean lattices and Venn diagrams directly to the superstructure universe as they were to the power-set universes of the previous section. Instead, one can work with the individual Boolean lattices PA, where A is any relevant set belonging to SX; then PA is a subset of SX (and in fact belongs to SX). In Cantor's case X = R in particular, arbitrary sets of real numbers are not available, so there it may indeed be necessary to start the process all over again. In set theory It is possible to give a precise meaning to the claim that SN is the universe of ordinary mathematics; it is a model of Zermelo set theory, the axiomatic set theory originally developed by Ernst Zermelo in 1908. Zermelo set theory was successful precisely because it was capable of axiomatising "ordinary" mathematics, fulfilling the programme begun by Cantor over 30 years earlier. But Zermelo set theory proved insufficient for the further development of axiomatic set theory and other work in the foundations of mathematics, especially model theory. For a dramatic example, the description of the superstructure process above cannot itself be carried out in Zermelo set theory. The final step, forming S as an infinitary union, requires the axiom of replacement, which was added to Zermelo set theory in 1922 to form Zermelo–Fraenkel set theory, the set of axioms most widely accepted today. So while ordinary mathematics may be done in SN, discussion of SN goes beyond the "ordinary", into metamathematics. But if high-powered set theory is brought in, the superstructure process above reveals itself to be merely the beginning of a transfinite recursion. Going back to X = {}, the empty set, and introducing the (standard) notation Vi for Si{}, one has V0 = {}, V1 = P{}, and so on as before. But what used to be called "superstructure" is now just the next item on the list: Vω, where ω is the first infinite ordinal number. This can be extended to arbitrary ordinal numbers: taking Vi to be the union of the power sets P(Vj) over all ordinals j < i defines Vi for any ordinal number i. The union of all of the Vi is the von Neumann universe V. Every individual Vi is a set, but their union V is a proper class. The axiom of foundation, which was added to ZF set theory at around the same time as the axiom of replacement, says that every set belongs to V. Related notions include Kurt Gödel's constructible universe L and the axiom of constructibility. Inaccessible cardinals yield models of ZF and sometimes additional axioms, and are equivalent to the existence of the Grothendieck universe set. In predicate calculus In an interpretation of first-order logic, the universe (or domain of discourse) is the set of individuals (individual constants) over which the quantifiers range. A proposition such as ∀x (x² ≠ 2) is ambiguous if no domain of discourse has been identified. In one interpretation, the domain of discourse could be the set of real numbers; in another interpretation, it could be the set of natural numbers. If the domain of discourse is the set of real numbers, the proposition is false, with x = √2 as a counterexample; if the domain is the set of naturals, the proposition is true, since 2 is not the square of any natural number. In category theory There is another approach to universes which is historically connected with category theory. This is the idea of a Grothendieck universe. Roughly speaking, a Grothendieck universe is a set inside which all the usual operations of set theory can be performed.
This version of a universe is defined to be any set U for which the following axioms hold: x ∈ u ∈ U implies x ∈ U; u ∈ U and v ∈ U imply that {u,v}, (u,v), and u × v are elements of U; x ∈ U implies that the power set Px and the union ∪x are elements of U; ω ∈ U (here ω = {0, 1, 2, ...} is the set of all finite ordinals); and if f : a → b is a surjective function with a ∈ U and b ⊆ U, then b ∈ U. The most common use of a Grothendieck universe U is to take U as a replacement for the category of all sets. One says that a set S is U-small if S ∈ U, and U-large otherwise. The category U-Set of all U-small sets has as objects all U-small sets and as morphisms all functions between these sets. Both the object set and the morphism set are sets, so it becomes possible to discuss the category of "all" sets without invoking proper classes. Then it becomes possible to define other categories in terms of this new category. For example, the category of all U-small categories is the category of all categories whose object set and whose morphism set are in U. Then the usual arguments of set theory are applicable to the category of all categories, and one does not have to worry about accidentally talking about proper classes. Because Grothendieck universes are extremely large, this suffices in almost all applications. Often when working with Grothendieck universes, mathematicians assume the Axiom of Universes: "For any set x, there exists a universe U such that x ∈ U." The point of this axiom is that any set one encounters is then U-small for some U, so any argument done in a general Grothendieck universe can be applied. This axiom is closely related to the existence of strongly inaccessible cardinals. In type theory In some type theories, especially in systems with dependent types, types themselves can be regarded as terms. There is a type called the universe (often denoted U) which has types as its elements. To avoid paradoxes such as Girard's paradox (an analogue of Russell's paradox for type theory), type theories are often equipped with a countably infinite hierarchy of such universes, with each universe being a term of the next one. There are at least two kinds of universes that one can consider in type theory: Russell-style universes (named after Bertrand Russell) and Tarski-style universes (named after Alfred Tarski).
Mathematics
Set theory
null
296928
https://en.wikipedia.org/wiki/Domestic%20turkey
Domestic turkey
The domestic turkey (Meleagris gallopavo domesticus) is a large fowl, one of the two species in the genus Meleagris and the same species as the wild turkey. Although turkey domestication was thought to have occurred in central Mesoamerica at least 2,000 years ago, recent research suggests a possible second domestication event in the area that is now the southwestern United States between 200 BC and 500 AD. However, all of the main domestic turkey varieties today descend from the turkey raised in central Mexico that was subsequently imported into Europe by the Spanish in the 16th century. The domestic turkey is a popular form of poultry. It is raised throughout temperate parts of the world, partially because industrialized farming has made it very cheap for the amount of meat it produces. Female domestic turkeys are called hens, and the chicks are poults or turkeylings. In Canada and the United States, male turkeys are called toms. In the United Kingdom and Ireland, they are stags. The great majority of domestic turkeys are bred to have white feathers because their pin feathers are less visible when the carcass is dressed, although brown or bronze-feathered varieties are also raised. The fleshy protuberance atop the beak is the snood and the one attached to the underside of the beak is known as a wattle. The English-language name for this species results from an early misidentification of the bird with an unrelated species which was imported to Europe through the country of Turkey. The Latin species name means "chicken peacock". History The modern domestic turkey is descended from the South Mexican subspecies (the nominate subspecies M. g. gallopavo) of wild turkey, found in Central Mexico in a region bounded by the present Mexican states of Jalisco to the northwest, Guerrero to the southwest, and Veracruz to the east. Ancient Mesoamericans domesticated this subspecies, using its meat and eggs as major sources of protein and employing its feathers extensively for decorative purposes. The Aztecs associated the turkey with their trickster god Tezcatlipoca, perhaps because of its perceived humorous behavior. Domestic turkeys were taken to Europe by the Spanish. Many distinct breeds were developed in Europe (e.g. Spanish Black, Royal Palm). In the early 20th century, many advances were made in the breeding of turkeys, resulting in breeds such as the Beltsville Small White. The 16th-century English navigator William Strickland is generally credited with introducing the turkey into England. His family coat of arms – showing a turkey cock as the family crest – is among the earliest known European depictions of a turkey. English farmer Thomas Tusser notes the turkey being among farmer's fare at Christmas in 1573. The domestic turkey was sent from England to Jamestown, Virginia in 1608. A document written in 1584 lists supplies to be furnished to future colonies in the New World; "turkies, male and female". Prior to the late 19th century, turkey was something of a luxury in the UK, with goose or beef a more common Christmas dinner among the working classes. In Charles Dickens' A Christmas Carol (1843), Bob Cratchit had a goose before Scrooge bought him a turkey. Turkey production in the UK was centered in East Anglia, using two breeds, the Norfolk Black and the Norfolk Bronze (also known as Cambridge Bronze). These would be driven as flocks, after shoeing, down to markets in London from the 17th century onwards – the breeds having arrived in the early 16th century via Spain. 
Intensive farming of turkeys from the late 1940s dramatically cut the price, making it more affordable for the working classes. With the availability of refrigeration, whole turkeys could be shipped frozen to distant markets. Later advances in disease control increased production even more. Advances in shipping, changing consumer preferences and the proliferation of commercial poultry plants has made fresh turkey inexpensive as well as readily available. Recent genome analysis has provided researchers with the opportunity to determine the evolutionary history of domestic turkeys, and their relationship to other domestic fowl. Behavior Young domestic turkeys readily fly short distances, perch and roost. These behaviours become less frequent as the birds mature, but adults will readily climb on objects such as bales of straw. Young birds perform spontaneous, frivolous running ('frolicking') which has all the appearance of play. Commercial turkeys show a wide diversity of behaviours including 'comfort' behaviours such as wing-flapping, feather ruffling, leg stretching and dust-bathing. Turkeys are highly social and become very distressed when isolated. Many of their behaviours are socially facilitated; i.e., expression of a behaviour by one animal increases the tendency for this behaviour to be performed by others. Adults can recognise 'strangers' and placing any alien turkey into an established group will almost certainly result in that individual being attacked, sometimes fatally. Turkeys are highly vocal, and 'social tension' within the group can be monitored by the birds' vocalisations. A high-pitched trill indicates the birds are becoming aggressive which can develop into intense sparring where opponents leap at each other with the large, sharp talons, and try to peck or grasp the head of each other. Aggression increases in frequency and severity as the birds mature. Maturing males spend a considerable proportion of their time sexually displaying. This is very similar to that of the wild turkey and involves fanning the tail feathers, drooping the wings and erecting all body feathers, including the 'beard' (a tuft of black, modified hair-like feathers on the centre of the breast). The skin of the head, neck and caruncles (fleshy nodules) becomes bright blue and red, and the snood (an erectile appendage on the forehead) elongates, the birds 'sneeze' at regular intervals, followed by a rapid vibration of their tail feathers. Throughout, the birds strut slowly about, with the neck arched backward, their breasts thrust forward and emitting their characteristic 'gobbling' call. Size and weight The domestic turkey is the eighth largest living bird species in terms of maximum mass at 39 kg (86 lbs). Due to their extreme size differences, domestic turkeys are semi-flightless, as younger or smaller specimens are still capable of short-distance flight, whereas the largest individuals are completely flightless and terrestrial. Turkey breeds The Broad Breasted White is the commercial turkey of choice for large scale industrial turkey farms, and consequently is the most consumed variety of the bird. Usually the turkey to receive a "presidential pardon", a U.S. custom, is a Broad Breasted White. The Broad Breasted Bronze is another commercially developed strain of table bird. The Standard Bronze looks much like the Broad Breasted Bronze, except that it is single breasted, and can naturally breed. The Bourbon Red turkey is a smaller, non-commercial breed with dark reddish feathers with white markings. 
Slate, or Blue Slate, turkeys are a very rare breed with gray-blue feathers. The Black ("Spanish Black", "Norfolk Black") has very dark plumage with a green sheen. The Narragansett Turkey is a popular heritage breed named after Narragansett Bay in New England. The Chocolate is a rarer heritage breed with markings similar to a Black Spanish, but light brown instead of black in color; it was common in the Southern U.S. and France before the Civil War. The Beltsville Small White is a small heritage breed, whose development started in 1934. The breed was introduced in 1941 and was admitted to the APA Standard in 1951. Although slightly bigger and broader than the Midget White, both are often mislabeled. The Midget White is a smaller heritage breed. Commercial production In commercial production, breeder farms supply eggs to hatcheries. After 28 days of incubation, the hatched poults are sexed and delivered to the grow-out farms; hens are raised separately from toms because of different growth rates. In the UK, it is common to rear chicks in the following way. Between one and seven days of age, chicks are placed into small circular brooding pens to ensure they encounter food and water. To encourage feeding, they may be kept under constant light for the first 48 hours. To assist thermoregulation, the air temperature is kept warm for the first three days, then lowered by approximately 3 °C (5.4 °F) every two days until about 37 days of age, and infrared heaters are usually provided for the first few days. Whilst in the pens, feed is made widely accessible by scattering it on sheets of paper in addition to being available in feeders. After several days, the pens are removed, allowing the birds access to the entire rearing shed, which may contain tens of thousands of birds. The birds remain there for several weeks, after which they are transported to another unit. The vast majority of turkeys are reared indoors in purpose-built or modified buildings, of which there are many types. Some types have slatted walls to allow ventilation, but many have solid walls and no windows to allow artificial lighting manipulations to optimise production. The buildings can be very large (converted aircraft hangars are sometimes used) and may contain tens of thousands of birds as a single flock. The floor substrate is usually deep litter, e.g. wood shavings, which relies upon the controlled build-up of a microbial flora and requires skilful management. Ambient temperatures for adult domestic turkeys are usually maintained within a controlled range. High temperatures should be avoided because the high metabolic rate of turkeys (up to 69 W/bird) makes them susceptible to heat stress, exacerbated by high stocking densities. Commercial turkeys are kept under a variety of lighting schedules, e.g. continuous light, long photoperiods (23 h), or intermittent lighting, to encourage feeding and accelerate growth. Light intensity is usually low (e.g. less than one lux) to reduce feather pecking. Rations generally include corn and soybean meal, with added vitamins and minerals, and are adjusted for protein, carbohydrate and fat based on age and nutrient requirements. Hens are slaughtered at about 14–16 weeks and toms at about 18–20 weeks of age, when they can weigh considerably more than a mature male wild turkey. Welfare concerns Stocking density is an issue in the welfare of commercial turkeys, and high densities are a major animal welfare concern.
Permitted stocking densities for turkeys reared indoors vary according to geography and animal welfare farm assurance schemes. For example, in Germany, there is a voluntary maximum of 52 kg/m2 and 58 kg/m2 for males and females respectively. In the UK, the RSPCA Freedom Foods assurance scheme reduces the permissible stocking density to 25 kg/m2 for turkeys reared indoors. Turkeys maintained at commercial stocking densities (8 birds/m2; 61 kg/m2) exhibit increased welfare problems such as increases in gait abnormalities, hip and foot lesions, and bird disturbances, and decreased bodyweight compared with lower stocking densities. Turkeys reared at 8 birds/m2 have a higher incidence of hip lesions and foot pad dermatitis than those reared at 6.5 or 5.0 birds/m2. Insufficient space may lead to an increased risk of injuries such as broken wings, caused by hitting the pen walls or other turkeys during aggressive encounters, and can also lead to heat stress. The problems of small space allowance are exacerbated by the major influence of social facilitation on the behaviour of turkeys. If turkeys are to feed, drink, dust-bathe, etc., simultaneously, then to avoid causing frustration, resources and space must be available in large quantities. Lighting manipulations used to optimise production can compromise welfare. Long photoperiods combined with low light intensity can result in blindness from buphthalmia (distortions of the eye morphology) or retinal detachment. Feather pecking occurs frequently amongst commercially reared turkeys and can begin at 1 day of age. This behaviour is considered to be re-directed foraging behaviour, caused by providing poultry with an impoverished foraging environment. To reduce feather pecking, turkeys are often beak-trimmed. Ultraviolet-reflective markings appear on young birds at the same time as feather pecking becomes targeted toward these areas, indicating a possible link. Commercially reared turkeys also perform head-pecking, which becomes more frequent as they sexually mature. When this occurs in small enclosures or environments with few opportunities to escape, the outcome is often fatal and rapid. Frequent monitoring is therefore essential, particularly of males approaching maturity. Injuries to the head receive considerable attention from other birds, and head-pecking often occurs after a relatively minor injury has been received during a fight or when a lying bird has been trodden upon and scratched by another. Individuals being re-introduced after separation are often immediately attacked again. Fatal head-pecking can occur even in small (10 birds), stable groups. Commercial turkeys are normally reared in single-sex flocks. If a male is inadvertently placed in a female flock, he may be aggressively victimised (hence the term 'henpecked'). A female placed in a male flock will be repeatedly mated and is highly likely to be injured by being trampled.
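Because the stocking densities quoted above are given both as birds per square metre and as kilograms per square metre, it can help to convert between the two. A rough sketch using only the figures in this section (the implied bird mass of about 7.6 kg is taken from the 8 birds/m2 at 61 kg/m2 pairing and will of course vary with age and sex):

```python
# Convert a mass-based stocking limit (kg/m^2) into birds per square metre,
# using the bird mass implied by the commercial figures quoted above.
BIRD_MASS_KG = 61 / 8        # ~7.6 kg per bird, inferred from 8 birds/m^2 at 61 kg/m^2

def birds_per_square_metre(kg_per_m2, bird_mass_kg=BIRD_MASS_KG):
    return kg_per_m2 / bird_mass_kg

print(round(birds_per_square_metre(58), 1))   # German voluntary maximum for one sex -> ~7.6 birds/m^2
print(round(birds_per_square_metre(52), 1))   # the other voluntary maximum -> ~6.8 birds/m^2
print(round(birds_per_square_metre(25), 1))   # RSPCA indoor scheme -> ~3.3 birds/m^2
```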
Breeders' meat is too tough for roasting, and is mostly used to make processed meats. Waste products Large quantities of poultry feathers are produced every year by the poultry industry. Most are ground into a protein source for ruminant animal feed; ruminants are able to digest the keratin of which feathers are composed. Researchers at the United States Department of Agriculture (USDA) have patented a method of removing the stiff quill from the fibers which make up the feather. As this is a potential supply of natural fibers, research has been conducted at Philadelphia University's School of Engineering and Textiles to determine textile applications for feather fibers. Turkey feather fibers have been blended with nylon and spun into yarn, and then used for knitting. The yarns were tested for strength, while the fabrics were evaluated as insulation materials. In the case of the yarns, as the percentage of turkey feather fibers increased, the strength decreased. In fabric form, as the percentage of turkey feather fibers increased, the heat retention capability of the fabric increased. Turkeys as food Approximately 620 million turkeys are slaughtered each year for meat worldwide. Turkeys have traditionally been eaten as the main course of Christmas feasts in much of the English-speaking world (stuffed turkey) since appearing in England in the 16th century, as well as for Thanksgiving in the United States and Canada. While eating turkey was once mainly restricted to special occasions such as these, turkey is now eaten year-round and forms a regular part of many diets. Turkeys are sold sliced and ground, as well as "whole" in a manner similar to chicken, with the head, feet, and feathers removed. Frozen whole turkeys remain popular. Sliced turkey is frequently used as a sandwich meat or served as cold cuts; in some cases, where recipes call for chicken, turkey can be used as a substitute. Additionally, ground turkey is frequently marketed as a healthy ground beef substitute. Without careful preparation, cooked turkey may end up less moist than other poultry meats, such as chicken or duck. The breast of the turkey can be dipped in breadcrumbs and served as an alternative to chicken nuggets. Wild turkeys, while technically the same species as domestic turkeys, have a very different taste from farm-raised turkeys. In contrast to domestic turkeys, almost all wild turkey meat is "dark" (even the breast) and more intensely flavored. The flavor can also vary seasonally with changes in available forage, often leaving wild turkey meat with a gamier flavor in late summer due to the greater number of insects in its diet over the preceding months. Wild turkeys that have fed predominantly on grass and grain have a milder flavor. Older heritage breeds also differ in flavor. Unlike chicken, duck, and quail eggs, turkey eggs are not commonly sold as food due to the high demand for whole turkeys and the lower output of turkey eggs as compared with other fowl. The value of a single turkey egg is estimated to be about US$3.50 on the open market, substantially more than a single carton of one dozen chicken eggs. White turkey meat is often considered healthier than dark meat because of its lower fat content, but the nutritional differences are small. Although turkey is reputed to cause sleepiness, holiday dinners are commonly large meals served with carbohydrates, fats, and alcohol in a relaxed atmosphere, all of which are bigger contributors to post-meal sleepiness than the tryptophan in turkey.
Cooking Both fresh and frozen turkeys are used for cooking; as with most foods, fresh turkeys are generally preferred, although they cost more. Around holiday seasons, high demand for fresh turkeys often makes them difficult to purchase without ordering in advance. For the frozen variety, the large size of the turkeys typically used for consumption makes defrosting them a major endeavor: a typically sized turkey will take several days to properly defrost. Turkeys are usually baked or roasted in an oven for several hours, often while the cook prepares the remainder of the meal. Sometimes, a turkey is brined before roasting to enhance flavor and moisture content. This is necessary because the dark meat requires a higher temperature to denature all of the myoglobin pigment than the white meat (which is very low in myoglobin), so that fully cooking the dark meat tends to dry out the breast. Brining makes it possible to fully cook the dark meat without drying the breast meat. Turkeys are sometimes decorated with turkey frills prior to serving. In some areas, particularly the American South, turkeys may also be deep fried in hot oil (often peanut oil) for 30 to 45 minutes by using a turkey fryer. Deep frying turkey has become something of a fad, with hazardous consequences for those unprepared to safely handle the large quantities of hot oil required. Turkey litter for fuel Although most commonly used as fertilizer, turkey litter (droppings mixed with bedding material, usually wood chips) has been used as a fuel source in electric power plants. One such plant in western Minnesota provided 55 megawatts of power using 500,000 tons of litter per year. The plant, known as Fibrominn, operated from 2007 to 2018, closing due to being unable to compete commercially with low-carbon sources of renewable energy.
Biology and health sciences
Galliformes
null
296936
https://en.wikipedia.org/wiki/Golden%20pheasant
Golden pheasant
The golden pheasant (Chrysolophus pictus), also known as the Chinese pheasant or rainbow pheasant, is a gamebird of the order Galliformes (gallinaceous birds) and the family Phasianidae (pheasants). The genus name is from Ancient Greek khrusolophos, "with golden crest", and pictus is Latin for "painted", from pingere, "to paint". Description The adult male's tail accounts for about two-thirds of its total length. Its coloration is characterized by a golden crest and rump and by a bright red body. It possesses an orange ruff or "cape" on the neck that can be spread in display, appearing as an alternating black and orange fan that covers all of the face except for the eyes. The eye is bright yellow, with a pinpoint black pupil. The face, throat, chin, and the sides of the neck are rusty tan. The wattles and orbital skin are both yellow. The upper back is green and the rest of the back and rump is golden-yellow. The tertiary feathers on the wings are blue, whereas the scapulars are dark red. The central tail feathers are black spotted with cinnamon, while the tip of the tail is a cinnamon buff. The upper tail coverts are the same colour as the central tail feathers. The male also has a scarlet breast, and scarlet and light chestnut flanks and underparts. The lower legs and feet are a dull yellow. The adult female (hen) is smaller than the male, and her tail makes up roughly half of her total length. She is much less showy than the male, with a duller mottled brown plumage similar to that of the female common pheasant, but is darker and more slender. The female's breast and sides are barred buff and blackish brown, and the abdomen is plain buff. She has a buff face and throat. Some abnormal females may later in their lifetime develop some male plumage. Both males and females have yellow legs and yellow bills. Distribution and habitat The golden pheasant is native to forests in mountainous areas of western China, but feral populations have been established in the United Kingdom, Canada, the United States, Mexico, Colombia, Peru, Bolivia, Chile, Argentina, Uruguay, the Falkland Islands, Germany, Belgium, the Netherlands, France, Ireland, Australia and New Zealand. In England they may be found in East Anglia, in the dense forest landscape of the Breckland, as well as on Tresco in the Isles of Scilly. Golden pheasants were introduced to Maui, in Hawaii, at some point before their first detection in 1996. The original birds were released in The Nature Conservancy's Waikamoi Preserve, where the founder population has shown evidence of reproductive behavior. Secondary groups were later recorded in Hanawï Natural Area Reserve and Haleakalä National Park, where they most probably arrived through dispersal from Waikamoi. Overall, the pheasants inhabit areas on the windward slope of the island. Ecology Golden pheasants feed on the ground on grain, leaves and invertebrates, but they roost in trees at night. During winter, flocks tend to forage close to human settlements at the edge of the forest, taking primarily wheat leaves and seeds. While they can fly clumsily in short bursts, they prefer to run and spend most of their time on the ground. This type of flying is commonly known as "flapping flight" and is due to the lack of a deep layer of M. pectoralis pars thoracicus and the tendon that attaches to it.
This muscle is commonly credited with stabilizing flight in other birds; because the golden pheasant lacks this deep layer, its "flapping flight" is simply a mechanism it shares with other ground birds for escaping predators. Even so, golden pheasants generally prefer to run away and hide from predators rather than fly. Golden pheasants lay 8 to 12 eggs at a time and will then incubate these for around 22–23 days. They tend to eat berries, grubs, seeds and other types of vegetation. The male has a metallic call in the breeding season. In captivity The golden pheasant is commonly found in zoos and aviaries, but often as hybrid specimens that have the similar Lady Amherst's pheasant in their lineage. There are also different mutations of the golden pheasant known from birds in captivity, including the dark-throated, yellow, cinnamon, salmon, peach, splash, mahogany and silver. In aviculture, the wild type is referred to as "red-golden" to differentiate it from these mutations. The coloration of the feathers can be an indication of the genetic quality of the male golden pheasant. Hue, brightness, and chroma are usually measured to see color differences. Results show that heterozygosity at the most polymorphic major histocompatibility complex locus was strongly related to the chroma and brightness of the feathers.
Biology and health sciences
Galliformes
Animals
296942
https://en.wikipedia.org/wiki/Double%20counting%20%28proof%20technique%29
Double counting (proof technique)
In combinatorics, double counting, also called counting in two ways, is a combinatorial proof technique for showing that two expressions are equal by demonstrating that they are two ways of counting the size of one set. In this technique, which has been called "one of the most important tools in combinatorics", one describes a finite set from two perspectives leading to two distinct expressions for the size of the set. Since both expressions equal the size of the same set, they equal each other. Examples Multiplication (of natural numbers) commutes This is a simple example of double counting, often used when teaching multiplication to young children. In this context, multiplication of natural numbers is introduced as repeated addition, and is then shown to be commutative by counting, in two different ways, a number of items arranged in a rectangular grid. Suppose the grid has n rows and m columns. We first count the items by summing n rows of m items each, then a second time by summing m columns of n items each, thus showing that, for these particular values of n and m, n × m = m × n. Forming committees One example of the double counting method counts the number of ways in which a committee can be formed from n people, allowing any number of the people (even zero of them) to be part of the committee. That is, one counts the number of subsets that an n-element set may have. One method for forming a committee is to ask each person to choose whether or not to join it. Each person has two choices – yes or no – and these choices are independent of those of the other people. Therefore there are 2^n possibilities. Alternatively, one may observe that the size of the committee must be some number k between 0 and n. For each possible size k, the number of ways in which a committee of k people can be formed from n people is the binomial coefficient C(n, k). Therefore the total number of possible committees is the sum of binomial coefficients over k = 0, 1, …, n. Equating the two expressions gives the identity C(n, 0) + C(n, 1) + ⋯ + C(n, n) = 2^n, a special case of the binomial theorem. A similar double counting method can be used to prove more general identities of the same kind. Handshaking lemma Another theorem that is commonly proven with a double counting argument states that every undirected graph contains an even number of vertices of odd degree. That is, the number of vertices that have an odd number of incident edges must be even. In more colloquial terms, in a party of people some of whom shake hands, an even number of people must have shaken an odd number of other people's hands; for this reason, the result is known as the handshaking lemma. To prove this by double counting, let d(v) be the degree of vertex v. The number of vertex-edge incidences in the graph may be counted in two different ways: by summing the degrees of the vertices, or by counting two incidences for every edge. Therefore the sum of the degrees over all vertices equals 2e, where e is the number of edges. The sum of the degrees of the vertices is therefore an even number, which could not happen if an odd number of the vertices had odd degree. This fact, with this proof, appears in the 1736 paper of Leonhard Euler on the Seven Bridges of Königsberg that first began the study of graph theory. Counting trees What is the number of different trees that can be formed from a set of n distinct vertices? Cayley's formula gives the answer n^(n − 2). Aigner and Ziegler list four proofs of this fact; they write of the fourth, a double counting proof due to Jim Pitman, that it is "the most beautiful of them all."
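Before turning to Pitman's proof, note that the committee-counting identity and the handshaking lemma above are easy to check numerically by direct enumeration. The following is a minimal Python sketch, not part of the original article; the example graph is arbitrary and chosen only for illustration.

```python
from math import comb

# Committee identity: summing the binomial coefficients over every
# committee size k gives 2**n, the number of subsets of an n-element set.
n = 10
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n

# Handshaking lemma: the degree sum of any undirected graph equals twice
# the number of edges, so the number of odd-degree vertices is even.
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]  # arbitrary example graph
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1
assert sum(degree.values()) == 2 * len(edges)
assert sum(1 for d in degree.values() if d % 2 == 1) % 2 == 0
```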
Pitman's proof counts in two different ways the number of different sequences of directed edges that can be added to an empty graph on n vertices to form from it a rooted tree. The directed edges point away from the root. One way to form such a sequence is to start with one of the T_n possible unrooted trees (writing T_n for the number of trees on n labeled vertices, the quantity to be determined), choose one of its n vertices as root, and choose one of the (n − 1)! possible sequences in which to add its (directed) edges. Therefore, the total number of sequences that can be formed in this way is T_n · n · (n − 1)!. Another way to count these edge sequences is to consider adding the edges one by one to an empty graph, and to count the number of choices available at each step. If one has added a collection of edges already, so that the graph formed by these edges is a rooted forest with k trees, there are n(k − 1) choices for the next edge to add: its starting vertex can be any one of the n vertices of the graph, and its ending vertex can be any one of the k − 1 roots other than the root of the tree containing the starting vertex. Therefore, if one multiplies together the number of choices from the first step, the second step, etc., the total number of choices is n(n − 1) · n(n − 2) ⋯ n(1) = n^(n − 1) (n − 1)!. Equating these two formulas for the number of edge sequences results in Cayley's formula: T_n · n · (n − 1)! = n^(n − 1) (n − 1)!, and therefore T_n = n^(n − 2). As Aigner and Ziegler describe, the formula and the proof can be generalized to count the number of rooted forests with k trees, for any k.
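The value that the proof establishes can be confirmed for small cases by exhaustive enumeration over edge subsets of the complete graph. Below is a minimal Python sketch, assuming the standard characterization of a labeled tree as a connected, acyclic graph on n vertices with n − 1 edges; the helper names are illustrative and not from the article.

```python
from itertools import combinations

def is_spanning_tree(edge_subset, n):
    """Return True if the given n-1 edges connect all n vertices without
    creating a cycle, checked with a simple union-find structure."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    merges = 0
    for u, v in edge_subset:
        ru, rv = find(u), find(v)
        if ru == rv:
            return False  # this edge would close a cycle
        parent[ru] = rv
        merges += 1
    return merges == n - 1  # n-1 successful merges: all vertices connected

def count_labeled_trees(n):
    """Brute-force count of labeled trees on n vertices: enumerate every
    (n-1)-edge subset of the complete graph and keep the spanning trees."""
    all_edges = list(combinations(range(n), 2))
    return sum(1 for subset in combinations(all_edges, n - 1)
               if is_spanning_tree(subset, n))

# Cayley's formula: the number of labeled trees on n vertices is n**(n-2).
for n in range(2, 7):
    assert count_labeled_trees(n) == n ** (n - 2)
```

Brute force of this kind is only feasible for very small n, which is precisely why a counting argument such as Pitman's is valuable.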
Mathematics
Combinatorics
null
296994
https://en.wikipedia.org/wiki/Bottlenose%20dolphin
Bottlenose dolphin
The bottlenose dolphin is a toothed whale in the genus Tursiops. They are common, cosmopolitan members of the family Delphinidae, the family of oceanic dolphins. Molecular studies show the genus contains three species: the common bottlenose dolphin (Tursiops truncatus), the Indo-Pacific bottlenose dolphin (Tursiops aduncus), and Tamanend's bottlenose dolphin (Tursiops erebennus). Others, like the Burrunan dolphin (Tursiops (aduncus) australis), may alternately be considered a separate species or a subspecies of T. aduncus. Bottlenose dolphins inhabit warm and temperate seas worldwide, being found everywhere except for the Arctic and Antarctic Circle regions. Their name derives from the Latin tursio (dolphin) and truncatus for the truncated teeth (the type specimen was old and had worn down teeth; this is not a typical characteristic of most members of the species). Numerous investigations of bottlenose dolphin intelligence have been conducted, examining mimicry, use of artificial language, object categorization, and self-recognition. They can use tools (sponging; using marine sponges to forage for food sources they normally could not access) and transmit cultural knowledge from generation to generation, and their considerable intelligence has driven interaction with humans. Bottlenose dolphins gained popularity from aquarium shows and television programs such as Flipper. They have also been trained by militaries to locate sea mines or detect and mark enemy divers. In some areas, they cooperate with local fishermen by driving fish into their nets and eating the fish that escape. Some encounters with humans are harmful to the dolphins: people hunt them for food, and dolphins are killed inadvertently as a bycatch of tuna fishing and by getting caught in crab traps. Bottlenose dolphins have the third-highest encephalization level of any mammal on Earth (humans have the highest, followed by northern right whale dolphins), sharing close ratios with those of humans and other cetaceans, while being twice as high as those of other great apes. This more than likely contributes to their high intelligence. Taxonomy Scientists have long been aware that Tursiops dolphins might comprise more than one species, as there is extensive variation in color and morphology across the genus's range. In the past, most studies used morphology to evaluate differences between and within species, but in the late 20th century, combining morphological and molecular genetic data allowed much greater insight into this previously intractable problem. Since the late 1990s and early 2000s, most researchers have acknowledged the existence of two species: the common bottlenose dolphin (T. truncatus), found in coastal and oceanic habitats of most tropical to temperate oceans, and the Indo-Pacific bottlenose dolphin (T. aduncus), which lives in coastal waters around India, northern Australia, South China, the Red Sea, and the eastern coast of Africa. In 2011, a third distinct species was described, the Burrunan dolphin (T. (aduncus) australis), found in the Port Phillip and Gippsland Lakes areas of Victoria, Australia, after research showed it was distinct from T. truncatus and T. aduncus, both in morphology and genetics. Also, evidence has been accumulating to validate the existence of a separate species, Lahille's bottlenose dolphin, T. gephyreus, that occurs in coastal waters of Argentina, Uruguay and southern Brazil. Other sources accept the Pacific bottlenose dolphin (T. t. gillii or T.
gillii), which inhabits the Pacific and has a black line from the eye to the forehead. T. gillii, first described in 1873, is currently considered a junior synonym of T. truncatus. Additionally, T. nuuanu was described in 1911 for bottlenose dolphins along the Pacific coast in Central America. An analysis of T. gillii and T. nuuanu specimens supported T. gillii as a synonym of T. truncatus, while T. nuuanu was recognized as a subspecies. In general, genetic variation between populations is significant, even among nearby populations. As a result of this genetic variation, other distinct species currently considered to be populations of common bottlenose dolphin are possible. Much of the discussion and doubt about its taxonomy relates to the existence of two ecotypes of bottlenose dolphins in many parts of its distribution. For example, the two ecotypes of the common bottlenose dolphin within the western North Atlantic are represented by the shallower water or coastal ecotype and the more offshore ecotype. Their ranges overlap, but they have been shown to be genetically distinct. In 2022, Costa et al. established morphologic, genetic, and evolutionary divergence between the two ecotypes in the western North Atlantic, resurrecting Tursiops erebennus for the coastal form while the offshore form was retained in T. truncatus. The Society for Marine Mammalogy's Committee on Taxonomy presently recognizes three species of bottlenose dolphin: T. truncatus, T. aduncus, and T. erebennus. It also recognizes three subspecies of common bottlenose dolphin in addition to the nominotypical subspecies: the Black Sea bottlenose dolphin (T. t. ponticus), Lahille's bottlenose dolphin (T. t. gephyreus), and the Eastern Tropical Pacific bottlenose dolphin (T. t. nuuanu). The IUCN, on its Red List of endangered species, currently recognizes only two species of bottlenose dolphins. The American Society of Mammalogists also recognizes only two species. While acknowledging the studies describing T. australis, it classifies it within T. aduncus. Some recent genetic evidence suggests the Indo-Pacific bottlenose dolphin belongs in the genus Stenella, since it is more like the Atlantic spotted dolphin (Stenella frontalis) than the common bottlenose dolphin. However, more recent studies indicate that this is a consequence of reticulate evolution (such as past hybridization between Stenella and ancestral Tursiops) and incomplete lineage sorting, and thus support T. truncatus and T. aduncus belonging to the same genus. Hybrids Bottlenose dolphins have been known to hybridize with other dolphin species. Hybrids with Risso's dolphin occur both in the wild and in captivity. The best known hybrid is the wholphin, a false killer whale-bottlenose dolphin hybrid. The wholphin is fertile, and two currently live at the Sea Life Park in Hawaii. The first was born in 1985 to a female bottlenose. Wholphins also exist in the wild. In captivity, a bottlenose dolphin and a rough-toothed dolphin hybridized. A common dolphin-bottlenose dolphin hybrid born in captivity lives at SeaWorld California. Other hybrids live in captivity around the world and in the wild, such as a bottlenose dolphin-Atlantic spotted dolphin hybrid. Fossil species Bottlenose dolphins appeared during the Miocene. Known fossil species include Tursiops osennae (late Miocene to early Pliocene) from the Piacenzian coastal mudstone, and Tursiops miocaenus (Miocene) from the Burdigalian marine sandstone, all in Italy.
Description The bottlenose dolphin weighs an average of , but can range from . It can reach a length of just over . Its color varies considerably; it is usually dark gray on the back and lighter gray on the flanks, but it can be bluish-grey, brownish-grey, or even nearly black, and is often darker on the back from the rostrum to behind the dorsal fin. This is called countershading and is a form of camouflage. Older dolphins sometimes have a few spots. Bottlenose dolphins can live for more than 40 years. Females typically live 5–10 years longer than males, with some females exceeding 60 years. This extreme age is rare and less than 2% of all bottlenose dolphins will live longer than 60 years. Bottlenose dolphins can jump to a height of 6 metres (20 feet) in the air. Anatomy Their elongated upper and lower jaws form what is called a rostrum, or snout, which gives the animal its common name. The real, functional nose is the blowhole on top of its head; the nasal septum is visible when the blowhole is open. Bottlenose dolphins have 18 to 28 conical teeth on each side of each jaw. The flukes (lobes of the tail) and dorsal fin are formed of dense connective tissue and do not contain bone or muscle. The dorsal fin usually shows phenotypic variations that help discriminate among populations. The animal propels itself by moving the flukes up and down. The pectoral flippers (at the sides of the body) are for steering; they contain bones homologous to the forelimbs of land mammals. A bottlenose dolphin discovered in Japan has two additional pectoral fins, or "hind legs", at the tail, about the size of a human's pair of hands. Scientists believe a mutation caused the ancient trait to reassert itself as a form of atavism. Physiology and senses In colder waters, they have more body fat and blood, and are more suited to deeper diving. Typically, 18%–20% of their body weight is blubber. Most research in this area has been restricted to the North Atlantic Ocean. Bottlenose dolphins typically swim at , but are capable of bursts of up to . The higher speeds can only be sustained for a short time. Senses The dolphin's search for food is aided by a form of sonar known as echolocation: it locates objects by producing sounds and listening for the echoes. Clicking sounds are emitted in a focused beam in front of the dolphin. When the clicking sounds hit an object in the water, like a fish or rock, they bounce off and come back to the dolphin as echoes. Echolocation tells the dolphins the shape, size, speed, distance, and location of the object. To hear the returning echo, they have two small ear openings behind the eyes, but most sound waves are transmitted to the inner ear through the lower jaw. As the object of interest is approached, the echo becomes louder, and the dolphins adjust by decreasing the intensity of the emitted sounds. (This contrasts with bats and sonar, which reduce the sensitivity of the sound receptor.) The interclick interval also decreases as the animal nears the target. Evidently, the dolphin waits for each click's echo before clicking again. Echolocation details, such as signal strength, spectral qualities, and discrimination, are well understood by researchers. Bottlenose dolphins are also able to extract shape information, suggesting they are able to form an "echoic image" or sound picture of their targets. They also have electroreception.
The calves are born with two slender rows of whiskers along their snout, which fall off soon after birth, leaving behind a series of dimples known as vibrissal pits able to sense electric fields. Dolphins have sharp eyesight. The eyes are located at the sides of the head and have a tapetum lucidum, or reflecting membrane, at the back of the retina, which aids vision in dim light. Their horseshoe-shaped, double-slit pupils enable dolphins to have good vision both in air and underwater, despite the different indices of refraction of these media. When under water, the eyeball's lens serves to focus light, whereas in the in-air environment, the typically bright light serves to contract the specialized pupil, resulting in sharpness from a smaller aperture (similar to a pinhole camera). By contrast, a bottlenose's sense of smell is poor, because its blowhole, the analog to the nose, is closed when underwater and it opens only for breathing. Like other toothed whales, it has no olfactory nerves or olfactory lobe in the brain. Bottlenose dolphins are able to detect salty, sweet, bitter (quinine sulphate), and sour (citric acid) tastes, but this has not been well-studied. Anecdotally, some individuals in captivity have been noted to have preferences for food fish types, although it is not clear if taste mediates this preference. In 2022, a study at the University of St Andrews in Scotland found that dolphins were able to identify their "friends" and family members by the taste of their urine in the water. Communication Bottlenose dolphins communicate through burst pulsed sounds, whistles, and body language. Examples of body language include leaping out of the water, snapping jaws, slapping the tail on the surface and butting heads. Sounds and gestures help keep track of other dolphins in the group, and alert other dolphins to danger and nearby food. Lacking vocal cords, they produce sounds using six air sacs near their blow hole. Each animal has a uniquely identifying, frequency-modulated narrow-band signature vocalization (signature whistle). Signature whistles, which are in a higher frequency range than humans can hear, have an important role in facilitating mother–calf contact. In the Sarasota Dolphin Research Program's library of recordings were 19 female common bottlenose dolphins (Tursiops truncatus) producing signature whistles both with and without the presence of their dependent calf. In all 19 cases, the mother dolphin changed the same signature whistle when the calf was present, by reaching a higher frequency, or using a wider frequency range. Similarly, humans use higher fundamental frequencies and a wider pitch range to inflect child–directed speech (CDS). This has rarely been discovered in other species. The researchers stated that CDS benefits for humans are cueing the child to pay attention, long-term bonding, and promoting the development of lifelong vocal learning, with parallels in these bottlenose dolphins in an example of convergent evolution. Researchers from the Bottlenose Dolphin Research Institute (BDRI), based in Sardinia (Italy) have now shown whistles and burst pulsed sounds are vital to the animals' social life and mirror their behaviors. The tonal whistle sounds (the most melodious ones) allow dolphins to stay in contact with each other (above all, mothers and offspring), and to coordinate hunting strategies. 
The burst-pulsed sounds (which are more complex and varied than the whistles) are used "to avoid physical aggression in situations of high excitement", such as when they are competing for the same piece of food. The dolphins emit these strident sounds when in the presence of other individuals moving towards the same prey. The "least dominant" one soon moves away to avoid confrontation. Other communication uses about 30 distinguishable sounds, and although the idea was famously proposed by John C. Lilly in the 1950s, no "dolphin language" has been found. However, Herman, Richards, and Wolz demonstrated comprehension of an artificial language by two bottlenose dolphins (named Akeakamai and Phoenix) in the period of skepticism toward animal language following Herbert Terrace's critique. Intelligence Cognition Cognitive abilities that have been investigated include concept formation, sensory skills, and mental representations. Such research has been ongoing since the 1970s. This includes: acoustic and behavioral mimicry, comprehension of novel sequences in an artificial language, memory, monitoring of self behavior, discrimination and matching, comprehension of symbols for various body parts, comprehension of pointing gestures and gaze (as made by dolphins or humans), mirror self-recognition, and numerical values. Tool use and culture At least some wild bottlenose dolphins use tools. In Shark Bay, off Western Australia, dolphins place a marine sponge on their rostrum, presumably to protect it when searching for food on the sandy sea bottom. This has only been observed in this bay (first in 1997), and is predominantly practiced by females. A 2005 study showed mothers most likely teach the behavior to their offspring, evincing culture (behavior learned from other members of the species). Mud plume feeding is a feeding technique performed by a small community of bottlenose dolphins over shallow seagrass beds (less than 1 m) in the Florida Keys in the United States. The behavior involves creation of a U-shaped plume of mud in the water column and then rushing through the plume to capture fish. Along the beaches and tidal marshes of South Carolina and Georgia in the United States, bottlenose dolphins cooperatively herd prey fish onto steep and sandy banks in a practice known as "strand feeding". Groups of between two and six dolphins are regularly observed creating a bow wave to force the fish out of the water. The dolphins follow the fish, stranding themselves briefly, to eat their prey before twisting their bodies back and forth in order to slide back into the water. While initially documented in South Carolina and Georgia, strand feeding has also been observed in Louisiana, Texas, Baja California, Ecuador, and Australia. Some Mauritanian dolphins cooperate with human fishermen. The dolphins drive a school of fish towards the shore, where humans await with nets. In the confusion of casting nets, the dolphins catch a large number of fish as well. Intraspecies cooperative foraging has also been observed. These behaviors may also be transmitted via teaching. Controversially, Rendell and Whitehead have proposed a structure for the study of cetacean culture. Similar cases have been observed in Laguna, Santa Catarina, in Brazil since the 19th century as well. Near Adelaide, in South Australia, several bottlenose dolphins "tail-walk", whereby they elevate the upper part of their bodies vertically out of the water, and propel themselves along the surface with powerful tail movements.
Tail-walking mostly arises via human training in dolphinaria. In the 1980s, a female from the local population was kept at a local dolphinarium for three weeks, and researchers suggest she copied the tail-walking behavior from other dolphins. Two other wild adult female dolphins copied it from her, and the behavior has continued through generations until 2022. A study conducted by the University of Chicago showed that bottlenose dolphins can remember whistles of other dolphins they had lived with after 20 years of separation. Each dolphin has a unique whistle that functions like a name, allowing the marine mammals to keep close social bonds. The research shows that dolphins have the longest memory yet known in any species other than humans. The bottlenose dolphins of John's Pass in Boca Ciega Bay, St. Petersburg, Florida, exhibit a rare form of self-decoration and social object use called grass-wearing. Self-decoration by wearing grass appears to be an attention-getting device rather than purely play and varies from a single blade to large clusters of grass. John's Pass dolphins self-decorate with grass primarily when they form new social groups or engage in procreative activities. Grass-wearing behavior among these dolphins is a local behavioral tradition that could constitute a cultural difference from other communities. Cortical neurons Some researchers hypothesize that the number of nerve cells (neurons) in the cortex of the brain predicts intelligence in mammals. A 2019 study estimated the number of neurons in the cerebral cortex of three common bottlenose dolphins and found numbers ranging from 11.7 to 15.2 billion neurons. Since the human average is approximately 16 billion, this is likely within the range found in the human population. Life history Bottlenose dolphins have a lifespan of 40–60 years. Females can outlive males and live for 60 years or more. Dolphins start to reproduce aged 5 to 15 years. Respiration and sleep The bottlenose dolphin has a single blowhole located on the dorsal surface of the head consisting of a hole and a muscular flap. The flap is closed during muscle relaxation and opens during contraction. Dolphins are voluntary breathers, who must deliberately surface and open their blowholes to get air. They can store almost twice as much oxygen in proportion to their body weight as a human can: the dolphin can store 36 milliliters (ml) of oxygen per kg of body weight, compared with 20 ml per kg for humans. This is an adaptation to diving. The bottlenose dolphin typically rises to the surface to breathe through its blowhole two to three times per minute, although it can remain submerged for up to 20 minutes. Dolphins can breathe while "half-asleep". During the sleeping cycle, one brain hemisphere remains active, while the other hemisphere shuts down. The active hemisphere handles surfacing and breathing behavior. The daily sleeping cycle lasts for about 8 hours, in increments of minutes to hours. During the sleeping cycle, they remain near the surface, swimming slowly or "logging", and occasionally closing one eye. Reproduction Both sexes have genital slits on the underside of their bodies. The male can retract and conceal his penis through his slit. The female's slit houses her vagina and anus. Females have two mammary slits, each housing one nipple, one on each side of the genital slit. The ability to stow their reproductive organs (especially in males) allows for maximum hydrodynamics.
The breeding season produces significant physiological changes in males. At that time, the testes enlarge, enabling them to hold more sperm. Large amounts of sperm allow a male to wash away the previous suitor's sperm, while leaving some of his own for fertilization. Also, sperm concentration markedly increases. Having less sperm for out-of-season social mating means it wastes less. This suggests sperm production is energetically expensive. Males have large testes in relation to their body size. During the breeding season, males compete for access to females. Such competition can take the form of fighting other males or of herding females to prevent access by other males. Male Sarasota Bay common bottlenose dolphins with the defensive advantages of male pair-bonding range more widely than unpaired males, and encounter more unrelated females. In Shark Bay, male Indo-Pacific bottlenose dolphins have been observed working in pairs or larger groups to follow and/or restrict the movement of a female for weeks at a time, waiting for her to become sexually receptive. These coalitions, also known as male reproductive alliances, will fight with other coalitions for control of females. Humans and bottlenose dolphins are the only species that share this type of "gang formation" habit as a form of cooperation. Mating occurs belly to belly. Dolphins have been observed engaging in intercourse when the females are not in their estrous cycles and cannot produce young, suggesting they may mate for pleasure. The gestation period averages 12 months. Births can occur at any time of year, although peaks occur in warmer months. The young are born in shallow water, sometimes assisted by a (possibly male) "midwife", and usually only a single calf is born. Twins are possible, but rare. Newborn bottlenose dolphins are long and weigh , with Indo-Pacific bottlenose dolphin infants being generally smaller than common bottlenose dolphin infants. To accelerate nursing, the mother can eject milk from her mammary glands. The calf suckles for 18 months to up to 8 years, and continues to closely associate with its mother for several years after weaning. Females sexually mature at ages 5–13, males at ages 9–14. Females reproduce every two to six years. Reproduction is moderately seasonal (September–January), peaking from October to December. Calf loss between August and December is followed by rapid conception (1–2 months), whereas conception is delayed (2–9 months) if calf loss occurs between January and July. Weaning ages ranged from 2.7 to 8.0 years, but 66.7% (42 calves) were weaned by their fourth birthday. Females tended to wean mid-pregnancy. Group size was unrelated to water depth or female reproductive success, but reproductive success was predicted by water depth. Shallow water may allow mothers and calves to detect and avoid predatory sharks. Alternatively, or additionally, prey density may be higher in shallow water compared to deep water. Georgetown University professor Janet Mann argues the strong personal behavior among male calves is about bond formation and benefits the species in an evolutionary context. She cites studies showing these dolphins as adults are inseparable, and that early bonds aid protection, as well as in locating females. Female bottlenose dolphins have to expend additional energy in carrying out parental care, e.g., infant-carrying behavior. Dolphins do not physically hold their infants but line up in an echelon position with infants swimming beside them. 
This position creates a change of water flow pattern from the infant which minimizes separation between the mother and infant, but also increases the mother's surface area and creates a drag for the swimmer. This also leaves less energy to use in swimming speed, foraging, and predator evasion. Social interaction Adult males live mostly alone or in groups of two to three, and join pods for short periods of time. Adult females and young dolphins normally live in groups of up to 15 animals. Males give strong mutual support if other males help them, even if they are not friends. However, they live in fission-fusion societies of varying group size, within which individuals change associations, often on a daily or hourly basis. Group compositions are usually determined by sex, age, reproductive condition, familial relations and affiliation histories. In a dolphin community near Sarasota, Florida, the most common group types are adult females with their recent offspring, older subadults of both sexes, and adult males either alone or in bonded pairs. Smaller groups can join to form larger groups of 100 or more, and occasionally exceed 1,000. The social strategies of marine mammals such as bottlenose dolphins "provide interesting parallels" with the social strategies of elephants and chimpanzees. Bottlenose dolphins studied by Bottlenose Dolphin Research Institute researchers off the island of Sardinia show random social behavior while feeding, and their social behavior does not depend on feeding activity. In Sardinia, the presence of a floating marine fin-fish farm has been linked to a change in bottlenose dolphin distribution as a result of high fish density around the floating cages in the farming area. Ecology Feeding Fish is one of the main items in the dolphin diet. They also eat shrimps, squid, mollusks, and cuttlefish, and only swallow the soft parts. They eat 22 pounds of fish a day. When they encounter a shoal of fish, they work as a team to herd them towards the shore to maximize the harvest. They also hunt alone, often targeting bottom-dwelling species. The bottlenose dolphin sometimes hits a fish with its fluke, sometimes knocking it out of the water, using a strategy called "fish whacking". "Strand feeding" is an inherited feeding technique used by bottlenose dolphins near and around coastal regions of Georgia and South Carolina. When a pod finds a school of fish, they will circle the school and trap the fish in a mini whirlpool. Then, the dolphins will charge at the school and push their bodies up onto a mud-flat, forcing the fish on the mud-flat, as well. The dolphins then crawl around on their sides, consuming the fish they washed up on shore. This happens only during low tides. One type of feeding behavior seen in bottlenose dolphins is mud ring feeding. Bottlenose dolphins conflict with small-scale coastal commercial fisheries in some Mediterranean areas. Common bottlenose dolphins are probably attracted to fishing nets because they offer a concentrated food source. Relations with other species Dolphins can exhibit altruistic behaviour toward other sea creatures. On Mahia Beach, New Zealand, on March 10, 2008, two pygmy sperm whales, a female and calf, stranded on the beach. Rescuers attempted to refloat them four times. Shortly, a playful bottlenose dolphin known to local residents as Moko arrived and, after apparently vocalizing at the whales, led them along a sandbar to the open sea, saving them from imminent euthanasia. 
In 2019 a female was observed caring for a juvenile melon-headed whale, the first reported instance of a bottlenose dolphin adopting a non-conspecific infant. The bottlenose dolphin can behave aggressively. Males fight for rank and access to females. During mating season, males compete vigorously with each other through displays of toughness and size, with a series of acts, such as head-butting. They display aggression towards sharks and smaller dolphin species. At least one population, off Scotland, has practiced infanticide, and also has attacked and killed harbour porpoises. University of Aberdeen researchers say the dolphins do not eat their victims, but are simply competing for food. However, Dr. Read of Duke University, a porpoise expert researching similar cases of porpoise killings that had occurred in Virginia in 1996 and 1997, holds a different view. He states dolphins and porpoises feed on different types of fish, thus food competition is an unlikely cause of the killings. Similar behaviour has been observed in Ireland. In the first half of July 2014, four attacks with three porpoise fatalities were observed and caught on video by the Cardigan Bay Marine Wildlife Centre in the Cardigan Bay, Wales. The bottlenose dolphin sometimes forms mixed species groups with other species from the dolphin family, particularly larger species, such as the short-finned pilot whale, the false killer whale and Risso's dolphin. They also interact with smaller species, such as the Atlantic spotted dolphin and the rough-toothed dolphin. While interactions with smaller species are sometimes affiliative, they can also be hostile. Predators Some large shark species, such as the tiger shark, the dusky shark, the great white shark and the bull shark, prey on the bottlenose dolphin, especially calves. The bottlenose dolphin is capable of defending itself by charging the predator; dolphin 'mobbing' behavior of sharks can occasionally prove fatal for the shark. Targeting a single adult dolphin can be dangerous for a shark of similar size. Killer whale populations in New Zealand and Peru have been observed preying on bottlenose dolphins, but this seems rare, and other orcas may swim with dolphins. Swimming in pods allows dolphins to better defend themselves against predators. Bottlenose dolphins either use complex evasive strategies to outswim their predators, or mobbing techniques to batter the predator to death or force it to flee. Relation to humans Interaction The species sometimes shows curiosity towards humans in or near water. Occasionally, they rescue injured divers by raising them to the surface. They also do this to help injured members of their own species. In November 2004, a dramatic report of dolphin intervention came from New Zealand. Four lifeguards, swimming off the coast near Whangārei, were approached by a shark (reportedly a great white shark). Bottlenose dolphins herded the swimmers together and surrounded them for 40 minutes, preventing the shark from attacking, as they slowly swam to shore. In coastal regions, dolphins run the risk of colliding with boats. Researchers of the Bottlenose Dolphin Research Institute first quantified data about solitary bottlenose dolphin diving behavior in the presence and absence of boats. Dolphins responded more to tourist than fishing vessels. Driving behavior, speed, engine type and separation distance all affected dolphin safety. However, dolphins in these areas can also coexist with humans. 
For example, in the town of Laguna in south Brazil, a pod of bottlenose dolphins resides in the estuary, and some of its members cooperate with humans. These cooperating dolphins are individually recognized by the local fishermen, who name them. The fishermen typically stand up to their knees in the shallow waters or sit in canoes, waiting for the dolphins. Now and then, one or more dolphins appear, driving the fish towards the line of fishermen. One dolphin then displays a unique body movement outside the water, which serves as a signal to the fishermen to cast their nets. In this unique form of cooperation, the dolphins gain because the fish are disoriented and because the fish cannot escape to shallow water where the larger dolphins cannot swim. Likewise, studies show that fishermen casting their nets following the unique signal catch more fish than when fishing alone, without the help of the dolphins. The dolphins were not trained for this behavior; the collaboration began before 1847. Similar cooperative fisheries also exist in Mauritania, Africa. Commercial 'dolphin encounter' enterprises and tours operate in many countries. The documentary film The Cove documents how dolphins are captured and sold to some of these enterprises (particularly in Asia) while the remaining pod is slaughtered. In addition to such endeavors, the individuals swim with and surface near surfers at the beach. Bottlenose dolphins perform in many aquaria, generating controversy. Animal welfare activists and certain scientists have claimed that the dolphins do not have adequate space or receive adequate care or stimulation. However, others, notably SeaWorld, counter by claiming that the dolphins are properly cared for, have much environmental stimulation and enjoy interacting with humans. Eight bottlenose dolphins that lived at the Marine Life Aquarium in Gulfport, Mississippi were swept away from their aquarium pool during Hurricane Katrina. They were later found in the Gulf of Mexico and returned to captivity. The militaries of the United States and Russia train bottlenose dolphins as military dolphins for wartime tasks, such as locating sea mines and detecting enemy divers. The U.S.'s program is the U.S. Navy Marine Mammal Program, located in San Diego. Tião was a well-known solitary male bottlenose dolphin that was first spotted in the town of São Sebastião in Brazil around 1994 and frequently allowed humans to interact with him. The dolphin became infamous for killing a swimmer and injuring many others, which later earned him the nickname "Killer Dolphin". Cultural influence The popular television show Flipper, created by Ivan Tors, portrayed a bottlenose dolphin in a friendly relationship with two boys, Sandy and Bud. A seagoing "Lassie", Flipper understood English and was a hero: "Go tell Dad we're in trouble, Flipper! Hurry!" The show's theme song contains the lyric "no one you see / is smarter than he". The television show was based on a 1963 film, with a sequel, Flipper's New Adventure (1964), and was remade as a feature film in 1996, starring Elijah Wood and Paul Hogan, as well as a second TV series running from 1995 to 2000, starring Jessica Alba. Other television appearances by bottlenose dolphins include Wonder Woman, Highway to Heaven, Dolphin Cove, seaQuest DSV, and The Penguins of Madagascar, in which a dolphin, Doctor Blowhole, is a villain.
In the HBO movie Zeus and Roxanne, a female bottlenose dolphin befriends a male dog, and in Secrets of the Bermuda Triangle (1996 Ian Toynton movie), a girl named Annie (played by Lisa Jakub) swims with dolphins. Human and dolphin interaction segments, shot on location in the Florida Keys with Dolphin Research Center, are featured on Sesame Street and on a Halloween episode of The Simpsons, Treehouse of Horror XI. Dolphin Tale, directed by Charles Martin Smith, starring Nathan Gamble, Ashley Judd, Harry Connick Jr., Morgan Freeman, Cozi Zuehlsdorff and Kris Kristofferson, is based on the real-life story of the dolphin Winter, who was rescued from a crab trap in December 2005 and lost her tail, but learned to swim with a prosthetic one. Dolphin Tale 2, a sequel to the 2011 film, featured another rescued dolphin named Hope and an appearance by Bethany Hamilton. The sequel was released on September 12, 2014. The NFL's Miami Dolphins uses the bottlenose dolphin as its mascot and team logo. Factual descriptions of the dolphins date back to antiquity — the writings of Aristotle, Oppian and Pliny the Elder all mention the species. Threats Between 1950 and 2020, about four million dolphins drowned in fishing nets. Tuna fishing crews have been responsible for the largest number of these deaths. In 1972, the U.S. government passed a law limiting the number of dolphins that could be killed yearly by tuna fishing crews. Dolphins in the United Kingdom have also been found to contain high levels of pollutants in their tissues. Heavy metals such as mercury, along with organic pollutants such as PCBs and DDT, are of great concern. These pollutants can harm dolphins' growth, development, reproduction, and immunity. Since the mid-1990s, hundreds of dolphins have been trained to perform in shows presented by aquariums, zoos, and amusement parks. Scientists conduct various types of research to understand the dolphin's communication system. The man-made chemical perfluorooctanesulfonic acid (PFOS) may be compromising the immune system of bottlenose dolphins. PFOS affects the immune system of male mice at a concentration of 91.5 ppb, while PFOS has been reported in bottlenose dolphins in excess of 1 ppm. High levels of metal contaminants have been measured in tissues in many areas of the globe. A recent study found high levels of cadmium and mercury in bottlenose dolphins from South Australia, levels which were later found to be associated with kidney malformations, indicating possible health effects of high heavy metal concentrations in dolphins. Conservation Bottlenose dolphins are not endangered. Their future is stable because of their abundance and adaptability. However, specific populations are threatened due to various environmental changes. The population in the Moray Firth in Scotland is estimated to consist of around 190 individuals, and is under threat from harassment, traumatic injury, water pollution and reduction in food availability. Likewise, an isolated population in Doubtful Sound, New Zealand, is in decline due to calf loss coincident with an increase in warm freshwater discharge into the fiord. Less localized factors, such as climate change and increasing water temperatures, may also play a role, but this has never been demonstrated. One of the largest coastal populations of bottlenose dolphins, in Shark Bay, Western Australia, was forecast to be stable with little variation in mortality over time (Manlik et al. 2016).
In United States waters, hunting and harassing marine mammals has been forbidden in almost all circumstances since the passage of the Marine Mammal Protection Act of 1972.
Biology and health sciences
Toothed whale
Animals
297066
https://en.wikipedia.org/wiki/Transistor%20radio
Transistor radio
A transistor radio is a small portable radio receiver that uses transistor-based circuitry. Previous portable radios used vacuum tubes, which were bulky, fragile, had a limited lifetime, consumed excessive power and required large heavy batteries. Following the invention of the transistor in 1947—which revolutionized the field of consumer electronics by introducing small but powerful, convenient hand-held devices—the Regency TR-1 was released in 1954, becoming the first commercial transistor radio. The mass-market success of the smaller and cheaper Sony TR-63, released in 1957, led to the transistor radio becoming the most popular electronic communication device of the 1960s and 1970s. Transistor radios are still commonly used as car radios. Billions of transistor radios are estimated to have been sold worldwide between the 1950s and 2012. The pocket size of transistor radios sparked a change in popular music listening habits, allowing people to listen to music anywhere they went. Beginning around 1980, however, cheap AM transistor radios were superseded initially by the boombox and the Sony Walkman, and later on by digitally-based devices with higher audio quality such as portable CD players, personal audio players, MP3 players and (eventually) by smartphones, many of which contain FM radios. A transistor is a semiconductor device that can amplify signals and act as an electronic switch. Background Before the transistor was invented, radios used vacuum tubes. Although portable vacuum tube radios were produced, they were typically bulky and heavy. The need for a low-voltage, high-current source to power the filaments of the tubes and a high voltage for the anode potential typically required two batteries. Vacuum tubes were also inefficient and fragile compared to transistors and had a limited lifetime. Bell Laboratories demonstrated the first transistor on December 23, 1947. The scientific team at Bell Laboratories responsible for the solid-state amplifier included William Shockley, Walter Houser Brattain, and John Bardeen. After obtaining patent protection, the company held a news conference on June 30, 1948, at which a prototype transistor radio was demonstrated. There are many claimants to the title of the first company to produce practical transistor radios, often incorrectly attributed to Sony (originally Tokyo Telecommunications Engineering Corporation). Texas Instruments had demonstrated all-transistor AM (amplitude modulation) radios as early as May 25, 1954, but their performance was well below that of equivalent vacuum tube models. A workable all-transistor radio was demonstrated in August 1953 at the Düsseldorf Radio Fair by the German firm Intermetall. It was built with four of Intermetall's hand-made transistors, based upon the 1948 invention of the "Transistron" germanium point-contact transistor by Herbert Mataré and Heinrich Welker. However, as with the early Texas Instruments units (and others), only prototypes were ever built; it was never put into commercial production. RCA had demonstrated a prototype transistor radio as early as 1952, and it is likely that they and the other radio makers were planning transistor radios of their own, but Texas Instruments and the Regency Division of I.D.E.A. were the first to offer a production model starting in October 1954. The use of transistors instead of vacuum tubes as the amplifier elements meant that the device was much smaller, required far less power to operate than a tube radio, and was more resistant to physical shock.
Since the transistor's base element draws current, its input impedance is low in contrast to the high input impedance of the vacuum tubes. It also allowed "instant-on" operation, since there were no filaments to heat up. The typical portable tube radio of the fifties was about the size and weight of a lunchbox and contained several heavy, non-rechargeable batteries—one or more so-called "A" batteries to heat the tube filaments and a large 45- to 90-volt "B" battery to power the signal circuits. By comparison, the transistor radio could fit in a pocket and weighed half a pound or less, and was powered by standard flashlight batteries or a single compact battery. The 9-volt battery was introduced for powering transistor radios. Early commercial transistor radios Regency TR-1 Two companies working together, Texas Instruments of Dallas, and Industrial Development Engineering Associates (I.D.E.A.) of Indianapolis, Indiana, were behind the unveiling of the Regency TR-1, the world's first commercially produced transistor radio. Previously, Texas Instruments was producing instrumentation for the oil industry and locating devices for the U.S. Navy and I.D.E.A. built home television antenna boosters. The two companies worked together on the TR-1, looking to grow revenues for their respective companies by breaking into this new product area. In May 1954, Texas Instruments had designed and built a prototype and was looking for an established radio manufacturer to develop and market a radio using their transistors. The Chief Project Engineer for the radio design at Texas Instruments' headquarters in Dallas, Texas was Paul D. Davis Jr., who had a degree in Electrical Engineering from Southern Methodist University. He was assigned the project due to his experience with radio engineering in World War II. None of the major radio makers including RCA, GE, Philco, and Emerson were interested. The President of I.D.E.A. at the time, Ed Tudor, jumped at the opportunity to manufacture the TR-1, predicting sales of the transistor radios at "20 million radios in three years". The Regency TR-1 was announced on October 18, 1954, by the Regency Division of I.D.E.A., was put on sale in November 1954 and was the first practical transistor radio made in any significant numbers. Billboard reported in 1954 that "the radio has only four transistors. One acts as a combination mixer-oscillator, one as an audio amplifier, and two as intermediate-frequency amplifiers." One year after the release of the TR-1 sales approached the 100,000 mark. The look and size of the TR-1 were well received, but with only four transistors the sound quality was poor, and the reviews of the TR-1's performance were typically adverse. The Regency TR-1 was patented by Richard C. Koch, former Project Engineer of I.D.E.A. Raytheon 8-TP-1 In February 1955, the second transistor radio, the 8-TP-1, was introduced by Raytheon. It was larger than the TR-1, including a four-inch speaker and eight transistors, four more than the TR-1, so the sound quality was much better. An additional benefit of the 8-TP-1 was its efficient battery consumption; the 8-TP-1 cost 1/6 cent per hour to operate, while the TR-1 cost 40 times as much. While the Raytheon radio cost $30 more than the RCA 6-BX-63 tube radio, the latter used $38 of batteries over the same time that the 8-TP-1 used 60 cents. In July 1955 the first positive review of a transistor radio appeared in the Consumer Reports. 
Noting the 8-TP-1's high sound quality and very low battery cost, the magazine stated that "The transistors in this set have not been used in an effort to build the smallest radio on the market, and good performance has not been sacrificed". Following the success of the 8-TP-1, Zenith, RCA, DeWald, Westinghouse, and Crosley produced many additional transistor radio models. The TR-1 remained the only shirt pocket-sized radio; rivals made "coat-pocket radios" that Consumer Reports also reviewed as not performing well. Chrysler Mopar 914HR In the April 28, 1955, edition of the Wall Street Journal, Chrysler and Philco announced that they had developed and produced the world's first all-transistor car radio. Chrysler made the all-transistor car radio, Mopar model 914HR, available as an "option" in fall 1955 for its new line of 1956 Chrysler and Imperial cars, which hit the showroom floor on October 21, 1955. The all-transistor car radio was a $150 option. Japanese transistor radios While on a trip to the United States in 1952, Masaru Ibuka, founder of Tokyo Telecommunications Engineering Corporation (now Sony), discovered that AT&T was about to make licensing available for the transistor. Ibuka and his partner, physicist Akio Morita, convinced the Japanese Ministry of International Trade and Industry (MITI) to finance the $25,000 licensing fee (equivalent to $ today). For several months Ibuka traveled around the United States borrowing ideas from the American transistor manufacturers. Improving upon the ideas, Tokyo Telecommunications Engineering Corporation made its first functional transistor radio in 1954. Within five years, Tokyo Telecommunications Engineering Corporation grew from seven employees to approximately five hundred. Other Japanese companies soon followed them into the American market, and the total of electronic products exported from Japan in 1958 was 2.5 times that of 1957. Sony TR-55 In August 1955, while still a small company, Tokyo Telecommunications Engineering Corporation introduced its TR-55 five-transistor radio under the new brand name Sony. With this radio, Sony became the first company to manufacture the transistors and other components they used to construct the radio. The TR-55 was also the first transistor radio to utilize all miniature components. It is estimated that only 5,000 to 10,000 units were produced. Sony TR-63 The TR-63 was introduced by Sony to the United States in December 1957. The TR-63 was narrower and shorter than the original Regency TR-1. Like the TR-1, it was offered in four colors: lemon, green, red, and black. In addition to its smaller size, the TR-63 had a small tuning capacitor and required a new battery design to produce the proper voltage. It used the nine-volt battery, which would become the standard for transistor radios. Approximately 100,000 units of the TR-63 were imported in 1957. This "pocketable" model proved highly successful, although the term was a matter of some interpretation, as Sony allegedly had special shirts made with oversized pockets for its salesmen; that claim should be treated with caution, since a restored Sony TR-63 readily fits a common shirt pocket. The TR-63 was the first transistor radio to sell in the millions, leading to the mass-market penetration of transistor radios. The TR-63 went on to sell seven million units worldwide by the mid-1960s. With the visible success of the TR-63, Japanese competitors such as Toshiba and Sharp Corporation joined the market.
By 1959, there were more than six million transistor radio sets produced by Japanese companies in the United States market, representing $62 million in revenue. The success of transistor radios led to transistors replacing vacuum tubes as the dominant electronic technology in the late 1950s. The transistor radio went on to become the most popular electronic communication device of the 1960s and 1970s. Billions of transistor radios are estimated to have been sold worldwide between the 1950s and 2012. Pricing Prior to the Regency TR-1, transistors were difficult to produce. Only one in five transistors that were produced worked as expected (only a 20% yield) and as a result the price remained extremely high. When it was released in 1954, the Regency TR-1 cost $49.95 (equivalent to $ today) and sold about 150,000 units. Raytheon and Zenith Electronics transistor radios soon followed and were priced even higher. In 1955, Raytheon's 8-TR-1 was priced at $80 (equivalent to $ today). By November 1956, a transistor radio small enough to wear on the wrist, with a claimed battery life of 100 hours, cost $29.95. Sony's TR-63, released in December 1957, cost $39.95 (equivalent to $ today). Following the success of the TR-63, Sony continued to make its transistor radios smaller. Because of the extremely low labor costs in Japan, Japanese transistor radios began selling for as low as $25. By 1962, the TR-63 cost as low as $15 (equivalent to $ today), which led to American manufacturers dropping prices of transistor radios down to $15 as well. In popular culture Rock 'n' roll music became popular at the same time as transistor radios. Parents found that purchasing a small transistor radio was a way for children to listen to their music without using the family tube radio. Sony and other Japanese companies were much faster than Americans to focus on stylish, pocket-sized radios for the youth market, helping them to dominate the radio market. American companies began using lower-cost Japanese components but their radios were less attractive or sophisticated. By 1964 no transistor radio with only US components was available; by the mid-1960s the Japanese radio components had also been supplanted by even less-expensive manufacturing in Korea, Taiwan, and Hong Kong. The Zenith Trans-Oceanic 7000 was, until 1970, the last transistor radio manufactured in the US. Transistor radios were extremely successful because of three social forces—a large number of young people due to the post–World War II baby boom, a public with disposable income amidst a period of prosperity, and the growing popularity of rock 'n' roll music. The influence of the transistor radio during this period is shown by its appearance in popular films, songs, and books of the time, such as the movie Lolita. Inexpensive transistor radios running on batteries enabled many in impoverished rural areas to become regular radio listeners for the first time. Music broadcast from New Orleans and received in Jamaica through transistor radios inspired the development of ska, and less directly, reggae music. In the late 1950s, transistor radios took on more elaborate designs as a result of heated competition. Eventually, transistor radios doubled as novelty items. The components of transistor radios, which became smaller over time, were used to make anything from "Jimmy Carter Peanut-shaped" radios to "Gun-shaped" radios to "Mork from Ork Eggship-shaped" radios. Corporations used transistor radios to advertise their business.
"Charlie the Tuna-shaped" radios could be purchased from Star-Kist for an insignificant amount of money giving their company visibility amongst the public. These novelty radios are now bought and sold as collectors' items amongst modern-day collectors. Rise of portable audio players Since the 1980s, the popularity of radio-only portable devices declined with the rise of portable audio players which allowed users to carry and listen to tape-recorded music. This began in the late 1970s with boom boxes and portable cassette players such as the Sony Walkman, followed by portable CD players, digital audio players, and smartphones.
Technology
Broadcasting
null
297071
https://en.wikipedia.org/wiki/Wild%20turkey
Wild turkey
The wild turkey (Meleagris gallopavo) is an upland game bird native to North America, one of two extant species of turkey and the heaviest member of the order Galliformes. It is the ancestor to the domestic turkey (M. g. domesticus), which was originally derived from a southern Mexican subspecies of wild turkey (not the related ocellated turkey). Description An adult male (tom or gobbler) normally weighs from and measures in length. The adult female (hen) is typically much smaller at and is long. Per two large studies, the average weight of adult males is and the average weight of adult females is . The record-sized adult male wild turkey, according to the National Wild Turkey Federation, weighed , with records of tom turkeys weighing over uncommon but not rare. Considering its maximum and average weight, it is among the heaviest flying birds in the world. The wings are relatively small, as is typical of the galliform order, and the wingspan ranges from . The wing chord is only . The bill is also relatively small, as adults measure in culmen length. The tarsus of the wild turkey is quite long and sturdy, measuring from . The tail is also relatively long, ranging from . Fully grown wild turkeys have long, reddish-yellow to grayish-green legs. Each foot has three front toes, with a shorter, rear-facing toe; males have a spur behind each of their lower legs, used to spar with other males. The body feathers are generally blackish and dark, sometimes gray-brown, overall, with a coppery sheen that becomes more complex in older males. Mature males have a large, featherless, reddish head and red throat, with red wattles on the throat and neck. The head has distinctive fleshy growths called caruncles, which may be used to tell individual birds apart. When toms are excited, a fleshy flap on the bill (called a snood) expands, and this, the wattles, and the bare skin of the head and neck all become red with enhanced flow of blood to the head. Tail feathers are of the same length in adults but of different lengths in juveniles. Males have a long, dark, fan-shaped tail and glossy, bronze wings. As with many other species of Galliformes, turkeys exhibit strong sexual dimorphism. The male is substantially larger than the female, and his feathers have areas of red, purple, green, copper, bronze, and gold iridescence. The preen gland (uropygial gland) is also larger in males than in females. In contrast to those of the majority of other birds, turkey preen glands are colonized by bacteria of unknown function (Corynebacterium uropygiale). Males typically have at least one "beard", a tuft of coarse hair-like filaments (mesofiloplumes), growing from the center of the breast. Beards grow continuously during the turkey's lifespan, and a one-year-old male has a beard up to long. Approximately 10% of females have a beard, usually shorter and thinner than that of the male. Females have feathers that are duller overall, in shades of brown and gray. Parasites can dull the coloration of both sexes; in males, vivid coloration may serve as a signal of health. The primary wing feathers have white bars. Turkeys have approximately 5,000 to 6,000 feathers. Juvenile males are called jakes; the difference between jakes and toms is that jakes have very short "beards" and tail fans with longer feathers in the middle, whereas the tom's tail fan feathers are uniform in length. The turkey has the second-highest maximum weight of any North American bird, after the trumpeter swan (Cygnus buccinator).
By average mass, however, several other American birds surpass the mean weight of the turkey, including the American white pelican (Pelecanus erythrorhynchos), the tundra swan (Cygnus columbianus columbianus), the endangered California condor (Gymnogyps californianus), and the whooping crane (Grus americana). Habitat Wild turkeys prefer hardwood and mixed conifer-hardwood forests with scattered openings such as pastures, fields, orchards and seasonal marshes. They seemingly can adapt to virtually any dense native plant community as long as coverage and openings are widely available. Open, mature forest with an interspersion of varied tree species appears to be preferred. In the Northeast of North America, turkeys are most abundant in hardwood timber of oak-hickory (Quercus-Carya) and forests of red oak (Quercus rubra), beech (Fagus grandifolia), cherry (Prunus serotina) and white ash (Fraxinus americana). The best ranges for turkeys in the Coastal Plain and Piedmont sections have an interspersion of clearings, farms, and plantations, with preferred habitat along principal rivers and in cypress (Taxodium distichum) and tupelo (Nyssa sylvatica) swamps. In the Appalachian Plateau and Cumberland Plateau, birds occupy mixed forest of oaks and pines on southern and western slopes, as well as hickory with diverse understories. In south Florida they also use bald cypress and sweet gum (Liquidambar styraciflua) swamps, and in north-central Florida hardwood stands of Cliftonia (a heath) and oak. The Lykes Fisheating Creek area of south Florida has up to 51% cypress, 12% hardwood hammocks, and 17% glades of short grasses with isolated live oak (Quercus virginiana), with nesting in neighboring prairies. The original habitat here was mainly longleaf pine (Pinus palustris) with turkey oak (Quercus laevis) and slash pine (Pinus elliottii) "flatwoods", now mainly replaced by slash pine plantations. In California, turkeys live in a wide range of habitats; acorns are a favorite food, in addition to wild oats (Avena barbata), drawing turkeys to areas of open oak forest and oak savanna across the central areas of the state. They frequent the lower-elevation oak woodlands of the Sierra Nevada foothills and Coast Ranges, and the central coast north through Mendocino County, which is primarily open conifer forest with various species of ferns growing in the understory. They can also be found in the conifer foothills and fern-heavy forested areas of the Klamath Mountains and Cascade Range in the northern areas of the state. In San Diego County, turkeys tend to be found farther from the coast, usually a minimum of 30–50 miles inland, at somewhat higher elevation; there is a healthy turkey population inhabiting the montane conifer woods and open oak forest habitats of the Cleveland National Forest, a region which borders on high desert and generally receives very little annual precipitation. Turkeys in these areas can be found in dense thickets of manzanita (Arctostaphylos), often growing on arid hillsides, for shelter and nesting sites, as well as in rocky and boulder-strewn chaparral foothills. Behavior Flight Despite their weight, wild turkeys, unlike their domesticated counterparts, are agile, fast fliers. In ideal habitat of open woodland or wooded grasslands, they may fly beneath the canopy top and find perches. They usually fly close to the ground for no more than 400 m (a quarter mile). Wild turkeys have very good eyesight, but their vision is very poor at night, and they will generally not see a predator until it is too late.
At twilight most turkeys will head for the trees and roost well off the ground: it is safer to sleep there in numbers than to risk being victim to predators who hunt by night. Because wild turkeys do not migrate, in snowier parts of the species's habitat like the Northeast, Rockies, much of Canada, and the Midwest, it is very important for this bird to learn to select large conifer trees where they can fly onto the branches and shelter from blizzards. Vocalizations Wild turkeys have many calls: assembly call, gobble, plain yelp, purr, cluck and purr, cluck, cutt, excited yelp, fly-down cackle, tree call, kee kee run, and putt. In early spring, adult males and, occasionally and to a lesser extent, yearling males gobble to announce their presence to females and competing males. The gobble of a wild turkey can be heard up to a mile away. Males also emit a low-pitched "drumming" sound, produced by the movement of air in the air sac in the chest, similar to the booming of a prairie chicken. In addition they produce a sound known as the "spit", which is a sharp expulsion of air from this air sac. Foraging Wild turkeys are omnivorous, foraging on the ground or climbing shrubs and small trees to feed. They prefer eating acorns, nuts, and other hard mast of various trees, including hazel, chestnut, hickory, and pinyon pine, as well as various seeds, berries such as juniper and bearberry, buds, leaves, fern fronds, roots, and insects. Turkeys also occasionally consume amphibians such as salamanders and small reptiles such as lizards and small snakes. Poults have been observed eating insects, berries, and seeds. Wild turkeys often feed in cow pastures, sometimes visit backyard bird feeders, and favor croplands after harvest to scavenge seeds on the ground. Turkeys are also known to eat a wide variety of grasses. Turkey populations can reach large numbers in small areas because of their ability to forage for different types of food. Early morning and late afternoon are the preferred feeding times. Social structure and mating Males are polygamous, mating with as many hens as they can. Male wild turkeys display for females by puffing out their feathers, spreading out their tails, and dragging their wings; this behavior is most commonly referred to as strutting. Their heads and necks are colored with red, white, and blue, and the color can change with the turkey's mood, a solid white head and neck indicating the most excited state. They use gobbling, drumming/booming, and spitting as signs of social dominance, and to attract females. Courtship begins during the months of March and April, when turkeys are still flocked together in winter areas. Males may be seen courting in groups, often with the dominant male gobbling, spreading his tail feathers (strutting), drumming/booming, and spitting. In one study, the dominant male courting as part of a pair of males fathered, on average, six more eggs than males that courted alone. Genetic analysis of pairs of males courting together shows that they are close relatives, with half of their genetic material being identical. The theory behind team courtship is that the less dominant male has a greater chance of passing along shared genetic material than if he were courting alone. When mating is finished, females search for nest sites. Nests are shallow dirt depressions surrounded by woody vegetation. Hens lay a clutch of 10–14 eggs, usually one per day. The eggs are incubated for at least 28 days.
The poults are precocial and nidifugous, leaving the nest in about 12–24 hours. Turkeys are ground-nesting birds and because of this are heavily preyed upon; reproductively active wild turkeys have a lower annual survival rate due to predation of nests. Positive relationships with other wild species Turkeys will occasionally forage with deer and squirrels, and may even play with them. By foraging together, each can help the other watch for predators with their different senses: the deer with their superior olfactory sense, the turkey with its superior sight, and the squirrels providing an additional set of eyes from up in the trees. Predators Predators of eggs and nestlings include raccoons (Procyon lotor), Virginia opossums (Didelphis virginiana), striped skunks (Mephitis mephitis), spotted skunks (Spilogale spp.), red foxes (Vulpes vulpes), gray foxes (Urocyon cinereoargenteus), and groundhogs (Marmota monax), among other rodents. Predators of poults, in addition to those of nestlings and eggs, also include several species of snake, namely rat snakes (Elaphe spp.), gopher snakes (Pituophis catenifer), and pinesnakes (Pituophis spp.). Avian predators of poults include raptors such as bald eagles (Haliaeetus leucocephalus), barred owls (Strix varia), red-shouldered hawks (Buteo lineatus), red-tailed hawks (Buteo jamaicensis), white-tailed hawks (Geranoaetus albicaudatus), Harris's hawks (Parabuteo unicinctus), Cooper's hawks (Accipiter cooperii), and broad-winged hawks (Buteo platypterus) (the last two likely only of very small poults). Mortality of poults is greatest in the first 14 days of life, especially among those roosting on the ground, decreasing most notably after half a year, when they attain near adult sizes. In addition to poults, hens and adult-sized fledglings (but not, as far as is known, adult male toms) are vulnerable to predation by great horned owls (Bubo virginianus), American goshawks (Accipiter atricapillus), domestic dogs (Canis familiaris), domestic cats (Felis catus), and red foxes (Vulpes vulpes). Predators of both adults and poults include coyotes (Canis latrans), gray wolves (Canis lupus), bobcats (Lynx rufus), cougars (Puma concolor), Canada lynx (Lynx canadensis), golden eagles (Aquila chrysaetos), and possibly American black bears (Ursus americanus), which will also eat the eggs if they find them. The American alligator (Alligator mississippiensis) preys on turkeys of all ages in the Southeast and will eat them if they get too close to water. Humans are now the leading predator of adult turkeys. When approached by potential predators, turkeys and their poults usually run rather than fly away, though they may also fly short distances if pressed. Another alternative behaviour, common in Galliformes, is that when surprised with no time to flee, the poults hide under the wings and body of the hen while she sits tight and still. Presumably, the hen has vocal and behavioural signals that trigger the poults to instinctively run to her for cover. Occasionally, if cornered, adult turkeys may try to fight off predators, and large male toms can be especially aggressive in self-defense. When fighting off predators, turkeys may kick with their legs, using the spurs on the back of their legs as weapons, bite with their beak, and ram with their relatively large bodies, and they may be able to deter predators up to the size of mid-sized mammals. Hens have been observed chasing off at least two species of hawks in flight when their poults are threatened.
Wild turkeys are not usually aggressive towards humans, but can be frightened or provoked to behave with aggression. They are most likely to attack if startled, cornered, harassed, or approached too closely. Attacks and potential injuries can usually be avoided by giving wild turkeys a respectful amount of space and keeping outdoor spaces clean and undisturbed. Turkeys that are habituated to seeing people, at places like parks or campgrounds, can become tame and will even feed from people's hands. Toms occasionally attack parked cars and reflective surfaces, thinking they see another turkey and must defend their territory. Range and population The Californian turkey (Meleagris californica) is an extinct species of turkey that lived in California during the Pleistocene and early Holocene. It became extinct about 10,000 years ago. The present Californian wild turkey population derives from wild birds re-introduced during the 1960s and 1970s from other areas by game officials. They proliferated after 2000 to become an everyday sight in the East Bay by 2015. At the beginning of the 20th century the range and numbers of wild turkeys had plummeted due to hunting and loss of habitat. When Europeans arrived in the New World, wild turkeys were found from the southeastern US to Mexico. Turkeys were first domesticated by native peoples in Mexico and brought back to Europe during colonization. European settlers brought domesticated turkeys to the northern portions of North America during the 17th century. Habitat loss and market hunting were major factors in the decline of wild populations for the next two centuries. Game managers estimate that the entire population of wild turkeys in the United States was as low as 30,000 by the late 1930s. By the 1940s, the species was almost totally extirpated from Canada and had become localized in pockets in the United States; in the north-east it was effectively restricted to the Appalachians, only as far north as central Pennsylvania. Early restoration attempts used hand-reared birds, a practice that failed miserably because the birds, many of which had imprinted on humans, were unable to survive in the wild. Game officials later made efforts to protect and encourage the breeding of the surviving wild population. They would wait for numbers to grow, catch surplus birds with projectile nets, move them to unoccupied territory, and repeat the cycle. Over time this included releases in western states where the bird was not native. There is evidence that the bird does well when near farmland, which provides grain and also berry-bearing shrubs at its edges. As wild turkey numbers rebounded, hunting became legal in 49 U.S. states (excluding Alaska). In 1973, the total U.S. population was estimated to be 1.3 million, and current estimates place the entire wild turkey population at 7 million individuals. Since the 1980s, "trap and transfer" projects have reintroduced wild turkeys to several provinces of Canada as well, sometimes from across the border in the United States. As of 2018 these projects appear very successful: wild turkeys have multiplied rapidly and flourished in places where Canadian scientists had not expected them to survive, often quite far north of their originally expected range. Attempts to introduce the wild turkey to Britain as a game bird in the 18th century were not successful.
George II is said to have had a flock of a few thousand in Richmond Park near London, but they were too easy for local poachers to destroy, and the fights with poachers became too dangerous for the gamekeepers. They were hunted with dogs and then shot out of trees where they took refuge. Several other populations, introduced or escaped, have survived for periods elsewhere in Britain and Ireland, but seem to have died out, perhaps from a combination of lack of winter feed and poaching. Small populations, probably descended from farm as well as wild stock, in the Czech Republic and Germany have been more successful, and there are wild populations of some size following introductions in Hawaii and New Zealand. Subspecies There are subtle differences in the coloration, habitat, and behavior of the different subspecies of wild turkeys. The six subspecies are: Eastern wild turkey (Meleagris gallopavo silvestris) (Vieillot, 1817) This was the turkey subspecies Europeans first encountered in the wild: it was known to the Puritans, the founders of Jamestown, the Dutch of New York, and the Acadians. Its range is one of the largest of all subspecies, covering the entire eastern half of the United States from Maine in the north to northern Florida and extending as far west as Minnesota, Illinois, and into Missouri. In Canada, its range extends into southeastern Manitoba, Ontario, southwestern Quebec (including Pontiac, Quebec and the lower half of the Western Quebec Seismic Zone), and the Maritime Provinces. They number from 5.1 to 5.3 million birds. They were first named 'forest turkey' in 1817, and can grow up to tall. The upper tail coverts are tipped with chestnut brown. Males can reach in weight. The eastern wild turkey is heavily hunted in the eastern USA and is the most hunted wild turkey subspecies. Osceola wild turkey or Florida wild turkey (M. g. osceola) (Scott, 1890) Most common in the Florida peninsula, they number from 80,000 to 100,000 birds. This bird is named for the famous Seminole leader Osceola, and was first described in 1890. It is smaller and darker than the eastern wild turkey. The wing feathers are very dark with smaller amounts of the white barring seen on other subspecies. Their overall body feathers are an iridescent green-purple color. They are often found in scrub patches of palmetto and occasionally near swamps, where amphibian prey is abundant. Osceola turkeys are the smallest subspecies, weighing . Rio Grande wild turkey (M. g. intermedia) (Sennett, 1879) The Rio Grande wild turkey ranges through Texas to Oklahoma, Kansas, New Mexico, Colorado, Oregon, and Utah, and was introduced to central and western California, as well as parts of a few northeastern states. It was also introduced to Hawaii in the late 1950s. Population estimates for this subspecies are around 1,000,000. This subspecies, native to the central plains states, was first described in 1879, and has relatively long legs, better adapted to a prairie habitat. Its body feathers often have a green-coppery sheen. The tips of the tail and lower back feathers are a buff-to-very light tan color. Its habitats are brush areas next to streams, rivers or mesquite, pine and scrub oak forests. The Rio Grande turkey is gregarious. Merriam's wild turkey (M. g.
merriami) (Nelson, 1900) The Merriam's wild turkey ranges through the Rocky Mountains and the neighboring prairies of Wyoming, Montana and South Dakota, as well as much of the high mesa country of New Mexico, Arizona, southern Utah and the Navajo Nation, numbering from 334,460 to 344,460 birds. The subspecies has also been introduced into Oregon. The initial releases of Merriam's turkeys in 1961 established a remnant population along the east slope of Mount Hood, and natural immigration of turkeys from Idaho has established Merriam's flocks along the eastern border of Oregon. Merriam's wild turkeys live in ponderosa pine and mountainous regions. The subspecies was named in 1900 in honor of Clinton Hart Merriam, the first chief of the U.S. Biological Survey. The tail and lower back feathers have white tips and purple and bronze reflections. Gould's wild turkey (M. g. mexicana) (Gould, 1856) Native from the central valleys to the northern mountains of Mexico and the southernmost parts of Arizona and New Mexico, Gould's wild turkeys are heavily protected and regulated. The subspecies was first described in 1856. They exist in small numbers in the U.S. but are abundant in northwestern portions of Mexico. A small population has been established in southern Arizona. Gould's are the largest of the six subspecies, with longer legs, larger feet, and longer tail feathers. The main colors of the body feathers are copper and greenish-gold. This subspecies is heavily protected owing to its skittish nature and threatened status. South Mexican wild turkey (M. g. gallopavo) (Linnaeus, 1758) The south Mexican wild turkey is considered the nominate subspecies, and the only one that is not found in the United States or Canada. In central Mexico, archaeological M. gallopavo bones have been identified at sites dating to 800–100 BC. It is unclear whether these early specimens represent wild or domestic individuals, but domestic turkeys were likely established in central Mexico by the first half of the Classic Period (c. AD 200–1000). Late Preclassic (300 BC–AD 100) turkey remains identified at the archaeological site of El Mirador (Petén, Guatemala) represent the earliest evidence of the export of the south Mexican wild turkey (Meleagris gallopavo gallopavo) to the ancient Maya world. The south Mexican wild subspecies, M. g. gallopavo, was domesticated either in Mexico or by Preclassic peoples in Mesoamerica, giving rise to the domestic turkey (M. g. domesticus). The Spaniards brought this tamed subspecies back to Europe with them in the mid-16th century; from Spain it spread to France and later Britain as a farmyard animal, usually becoming the centerpiece of a feast for the well-to-do. By 1620 it was common enough that Pilgrim settlers of Massachusetts could bring turkeys with them from England, unaware that it had a larger close relative already occupying the forests of Massachusetts. It is one of the smallest subspecies and is best known in Spanish from its Aztec-derived name, . As of 2010, this wild turkey subspecies is thought to be critically endangered. Benjamin Franklin and the myth of U.S. national bird suggestion The idea that Benjamin Franklin preferred the turkey as the national bird of the United States comes from a letter he wrote to his daughter Sarah Bache on 26 January 1784.
The main subject of the letter is a criticism of the Society of the Cincinnati, which he likened to a chivalric order, something that contradicted the ideals of the newly founded American republic. In one section of the letter, Franklin remarked on the appearance of the bald eagle on the Society's crest: Franklin never publicly voiced opposition to the bald eagle as a national symbol, nor did he ever publicly suggest the turkey as a national symbol. Significance to Native Americans The wild turkey, throughout its range, plays a significant role in the cultures of many Native American tribes all over North America. It is a favorite food among eastern tribes. Eastern Native American tribes consumed both the eggs and meat, sometimes turning the latter into a type of jerky to preserve it and make it last through cold weather. They provided habitat for the birds by burning down portions of forests to create meadows, which would attract mating birds and thus give a clear shot to hunters. The feathers of turkeys also often made their way into the rituals and headgear of many tribes. Many leaders, such as Catawba chiefs, traditionally wore turkey feather headdresses. Notable members of several tribes, including the Muscogee Creek and Wampanoag, wore turkey feather cloaks. The turkey clan is one of the three Lenape clans. Movements of wild turkeys inspired the Caddo tribe's turkey dance. The Navajo people of northeastern Arizona, New Mexico and Utah relate the turkey to the corn and seeds which, in Navajo folklore, the Turkey brought from the Third Navajo World. It is one of the Navajos' sacred birds, and the Navajo people use its feathers and parts in multiple traditional ceremonies.
Biology and health sciences
Galliformes
Animals
297117
https://en.wikipedia.org/wiki/Hydrofluoric%20acid
Hydrofluoric acid
Hydrofluoric acid is a solution of hydrogen fluoride (HF) in water. Solutions of HF are colorless, acidic and highly corrosive. A common concentration is 49% (48–52%), but stronger solutions (e.g. 70%) are also produced, and pure HF has a boiling point near room temperature. Hydrofluoric acid is used to make most fluorine-containing compounds; examples include the widely used antidepressant medication fluoxetine (Prozac) and the material PTFE (Teflon). Elemental fluorine is produced from it. It is commonly used to etch glass and silicon wafers. Uses Production of organofluorine compounds The principal use of hydrofluoric acid is in organofluorine chemistry. Many organofluorine compounds are prepared using HF as the fluorine source, including Teflon, fluoropolymers, fluorocarbons, and refrigerants such as freon. Many pharmaceuticals contain fluorine. Production of inorganic fluorides Most high-volume inorganic fluoride compounds are prepared from hydrofluoric acid. Foremost are cryolite (Na3AlF6) and aluminium trifluoride (AlF3). A molten mixture of these solids serves as a high-temperature solvent for the production of metallic aluminium. Other inorganic fluorides prepared from hydrofluoric acid include sodium fluoride and uranium hexafluoride. Etchant, cleaner Hydrofluoric acid is used in the semiconductor industry as a major component of Wright etch and buffered oxide etch, which are used to clean silicon wafers. It is also used to etch glass by reacting with silicon dioxide to form gaseous or water-soluble silicon fluorides, and it can be used to polish and frost glass. SiO2 + 4 HF → SiF4(g) + 2 H2O SiO2 + 6 HF → H2SiF6 + 2 H2O A 5% to 9% hydrofluoric acid gel is also commonly used to etch all-ceramic dental restorations to improve bonding. For similar reasons, dilute hydrofluoric acid is a component of household rust stain removers, of "wheel cleaner" compounds used in car washes, of ceramic and fabric rust inhibitors, and of water spot removers. Because of its ability to dissolve iron oxides as well as silica-based contaminants, hydrofluoric acid is used in the pre-commissioning cleaning of boilers that produce high-pressure steam. Hydrofluoric acid is also useful for dissolving rock samples (usually powdered) prior to analysis. In a similar manner, this acid is used in acid macerations to extract organic fossils from silicate rocks. Fossiliferous rock may be immersed directly into the acid, or a cellulose nitrate film may be applied (dissolved in amyl acetate), which adheres to the organic component and allows the rock to be dissolved around it. Oil refining In a standard oil refinery process known as alkylation, isobutane is alkylated with low-molecular-weight alkenes (primarily a mixture of propylene and butylene) in the presence of an acid catalyst derived from hydrofluoric acid. The catalyst protonates the alkenes (propylene, butylene) to produce reactive carbocations, which alkylate isobutane. The reaction is carried out at mild temperatures (0–30 °C) in a two-phase system. Production Hydrofluoric acid was first prepared in 1771 by Carl Wilhelm Scheele. It is now mainly produced by treatment of the mineral fluorite, CaF2, with concentrated sulfuric acid at approximately 265 °C. CaF2 + H2SO4 → 2 HF + CaSO4 The acid is also a by-product of the production of phosphoric acid from apatite and fluorapatite. Digestion of the mineral with sulfuric acid at elevated temperatures releases a mixture of gases, including hydrogen fluoride, which may be recovered.
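The fluorite reaction above fixes how much mineral is consumed per unit of product. As a minimal stoichiometric sketch (assuming pure CaF2 and complete conversion, with molar masses taken from standard atomic weights):

# Stoichiometry of HF production: CaF2 + H2SO4 -> 2 HF + CaSO4
# One mole of fluorite yields two moles of hydrogen fluoride.
M_CAF2 = 40.08 + 2 * 19.00   # g/mol (Ca + 2 F)
M_HF = 1.008 + 19.00         # g/mol (H + F)

def fluorite_needed(kg_hf: float) -> float:
    """Kilograms of pure CaF2 consumed per kg_hf of HF, at 100% conversion."""
    mol_hf = kg_hf * 1000 / M_HF
    mol_caf2 = mol_hf / 2        # 2 HF produced per CaF2
    return mol_caf2 * M_CAF2 / 1000

print(f"{fluorite_needed(1.0):.2f} kg CaF2 per kg HF")  # ~1.95 kg

In practice the ore is not pure and conversion is incomplete, so real plants consume somewhat more fluorspar than this stoichiometric floor.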
Because of its high reactivity toward glass, hydrofluoric acid is stored in fluorinated plastic (often PTFE) containers. Properties In dilute aqueous solution hydrogen fluoride behaves as a weak acid. Infrared spectroscopy has been used to show that, in solution, dissociation is accompanied by formation of the ion pair H3O+·F−: H2O + HF ⇌ H3O+·F−, pKa = 3.17 This ion pair has been characterized in the crystalline state at very low temperature. Further association has been characterized both in solution and in the solid state: HF + F− ⇌ HF2−, log K = 0.6 It is assumed that polymerization occurs as the concentration increases. This assumption is supported by the isolation of a salt of a tetrameric anion and by low-temperature X-ray crystallography. The species that are present in concentrated aqueous solutions of hydrogen fluoride have not all been characterized; in addition to the bifluoride ion HF2−, which is known, the formation of other polymeric species is highly likely. Acidity Unlike other hydrohalic acids, such as hydrochloric acid, hydrogen fluoride is only a weak acid in dilute aqueous solution. This is in part a result of the strength of the hydrogen–fluorine bond, but also of other factors such as the tendency of HF, water, and fluoride anions to form clusters. At high concentrations, HF molecules undergo homoassociation to form polyatomic ions (such as bifluoride, HF2−) and protons, thus greatly increasing the acidity. This leads to protonation of very strong acids like hydrochloric, sulfuric, or nitric acids when using concentrated hydrofluoric acid solutions. Although hydrofluoric acid is regarded as a weak acid, it is very corrosive, even attacking glass when hydrated. Dilute solutions are weakly acidic, with an acid ionization constant far smaller than those of the corresponding solutions of the other hydrogen halides, which are strong acids. However, concentrated solutions of hydrogen fluoride are much more strongly acidic than this would imply, as shown by measurements of the Hammett acidity function H0 (or "effective pH"). The H0 of 100% liquid HF was first measured as −10.2 and later compiled as −11, comparable to values near −12 for pure sulfuric acid. In thermodynamic terms, HF solutions are highly non-ideal, with the activity of HF increasing much more rapidly than its concentration. The weak acidity in dilute solution is sometimes attributed to the high H–F bond strength, which combines with the high dissolution enthalpy of HF to outweigh the more negative enthalpy of hydration of the fluoride ion. Paul Giguère and Sylvia Turrell have shown by infrared spectroscopy that the predominant solute species in dilute solution is the hydrogen-bonded ion pair H3O+·F−: H2O + HF ⇌ H3O+·F− With increasing concentration of HF, the concentration of the hydrogen difluoride ion also increases. The reaction 3 HF ⇌ HF2− + H2F+ is an example of homoconjugation. Health and safety In addition to being a highly corrosive liquid, hydrofluoric acid is also a powerful contact poison. Since it can penetrate tissue, poisoning can occur readily through exposure of skin or eyes, inhalation, or ingestion. Symptoms of exposure to hydrofluoric acid may not be immediately evident, and this can provide false reassurance to victims, causing them to delay medical treatment. Despite its irritating vapor, HF may reach dangerous levels without an obvious odor.
It interferes with nerve function, meaning that burns may not initially be painful. Accidental exposures can go unnoticed, delaying treatment and increasing the extent and seriousness of the injury. Symptoms of HF exposure include irritation of the eyes, skin, nose, and throat, eye and skin burns, rhinitis, bronchitis, pulmonary edema (fluid buildup in the lungs), and bone damage due to HF interacting strongly with the calcium in bones. In concentrated form, HF can cause severe tissue destruction through lesions and mucous membrane damage, but dilute HF is still dangerous because of its high lipid affinity, which leads to the death of cells in nerves, blood vessels, tendons, bones, and other tissues. Hydrofluoric acid burns are treated with calcium gluconate gel. In popular culture In the episodes "Cat's in the Bag..." and "Box Cutter" of the crime drama television series Breaking Bad, Walter White and Jesse Pinkman use hydrofluoric acid to dissolve the bodies of gangsters.
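Returning to the dilute-solution behaviour described under Acidity: the following minimal numerical sketch solves the simple ionization equilibrium HF ⇌ H+ + F− using the pKa of 3.17 quoted above. It deliberately ignores the ion pairing, bifluoride, and polymeric equilibria that dominate at high concentration, so it is only indicative for dilute solutions.

import math

PKA = 3.17               # pKa of HF in dilute solution (quoted above)
KA = 10 ** -PKA

def hf_speciation(c_total: float) -> tuple[float, float]:
    """Return (pH, fraction dissociated) for c_total mol/L of HF.

    Solves Ka = x**2 / (c_total - x) for x = [H+] = [F-],
    neglecting water autoionization and activity corrections.
    """
    x = (-KA + math.sqrt(KA * KA + 4 * KA * c_total)) / 2
    return -math.log10(x), x / c_total

for c in (0.001, 0.01, 0.1, 1.0):
    ph, alpha = hf_speciation(c)
    print(f"{c:6.3f} M HF: pH = {ph:.2f}, {alpha:5.1%} dissociated")
# Dissociation falls from roughly half at 1 mM to a few percent at 1 M,
# the classic weak-acid pattern; real concentrated HF deviates strongly
# because of the homoconjugation equilibria noted in the article.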
Physical sciences
Inorganic compounds
null
297203
https://en.wikipedia.org/wiki/Blast%20furnace
Blast furnace
A blast furnace is a type of metallurgical furnace used for smelting to produce industrial metals, generally pig iron, but also others such as lead or copper. Blast refers to the combustion air being supplied above atmospheric pressure. In a blast furnace, fuel (coke), ores, and flux (limestone) are continuously supplied through the top of the furnace, while a hot blast of air (sometimes with oxygen enrichment) is blown into the lower section of the furnace through a series of pipes called tuyeres, so that the chemical reactions take place throughout the furnace as the material falls downward. The end products are usually molten metal and slag phases tapped from the bottom, and waste gases (flue gas) exiting from the top of the furnace. The downward flow of the ore along with the flux in contact with an upflow of hot, carbon monoxide-rich combustion gases is a countercurrent exchange and chemical reaction process. In contrast, air furnaces (such as reverberatory furnaces) are naturally aspirated, usually by the convection of hot gases in a chimney flue. According to this broad definition, bloomeries for iron, blowing houses for tin, and smelt mills for lead would be classified as blast furnaces. However, the term has usually been limited to those used for smelting iron ore to produce pig iron, an intermediate material used in the production of commercial iron and steel, and the shaft furnaces used in combination with sinter plants in base metals smelting. Blast furnaces are estimated to have been responsible for over 4% of global greenhouse gas emissions between 1900 and 2015, and are difficult to decarbonize. Process engineering and chemistry Blast furnaces operate on the principle of chemical reduction whereby carbon monoxide converts iron oxides to elemental iron. Blast furnaces differ from bloomeries and reverberatory furnaces in that in a blast furnace, flue gas is in direct contact with the ore and iron, allowing carbon monoxide to diffuse into the ore and reduce the iron oxide. The blast furnace operates as a countercurrent exchange process whereas a bloomery does not. Another difference is that bloomeries operate as a batch process whereas blast furnaces operate continuously for long periods. Continuous operation is also preferred because blast furnaces are difficult to start and stop. Also, the carbon in pig iron lowers the melting point below that of steel or pure iron; in contrast, iron does not melt in a bloomery. Silica has to be removed from the pig iron. It reacts with calcium oxide (burned limestone) and forms silicates, which float to the surface of the molten pig iron as slag. Historically, to prevent contamination from sulfur, the best quality iron was produced with charcoal. In a blast furnace, a downward-moving column of ore, flux, coke (or charcoal) and their reaction products must be sufficiently porous for the flue gas to pass through, upwards. To ensure this permeability the particle size of the coke or charcoal is of great relevance. Therefore, the coke must be strong enough so it will not be crushed by the weight of the material above it. Besides the physical strength of its particles, the coke must also be low in sulfur, phosphorus, and ash. 
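Before the reaction chain is detailed below, the overall stoichiometry already fixes a rough materials floor. As an illustrative sketch (assuming pure Fe2O3 ore, reduction entirely by CO, and neglecting the carbon dissolved into the pig iron or burned purely for heat), the minimum ore and carbon requirements per tonne of iron can be estimated as follows:

# Rough blast furnace materials balance from the overall reduction
#   Fe2O3 + 3 CO -> 2 Fe + 3 CO2
# where each CO carries one atom of carbon originating from the coke.
M_FE = 55.85                      # g/mol
M_C = 12.011                      # g/mol
M_FE2O3 = 2 * M_FE + 3 * 16.00    # g/mol

def per_tonne_iron() -> tuple[float, float]:
    """(kg Fe2O3, kg C) needed per tonne of iron, stoichiometric minimum."""
    mol_fe = 1_000_000 / M_FE                  # mol Fe in one tonne
    ore_kg = (mol_fe / 2) * M_FE2O3 / 1000     # 1 Fe2O3 per 2 Fe
    carbon_kg = (mol_fe * 3 / 2) * M_C / 1000  # 3 C (as CO) per 2 Fe
    return ore_kg, carbon_kg

ore, carbon = per_tonne_iron()
print(f"~{ore:.0f} kg Fe2O3 and at least {carbon:.0f} kg C per tonne of iron")
# Roughly 1430 kg of ore and 320 kg of carbon -- a lower bound, since in
# practice coke also supplies heat and carburizes the iron, so real coke
# rates are substantially higher.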
The main chemical reaction producing the molten iron is: Fe2O3 + 3 CO → 2 Fe + 3 CO2 This reaction can be divided into multiple steps, the first being that preheated air blown into the furnace reacts with the carbon in the form of coke to produce carbon monoxide and heat: 2 C(s) + O2(g) → 2 CO(g) Hot carbon monoxide is the reducing agent for the iron ore and reacts with the iron oxide to produce molten iron and carbon dioxide. Depending on the temperature in the different parts of the furnace (warmest at the bottom), the iron is reduced in several steps. At the top, where the temperature usually is in the range between 200 °C and 700 °C, the iron oxide is partially reduced to iron(II,III) oxide, Fe3O4. 3 Fe2O3(s) + CO(g) → 2 Fe3O4(s) + CO2(g) At temperatures around 850 °C, further down in the furnace, the iron(II,III) oxide is reduced further to iron(II) oxide: Fe3O4(s) + CO(g) → 3 FeO(s) + CO2(g) Hot carbon dioxide, unreacted carbon monoxide, and nitrogen from the air pass up through the furnace as fresh feed material travels down into the reaction zone. As the material travels downward, the counter-current gases both preheat the feed charge and decompose the limestone to calcium oxide and carbon dioxide: CaCO3(s) → CaO(s) + CO2(g) The calcium oxide formed by decomposition reacts with various acidic impurities in the iron (notably silica) to form a slag which is essentially calcium silicate, CaSiO3: SiO2 + CaO → CaSiO3 As the iron(II) oxide moves down to the area with higher temperatures, ranging up to 1200 °C, it is reduced further to iron metal: FeO(s) + CO(g) → Fe(s) + CO2(g) The carbon dioxide formed in this process is re-reduced to carbon monoxide by the coke: C(s) + CO2(g) → 2 CO(g) The temperature-dependent equilibrium controlling the gas atmosphere in the furnace is called the Boudouard reaction: 2 CO ⇌ CO2 + C The pig iron produced by the blast furnace has a relatively high carbon content of around 4–5% and usually contains too much sulphur, making it very brittle and of limited immediate commercial use. Some pig iron is used to make cast iron. The majority of pig iron produced by blast furnaces undergoes further processing to reduce the carbon and sulphur content and produce various grades of steel used for construction materials, automobiles, ships and machinery. Desulphurisation usually takes place during the transport of the liquid iron to the steelworks. This is done by adding calcium oxide, which reacts with the iron sulfide contained in the pig iron to form calcium sulfide (a process called lime desulfurization). In a further process step, so-called basic oxygen steelmaking, the carbon is oxidized by blowing oxygen onto the liquid pig iron to form crude steel. History Cast iron has been found in China dating to the 5th century BC, but the earliest extant blast furnaces in China date to the 1st century AD and in the West from the High Middle Ages. They spread from the region around Namur in Wallonia (Belgium) in the late 15th century, being introduced to England in 1491. The fuel used in these was invariably charcoal. The successful substitution of coke for charcoal is widely attributed to English inventor Abraham Darby in 1709. The efficiency of the process was further enhanced by the practice of preheating the combustion air (hot blast), patented by Scottish inventor James Beaumont Neilson in 1828. China Archaeological evidence shows that bloomeries appeared in China around 800 BC.
Originally it was thought that the Chinese started casting iron right from the beginning, but this theory has since been debunked by the discovery of 'more than ten' iron digging implements found in the tomb of Duke Jing of Qin (d. 537 BC), whose tomb is located in Fengxiang County, Shaanxi (a museum exists on the site today). There is, however, no evidence of the bloomery in China after the appearance of the blast furnace and cast iron. In China, blast furnaces produced cast iron, which was then either converted into finished implements in a cupola furnace or turned into wrought iron in a fining hearth. Although cast iron farm tools and weapons were widespread in China by the 5th century BC, and iron smelters employed workforces of over 200 men from the 3rd century onward, the earliest blast furnaces constructed are attributed to the Han dynasty in the 1st century AD. These early furnaces had clay walls and used phosphorus-containing minerals as a flux. Chinese blast furnaces ranged from around two to ten meters in height, depending on the region. The largest ones were found in modern Sichuan and Guangdong, while the 'dwarf' blast furnaces were found in Dabieshan. In construction, both are around the same level of technological sophistication. The effectiveness of the Chinese human- and horse-powered blast furnaces was enhanced during this period by the engineer Du Shi (c. AD 31), who applied the power of waterwheels to piston-bellows in forging cast iron. Early water-driven reciprocators for operating blast furnaces were built according to the structure of the horse-powered reciprocators that already existed. That is, the circular motion of the wheel, be it horse driven or water driven, was transferred by the combination of a belt drive, a crank-and-connecting-rod, other connecting rods, and various shafts into the reciprocal motion necessary to operate a push bellows. Donald Wagner suggests that early blast furnace and cast iron production evolved from furnaces used to melt bronze. Certainly, though, iron was essential to military success by the time the State of Qin had unified China (221 BC). Usage of the blast and cupola furnace remained widespread during the Song and Tang dynasties. By the 11th century, the Song dynasty Chinese iron industry made a switch of resources from charcoal to coke in casting iron and steel, sparing thousands of acres of woodland from felling. This may have happened as early as the 4th century AD. The primary advantage of the early blast furnace was its large scale of production, which made iron implements more readily available to peasants. Cast iron is more brittle than wrought iron or steel, which required additional fining and then cementation or co-fusion to produce, but for menial activities such as farming it sufficed. By using the blast furnace, it was possible to produce larger quantities of tools such as ploughshares more efficiently than with the bloomery. In areas where quality was important, such as warfare, wrought iron and steel were preferred. Nearly all Han period weapons are made of wrought iron or steel, with the exception of axe-heads, of which many are made of cast iron. Blast furnaces were also later used to produce gunpowder weapons such as cast iron bomb shells and cast iron cannons during the Song dynasty. Medieval Europe The simplest forge, known as the Corsican, was used prior to the advent of Christianity.
Examples of improved bloomeries are the Stuckofen, sometimes called the wolf-furnace, which remained in use until the beginning of the 19th century. Instead of relying on natural draught, air was pumped in by a trompe, resulting in better quality iron and an increased capacity. Pumping in unheated air this way is known as cold blast, and it increases the fuel efficiency of the bloomery and improves yield. Such furnaces can also be built bigger than natural-draught bloomeries. Oldest European blast furnaces The oldest known blast furnaces in the West were built in Durstel in Switzerland, the Märkische Sauerland in Germany, and at Lapphyttan in Sweden, where the complex was active between 1205 and 1300. At Noraskog in the Swedish parish of Järnboås, traces of even earlier blast furnaces have been found, possibly from around 1100. These early blast furnaces, like the Chinese examples, were very inefficient compared to those used today. The iron from the Lapphyttan complex was used to produce balls of wrought iron known as osmonds, and these were traded internationally – a possible reference occurs in a treaty with Novgorod from 1203, and several certain references occur in accounts of English customs from the 1250s and 1320s. Other furnaces of the 13th to 15th centuries have been identified in Westphalia. The technology required for blast furnaces may either have been transferred from China or have been an indigenous innovation. Al-Qazvini in the 13th century and other travellers subsequently noted an iron industry in the Alburz Mountains to the south of the Caspian Sea. This is close to the silk route, so the use of technology derived from China is conceivable. Much later descriptions record blast furnaces about three metres high. As the Varangian Rus' people from Scandinavia traded with the Caspian (using their Volga trade route), it is possible that the technology reached Sweden by this means. The Vikings are known to have used double bellows, which greatly increase the volumetric flow of the blast. The Caspian region may also have been the source for the design of the furnace at Ferriere, described by Filarete, involving a water-powered bellows at Semogo in Valdidentro in northern Italy in 1226. In a two-stage process the molten iron was tapped twice a day into water, thereby granulating it. Cistercian contributions The General Chapter of the Cistercian monks spread some technological advances across Europe. This may have included the blast furnace, as the Cistercians are known to have been skilled metallurgists. According to Jean Gimpel, their high level of industrial technology facilitated the diffusion of new techniques: "Every monastery had a model factory, often as large as the church and only several feet away, and waterpower drove the machinery of the various industries located on its floor." Iron ore deposits were often donated to the monks along with forges to extract the iron, and after a time surpluses were offered for sale. The Cistercians became the leading iron producers in Champagne, France, from the mid-13th century to the 17th century, also using the phosphate-rich slag from their furnaces as an agricultural fertilizer. Archaeologists are still discovering the extent of Cistercian technology. At Laskill, an outstation of Rievaulx Abbey and the only medieval blast furnace so far identified in Britain, the slag produced was low in iron content.
Slag from other furnaces of the time contained a substantial concentration of iron, whereas Laskill is believed to have produced cast iron quite efficiently. Its date is not yet clear, but it probably did not survive until Henry VIII's Dissolution of the Monasteries in the late 1530s, as an agreement concerning the "smythes" with the Earl of Rutland in 1541 (immediately after the Dissolution) refers to blooms. Nevertheless, the means by which the blast furnace spread in medieval Europe has not finally been determined. Origin and spread of early modern blast furnaces Due to the increased demand for iron for casting cannons, the blast furnace came into widespread use in France in the mid-15th century. The direct ancestor of those used in France and England was in the Namur region, in what is now Wallonia (Belgium). From there, they spread first to the Pays de Bray on the eastern boundary of Normandy and from there to the Weald of Sussex, where the first furnace (called Queenstock) in Buxted was built in about 1491, followed by one at Newbridge in Ashdown Forest in 1496. They remained few in number until about 1530, but many were built in the following decades in the Weald, where the iron industry perhaps reached its peak about 1590. Most of the pig iron from these furnaces was taken to finery forges for the production of bar iron. The first British furnaces outside the Weald appeared during the 1550s, and many were built in the remainder of that century and the following ones. The output of the industry probably peaked about 1620, and was followed by a slow decline until the early 18th century. This was apparently because it was more economical to import iron from Sweden and elsewhere than to make it in some more remote British locations. Charcoal that was economically available to the industry was probably being consumed as fast as the wood to make it grew. The first blast furnace in Russia opened in 1637 near Tula and was called the Gorodishche Works. The blast furnace spread from there to central Russia and then finally to the Urals. Coke blast furnaces In 1709, at Coalbrookdale in Shropshire, England, Abraham Darby began to fuel a blast furnace with coke instead of charcoal. Coke's initial advantage was its lower cost, mainly because making coke required much less labor than cutting trees and making charcoal, but using coke also overcame localized shortages of wood, especially in Britain and elsewhere in Europe. Metallurgical grade coke will bear heavier weight than charcoal, allowing larger furnaces. A disadvantage is that coke contains more impurities than charcoal, with sulfur being especially detrimental to the iron's quality. Coke's impurities were more of a problem before hot blast reduced the amount of coke required and before furnace temperatures were hot enough to make slag from limestone free-flowing. (Limestone ties up sulfur; manganese may also be added to tie up sulfur.) Coke iron was initially only used for foundry work, making pots and other cast iron goods. Foundry work was a minor branch of the industry, but Darby's son built a new furnace at nearby Horsehay, and began to supply the owners of finery forges with coke pig iron for the production of bar iron. Coke pig iron was by this time cheaper to produce than charcoal pig iron. The use of a coal-derived fuel in the iron industry was a key factor in the British Industrial Revolution. However, in many areas of the world charcoal remained cheaper and coke more expensive even after the Industrial Revolution:
for example, in the US charcoal-fueled iron production fell in share to about a half ca. 1850 but still continued to increase in absolute terms until ca. 1890, while in João Monlevade in the Brazilian Highlands charcoal-fired blast furnaces were built as late as the 1930s and only phased out in 2000. Darby's original blast furnace has been archaeologically excavated and can be seen in situ at Coalbrookdale, part of the Ironbridge Gorge Museums. Cast iron from the furnace was used to make girders for the world's first cast iron bridge in 1779. The Iron Bridge crosses the River Severn at Coalbrookdale and remains in use for pedestrians. Steam-powered blast The steam engine was applied to power blast air, overcoming a shortage of water power in areas where coal and iron ore were located. This was first done at Coalbrookdale, where a steam engine replaced a horse-powered pump in 1742. Such engines were used to pump water to a reservoir above the furnace. The first engine used to blow cylinders directly was supplied by Boulton and Watt to John Wilkinson's New Willey Furnace. It powered a cast iron blowing cylinder, which had been invented by his father Isaac Wilkinson. Isaac patented such cylinders in 1736, to replace the leather bellows, which wore out quickly, and was granted a second patent, also for blowing cylinders, in 1757. The steam engine and cast iron blowing cylinder led to a large increase in British iron production in the late 18th century. Hot blast Hot blast was the single most important advance in the fuel efficiency of the blast furnace and was one of the most important technologies developed during the Industrial Revolution. Hot blast was patented by James Beaumont Neilson at Wilsontown Ironworks in Scotland in 1828. Within a few years of its introduction, hot blast was developed to the point where fuel consumption was cut by one-third using coke or two-thirds using coal, while furnace capacity was also significantly increased. Within a few decades, the practice was to have a "stove" as large as the furnace next to it, into which the waste gas (containing CO) from the furnace was directed and burnt. The resultant heat was used to preheat the air blown into the furnace. Hot blast enabled the use of raw anthracite coal, which was difficult to light, in the blast furnace. Anthracite was first tried successfully by George Crane at Ynyscedwyn Ironworks in south Wales in 1837. It was taken up in America by the Lehigh Crane Iron Company at Catasauqua, Pennsylvania, in 1839. Anthracite use declined when very high capacity blast furnaces requiring coke were built in the 1870s. Modern applications of the blast furnace Iron blast furnaces The blast furnace remains an important part of modern iron production. Modern furnaces are highly efficient, including Cowper stoves to pre-heat the blast air and employing recovery systems to extract the heat from the hot gases exiting the furnace. Competition in industry drives higher production rates. The largest blast furnace in the world is in South Korea, with a volume around . It can produce around of iron per year. This is a great increase from the typical 18th-century furnaces, which averaged about per year. Variations of the blast furnace, such as the Swedish electric blast furnace, have been developed in countries which have no native coal resources. According to Global Energy Monitor, the blast furnace is likely to become obsolete to meet climate change objectives of reducing carbon dioxide emissions, but BHP disagrees.
An alternative process involving direct reduced iron (DRI) is likely to succeed it, but this also needs to use a blast furnace to melt the iron and remove the gangue (impurities) unless the ore is very high quality. Oxygen blast furnace The oxygen blast furnace (OBF) process, developed from the 1970s to the 1990s, has been extensively studied theoretically because of its promise of energy conservation and emission reduction. This type may be the most suitable for use with carbon capture and storage (CCS). The main blast furnace has three levels: the reduction zone, the slag formation zone, and the combustion zone. OBFs are usually combined with top gas recycling. The problem with this, besides significant oxygen expenditure, is the uneven distribution of gas recycled from the top of the furnace to the middle, where it collides with the hot gas from below. As of 2023, the technology is only practiced at the experimental level in Sweden, Japan and China. Blast furnaces in copper and lead smelting Blast furnaces are currently rarely used in copper smelting, but modern lead smelting blast furnaces are much shorter than iron blast furnaces and are rectangular in shape. Modern lead blast furnaces are constructed using water-cooled steel or copper jackets for the walls, and have no refractory linings in the side walls. The base of the furnace is a hearth of refractory material (bricks or castable refractory). Lead blast furnaces are often open-topped rather than having the charging bell used in iron blast furnaces. The blast furnace used at the Nyrstar Port Pirie lead smelter differs from most other lead blast furnaces in that it has a double row of tuyeres rather than the single row normally used. The lower shaft of the furnace has a chair shape, with the lower part of the shaft being narrower than the upper; the lower row of tuyeres is located in the narrow part of the shaft. This allows the upper part of the shaft to be wider than the standard. Zinc blast furnaces The blast furnaces used in the Imperial Smelting Process ("ISP") were developed from the standard lead blast furnace, but are fully sealed. This is because the zinc produced by these furnaces is recovered as metal from the vapor phase, and the presence of oxygen in the off-gas would result in the formation of zinc oxide. Blast furnaces used in the ISP have a more intense operation than standard lead blast furnaces, with higher air blast rates per m2 of hearth area and a higher coke consumption. Zinc production with the ISP is more expensive than with electrolytic zinc plants, so several smelters operating this technology have closed in recent years. However, ISP furnaces have the advantage of being able to treat zinc concentrates containing higher levels of lead than electrolytic zinc plants can. Manufacture of stone wool Stone wool or rock wool is a spun mineral fibre used as an insulation product and in hydroponics. It is manufactured in a blast furnace fed with diabase rock, which contains very low levels of metal oxides. The resultant slag is drawn off and spun to form the rock wool product. Very small amounts of metals, an unwanted by-product, are also produced. Modern iron process Modern furnaces are equipped with an array of supporting facilities to increase efficiency, such as ore storage yards where barges are unloaded. The raw materials are transferred to the stockhouse complex by ore bridges, or rail hoppers and ore transfer cars.
Rail-mounted scale cars or computer-controlled weight hoppers weigh out the various raw materials to yield the desired hot metal and slag chemistry. The raw materials are brought to the top of the blast furnace via a skip car powered by winches, or by conveyor belts. There are different ways in which the raw materials are charged into the blast furnace. Some blast furnaces use a "double bell" system, where two "bells" are used to control the entry of raw material into the blast furnace. The purpose of the two bells is to minimize the loss of hot gases in the blast furnace. First, the raw materials are emptied into the upper or small bell, which then opens to empty the charge into the large bell. The small bell then closes, to seal the blast furnace, while the large bell rotates to provide specific distribution of materials before dispensing the charge into the blast furnace (a minimal sketch of this sequence is given at the end of this section). A more recent design is to use a "bell-less" system. These systems use multiple hoppers to contain each raw material, which is then discharged into the blast furnace through valves. These valves are more accurate at controlling how much of each constituent is added, as compared to the skip or conveyor system, thereby increasing the efficiency of the furnace. Some of these bell-less systems also implement a discharge chute in the throat of the furnace (as with the Paul Wurth top) in order to precisely control where the charge is placed. The iron making blast furnace itself is built in the form of a tall structure, lined with refractory brick, and profiled to allow for expansion of the charged materials as they heat during their descent, and subsequent reduction in size as melting starts to occur. Coke, limestone flux, and iron ore (iron oxide) are charged into the top of the furnace in a precise filling order which helps control gas flow and the chemical reactions inside the furnace. Four "uptakes" allow the hot, dirty gas high in carbon monoxide content to exit the furnace throat, while "bleeder valves" protect the top of the furnace from sudden gas pressure surges. The coarse particles in the exhaust gas settle in the "dust catcher" and are dumped into a railroad car or truck for disposal, while the gas itself flows through a venturi scrubber and/or electrostatic precipitators and a gas cooler to reduce the temperature of the cleaned gas. The "casthouse" at the bottom half of the furnace contains the bustle pipe, water-cooled copper tuyeres and the equipment for casting the liquid iron and slag. Once a "taphole" is drilled through the refractory clay plug, liquid iron and slag flow down a trough through a "skimmer" opening, separating the iron and slag. Modern, larger blast furnaces may have as many as four tapholes and two casthouses. Once the pig iron and slag have been tapped, the taphole is again plugged with refractory clay. The tuyeres are used to implement a hot blast, which is used to increase the efficiency of the blast furnace. The hot blast is directed into the furnace through water-cooled copper nozzles called tuyeres near the base. The hot blast temperature can be from depending on the stove design and condition. The temperatures they deal with may be . Oil, tar, natural gas, powdered coal and oxygen can also be injected into the furnace at tuyere level to combine with the coke to release additional energy and increase the percentage of reducing gases present, which is necessary to increase productivity. 
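The double-bell charging sequence described above amounts to a simple interlock: the two bells are never open at the same time, so the furnace top stays sealed while each batch is charged. The following is a minimal illustrative sketch only; the class name, method names and batch list are invented for the example and do not correspond to any real furnace control system.

class DoubleBellTop:
    """Toy model of the double-bell charging interlock described in the text."""

    def __init__(self):
        self.small_bell_open = False
        self.large_bell_open = False

    def charge(self, batch):
        # Raw material is first emptied onto the closed small (upper) bell.
        assert not self.large_bell_open      # the throat below must already be sealed
        self.small_bell_open = True          # small bell opens: batch drops onto the large bell
        self.small_bell_open = False         # small bell closes again, resealing the throat
        self.large_bell_open = True          # large bell opens (after rotating to distribute material)
        self.large_bell_open = False         # large bell closes; hot gas loss is minimized throughout
        print(f"charged {batch} without ever opening both bells at once")

top = DoubleBellTop()
for batch in ("coke", "iron ore", "limestone flux"):
    top.charge(batch)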
The exhaust gasses of a blast furnace are generally cleaned in the dust collector – such as an inertial separator, a baghouse, or an electrostatic precipitator. Each type of dust collector has strengths and weaknesses – some collect fine particles, some coarse particles, some collect electrically charged particles. Effective exhaust clearing relies on multiple stages of treatment. Waste heat is usually collected from the exhaust gases, for example by the use of a Cowper stove, a variety of heat exchanger. Environmental impact Fossil fuel (coke, natural gas) use in blast furnaces is a source of greenhouse gas emissions and the blast furnace is the most emission intensive stage of the steel making process. Fuels and reductants such as plastic waste, biomass and hydrogen are being used by steelmakers as possible alternatives to fossil fuels, although cost and availability remain a challenge and deployment is limited. Electric arc furnaces (EAF) are cited as an alternative steel production path which avoids the use of blast furnaces, however, depending on the characteristics of the steel product required the two furnace types are not always interchangeable. Furthermore, EAFs utilize steel scrap as a feedstock but estimates suggest that there will not be enough scrap available to meet future steel demand. Using hydrogen gas as a reductant to produce DRI (so called H2-DRI) from iron ore, which is then used as a feedstock for an EAF provides a technologically feasible, low emission alternative to blast furnaces. The H2-DRI EAF production route is in a fledgling state, with just one plant in operation. A 2000 report from the International Energy Agency Greenhouse Gas Technical Collaboration Programme (IEAGHG) shows that 70% of the emissions from integrated steel plants arise directly from the blast furnace gas (BFG). By treating BFG with carbon capture technology prior to use for heat exchange and energy recovery within the plant, a portion of these emissions can be abated. The report estimates that chemical absorption would cost $35/t of , plus $8–20/t for transportation and storage. At the time, this would have increased steel production costs by 15–20%, presenting a barrier to decarbonisation for steelmakers which typically operate with margins of 8–10%. As of 2024 no blast furnaces have been equipped with carbon capture technology. ULCOS (Ultra Low CO2 Steelmaking) was a European programme exploring processes to reduce blast furnace emissions by at least 50%. Technologies identified include carbon capture and storage (CCS) and alternative energy sources and reductants such as hydrogen, electricity and biomass. Preserved historic blast furnaces Historically it was normal procedure for a decommissioned blast furnace to be demolished and either replaced with a newer, improved one, or to have the entire site demolished and treated for follow-up use of the area. In recent decades, several countries have realized the historic value of blast furnaces and have transformed them into museums. Examples can be found in the Czech Republic, France, Germany, Japan, Luxembourg, Poland, Romania, Mexico, Russia, Spain, United Kingdom, and United States. Gallery
Technology
Metallurgy
null
297350
https://en.wikipedia.org/wiki/Pharmacy
Pharmacy
Pharmacy is the science and practice of discovering, producing, preparing, dispensing, reviewing and monitoring medications, aiming to ensure the safe, effective, and affordable use of medicines. It is a multidisciplinary science, as it links the health sciences with the pharmaceutical and natural sciences. The professional practice is becoming more clinically oriented, as most drugs are now manufactured by the pharmaceutical industry. Based on the setting, pharmacy practice is classified as either community or institutional pharmacy. Providing direct patient care in community or institutional pharmacies is considered clinical pharmacy. The scope of pharmacy practice includes more traditional roles such as compounding and dispensing of medications. It also includes more modern services related to health care, including clinical services, reviewing medications for safety and efficacy, and providing drug information with patient counselling. Pharmacists, therefore, are experts on drug therapy and are the primary health professionals who optimize the use of medication for the benefit of the patients. An establishment in which pharmacy (in the first sense) is practiced is called a pharmacy (this term is more common in the United States) or a chemist's (which is more common in Great Britain, though pharmacy is also used). In the United States and Canada, drugstores commonly sell medicines, as well as miscellaneous items such as confectionery, cosmetics, office supplies, toys, hair care products and magazines, and occasionally refreshments and groceries. In its investigation of herbal and chemical ingredients, the work of the apothecary may be regarded as a precursor of the modern sciences of chemistry and pharmacology, prior to the formulation of the scientific method. Disciplines The field of pharmacy can generally be divided into various disciplines: Pharmaceutics and Computational Pharmaceutics Pharmacokinetics and Pharmacodynamics Medicinal Chemistry and Pharmacognosy Pharmacology Pharmacy Practice Pharmacoinformatics Pharmacogenomics The boundaries between these disciplines and with other sciences, such as biochemistry, are not always clear-cut. Often, collaborative teams from various disciplines (pharmacists and other scientists) work together toward the introduction of new therapeutics and methods for patient care. However, pharmacy is not a basic or biomedical science in its typical form. Medicinal chemistry is also a distinct branch of synthetic chemistry combining pharmacology, organic chemistry, and chemical biology. Pharmacology is sometimes considered the fourth discipline of pharmacy. Although pharmacology is essential to the study of pharmacy, it is not specific to pharmacy. Both disciplines are distinct. Those who wish to practice both pharmacy (patient-oriented) and pharmacology (a biomedical science requiring the scientific method) receive separate training and degrees unique to either discipline. Pharmacoinformatics is considered another new discipline, for systematic drug discovery and development with efficiency and safety. Pharmacogenomics is the study of genetically linked variants that affect patient clinical responses, allergies, and metabolism of drugs. Professionals The World Health Organization estimates that there are at least 2.6 million pharmacists and other pharmaceutical personnel worldwide. 
Pharmacists Pharmacists are healthcare professionals with specialized education and training who perform various roles to ensure optimal health outcomes for their patients through the quality use of medicines. Pharmacists may also be small business proprietors, owning the pharmacy in which they practice. Since pharmacists know about the mode of action of a particular drug, and its metabolism and physiological effects on the human body in great detail, they play an important role in the optimization of drug treatment for an individual. Pharmacists are represented internationally by the International Pharmaceutical Federation (FIP), an NGO linked with the World Health Organization (WHO). They are represented at the national level by professional organisations such as the Royal Pharmaceutical Society in the UK, Pharmaceutical Society of Australia (PSA), Canadian Pharmacists Association (CPhA), Indian Pharmacist Association (IPA), Pakistan Pharmacists Association (PPA), American Pharmacists Association (APhA), and the Malaysian Pharmaceutical Society (MPS). In some cases, the representative body is also the registering body, which is responsible for the regulation and ethics of the profession. In the United States, specializations in pharmacy practice recognized by the Board of Pharmacy Specialties include: cardiovascular, infectious disease, oncology, pharmacotherapy, nuclear, nutrition, and psychiatry. The Commission for Certification in Geriatric Pharmacy certifies pharmacists in geriatric pharmacy practice. The American Board of Applied Toxicology certifies pharmacists and other medical professionals in applied toxicology. Pharmacy support staff Pharmacy technicians Pharmacy technicians support the work of pharmacists and other health professionals by performing a variety of pharmacy-related functions, including dispensing prescription drugs and other medical devices to patients and instructing on their use. They may also perform administrative duties in pharmaceutical practice, such as reviewing prescription requests with medics' offices and insurance companies to ensure correct medications are provided and payment is received. Legislation requires the supervision of certain pharmacy technicians' activities by a pharmacist. The majority of pharmacy technicians work in community pharmacies. In hospital pharmacies, pharmacy technicians may be managed by other senior pharmacy technicians. In the UK, the role of the pharmacy technician in hospital pharmacy has grown, and responsibility has been passed on to them to manage the pharmacy department and specialized areas of pharmacy practice, allowing pharmacists time to specialize in their expert field as medication consultants, spending more time working with patients and in research. Pharmacy technicians are registered with the General Pharmaceutical Council (GPhC). The GPhC is the regulator of pharmacists, pharmacy technicians, and pharmacy premises. In the US, pharmacy technicians perform their duties under the supervision of pharmacists. Although they may perform, under supervision, most dispensing, compounding and other tasks, they are not generally allowed to perform the role of counseling patients on the proper use of their medications. Some states have a legally mandated pharmacist-to-pharmacy technician ratio. Dispensing assistants Dispensing assistants are commonly referred to as "dispensers" and in community pharmacies perform largely the same tasks as a pharmacy technician. 
They work under the supervision of pharmacists and are involved in preparing (dispensing and labelling) medicines for provision to patients. Healthcare assistants/medicines counter assistants In the UK, this group of staff can sell certain medicines (including pharmacy only and general sales list medicines) over the counter. They cannot prepare prescription-only medicines for supply to patients. History The earliest known compilation of medicinal substances was the Sushruta Samhita, an Indian Ayurvedic treatise attributed to Sushruta in the 6th century BC. However, the earliest text as preserved dates to the 3rd or 4th century AD. Many Sumerian (4th millennium BC – early 2nd millennium BC) cuneiform clay tablets record prescriptions for medicine. Ancient Egyptian pharmacological knowledge was recorded in various papyri such as the Ebers Papyrus of 1550 BC, and the Edwin Smith Papyrus of the 16th century BC. In Ancient Greece, Diocles of Carystus (4th century BC) was one of several men studying the medicinal properties of plants. He wrote several treatises on the topic. The Greek physician Pedanius Dioscorides is famous for writing a five-volume book in his native Greek Περί ύλης ιατρικής in the 1st century AD. The Latin translation (Concerning medical substances) was used as a basis for many medieval texts and was built upon by many middle eastern scientists during the Islamic Golden Age, themselves deriving their knowledge from earlier Greek Byzantine medicine. Pharmacy in China dates at least to the earliest known Chinese manual, the Shennong Bencao Jing (The Divine Farmer's Herb-Root Classic), dating back to the 1st century AD. It was compiled during the Han dynasty and was attributed to the mythical Shennong. Earlier literature included lists of prescriptions for specific ailments, exemplified by a manuscript "Recipes for 52 Ailments", found in the Mawangdui, sealed in 168 BC. In Japan, at the end of the Asuka period (538–710) and the early Nara period (710–794), the men who fulfilled roles similar to those of modern pharmacists were highly respected. The place of pharmacists in society was expressly defined in the Taihō Code (701) and re-stated in the Yōrō Code (718). Ranked positions in the pre-Heian Imperial court were established; and this organizational structure remained largely intact until the Meiji Restoration (1868). In this highly stable hierarchy, the pharmacists—and even pharmacist assistants—were assigned status superior to all others in health-related fields such as physicians and acupuncturists. In the Imperial household, the pharmacist was even ranked above the two personal physicians of the Emperor. There is a stone sign for a pharmacy shop with a tripod, a mortar, and a pestle opposite one for a doctor in the Arcadian Way in Ephesus near Kusadasi in Turkey. The current Ephesus dates back to 400 BC and was the site of the Temple of Artemis, one of the seven wonders of the world. In Baghdad the first pharmacies, or drug stores, were established in 754, under the Abbasid Caliphate during the Islamic Golden Age. By the 9th century, these pharmacies were state-regulated. The advances made in the Middle East in botany and chemistry led medicine in medieval Islam substantially to develop pharmacology. Muhammad ibn Zakarīya Rāzi (Rhazes) (865–915), for instance, acted to promote the medical uses of chemical compounds. Abu al-Qasim al-Zahrawi (Abulcasis) (936–1013) pioneered the preparation of medicines by sublimation and distillation. 
His Liber servitoris is of particular interest, as it provides the reader with recipes and explains how to prepare the "simples" from which were compounded the complex drugs then generally used. Sabur Ibn Sahl (d. 869) was, however, the first physician to record his findings in a pharmacopoeia, describing a large variety of drugs and remedies for ailments. Al-Biruni (973–1050) wrote one of the most valuable Islamic works on pharmacology, entitled Kitab al-Saydalah (The Book of Drugs), in which he detailed the properties of drugs and outlined the role of pharmacy and the functions and duties of the pharmacist. Avicenna, too, described no fewer than 700 preparations, their properties, modes of action, and their indications. He devoted, in fact, a whole volume to simple drugs in The Canon of Medicine. Of great impact were also the works by al-Maridini of Baghdad and Cairo, and Ibn al-Wafid (1008–1074), both of which were printed in Latin more than fifty times, appearing as De Medicinis universalibus et particularibus by 'Mesue' the younger, and the Medicamentis simplicibus by 'Abenguefit'. Peter of Abano (1250–1316) translated and added a supplement to the work of al-Maridini under the title De Veneris. Al-Muwaffaq's contributions in the field are also pioneering. Living in the 10th century, he wrote The foundations of the true properties of Remedies, amongst other things describing arsenious oxide, and being acquainted with silicic acid. He made a clear distinction between sodium carbonate and potassium carbonate, and drew attention to the poisonous nature of copper compounds, especially copper vitriol, and also lead compounds. He also described the distillation of sea-water for drinking. In Europe, pharmacy-like shops began to appear during the 12th century. In 1240, Emperor Frederic II issued a decree by which the physician's and the apothecary's professions were separated. There are pharmacies in Europe that have been in operation since medieval times. In Florence, Italy, the director of the museum in the former Santa Maria Novella pharmacy says that the pharmacy there dates back to 1221. In Trier (Germany), the Löwen-Apotheke has been in operation since 1241, making it the oldest pharmacy in Europe in continuous operation. In Dubrovnik (Croatia), a pharmacy that first opened in 1317 is located inside the Franciscan monastery: it is the second-oldest pharmacy in Europe that is still operating. In the Town Hall Square of Tallinn (Estonia), there is a pharmacy dating from at least 1422. The medieval Esteve Pharmacy, located in Llívia, a Catalan enclave close to Puigcerdà, is a museum: the building dates back to the 15th century and the museum keeps albarellos from the 16th and 17th centuries, old prescription books and antique drugs. Practice areas Pharmacists practice in a variety of areas including community pharmacies, infusion pharmacies, hospitals, clinics, insurance companies, medical communication companies, research facilities, pharmaceutical companies, extended care facilities, psychiatric hospitals, and regulatory agencies. Pharmacists themselves may have expertise in a medical specialty. Community pharmacy A pharmacy (also known as a chemist in Australia, New Zealand and the British Isles; or drugstore in North America; retail pharmacy in industry terminology; or apothecary, historically) is where most pharmacists practice the profession of pharmacy. It is the community pharmacy in which the dichotomy of the profession exists: health professionals who are also retailers. 
Community pharmacies usually consist of a retail storefront with a dispensary, where medications are stored and dispensed. According to Sharif Kaf al-Ghazal, the opening of the first drugstores is recorded by Muslim pharmacists in Baghdad in 754 AD. Hospital pharmacy Pharmacies within hospitals differ considerably from community pharmacies. Some pharmacists in hospital pharmacies may have more complex clinical medication management issues, and pharmacists in community pharmacies often have more complex business and customer relations issues. Because of the complexity of medications, including specific indications, effectiveness of treatment regimens, safety of medications (i.e., drug interactions) and patient compliance issues (in the hospital and at home), many pharmacists practicing in hospitals gain more education and training after pharmacy school through a pharmacy practice residency, sometimes followed by another residency in a specific area. Those pharmacists are often referred to as clinical pharmacists and they often specialize in various disciplines of pharmacy. For example, there are pharmacists who specialize in hematology/oncology, HIV/AIDS, infectious disease, critical care, emergency medicine, toxicology, nuclear pharmacy, pain management, psychiatry, anti-coagulation clinics, herbal medicine, neurology/epilepsy management, pediatrics, neonatology, and more. Hospital pharmacies can often be found within the premises of the hospital. Hospital pharmacies usually stock a larger range of medications, including more specialized medications, than would be feasible in the community setting. Most hospital medications are unit-dose, or a single dose of medicine. Hospital pharmacists and trained pharmacy technicians compound sterile products for patients, including total parenteral nutrition (TPN) and other medications given intravenously. That is a complex process that requires adequate training of personnel, quality assurance of products, and adequate facilities. Several hospital pharmacies have decided to outsource high-risk preparations and some other compounding functions to companies that specialize in compounding. The high cost of medications and drug-related technology, and the potential impact of medications and pharmacy services on patient-care outcomes and patient safety, require hospital pharmacies to perform at the highest level possible. Clinical pharmacy Pharmacists provide direct patient care services that optimize the use of medication and promote health, wellness, and disease prevention. Clinical pharmacists care for patients in all health care settings, but the clinical pharmacy movement initially began inside hospitals and clinics. Clinical pharmacists often collaborate with physicians and other healthcare professionals to improve pharmaceutical care. Clinical pharmacists are now an integral part of the interdisciplinary approach to patient care. They often participate in patient care rounds for drug product selection. In the UK, clinical pharmacists can also prescribe some medications for patients on the NHS or privately, after completing a non-medical prescribers' course to become an Independent Prescriber. The clinical pharmacist's role involves creating a comprehensive drug therapy plan for patient-specific problems, identifying goals of therapy, and reviewing all prescribed medications prior to dispensing and administration to the patient. 
The review process often involves an evaluation of the appropriateness of drug therapy (e.g., drug choice, dose, route, frequency, and duration of therapy) and its efficacy. Research shows that pharmacist led strategies reduce errors related to medication use. The pharmacist must also consider potential drug interactions, adverse drug reactions, and patient drug allergies while they design and initiate a drug therapy plan. Ambulatory care pharmacy Since the emergence of modern clinical pharmacy, ambulatory care pharmacy practice has emerged as a unique pharmacy practice setting. Ambulatory care pharmacy is based primarily on pharmacotherapy services that a pharmacist provides in a clinic. Pharmacists in this setting often do not dispense drugs, but rather see patients in-office visits to manage chronic disease states. In the U.S. federal health care system (including the VA, the Indian Health Service, and NIH) ambulatory care pharmacists are given full independent prescribing authority. In some states, such as North Carolina and New Mexico, these pharmacist clinicians are given collaborative prescriptive and diagnostic authority. In 2011 the board of Pharmaceutical Specialties approved ambulatory care pharmacy practice as a separate board certification. The official designation for pharmacists who pass the ambulatory care pharmacy specialty certification exam will be Board Certified Ambulatory Care Pharmacist and these pharmacists will carry the initials BCACP. Compounding pharmacy/industrial pharmacy Compounding involves preparing drugs in forms that are different from the generic prescription standard. This may include altering the strength, ingredients, or dosage form. Compounding is a way to create custom drugs for patients who may not be able to take the medication in its standard form, such as due to an allergy or difficulty swallowing. Compounding is necessary for these patients to still be able to properly get the prescriptions they need. One area of compounding is preparing drugs in new dosage forms. For example, if a drug manufacturer only provides a drug as a tablet, a compounding pharmacist might make a medicated lollipop that contains the drug. Patients who have difficulty swallowing the tablet may prefer to suck the medicated lollipop instead. Another form of compounding is by mixing different strengths (g, mg, mcg) of capsules or tablets to yield the desired amount of medication indicated by the physician, physician assistant, nurse practitioner, or clinical pharmacist practitioner. This form of compounding is found at community or hospital pharmacies or in-home administration therapy. Compounding pharmacies specialize in compounding, although many also dispense the same non-compounded drugs that patients can obtain from community pharmacies. Consultant pharmacy Consultant pharmacy practice focuses more on medication regimen review (i.e. "cognitive services") than on actual dispensing of drugs. Consultant pharmacists most typically work in nursing homes, but are increasingly branching into other institutions and non-institutional settings. Traditionally consultant pharmacists were usually independent business owners, though in the United States many now work for a large pharmacy management company such as Omnicare, Kindred Healthcare or PharMerica. 
This trend may be gradually reversing as consultant pharmacists begin to work directly with patients, primarily because many elderly people are now taking numerous medications but continue to live outside of institutional settings. Some community pharmacies employ consultant pharmacists and/or provide consulting services. The main principle of consultant pharmacy was developed by Hepler and Strand in 1990. Veterinary pharmacy Veterinary pharmacies, sometimes called animal pharmacies, may fall into the category of hospital pharmacy, retail pharmacy or mail-order pharmacy. Veterinary pharmacies stock different varieties and different strengths of medications to fulfill the pharmaceutical needs of animals. Because the needs of animals, as well as the regulations on veterinary medicine, are often very different from those related to people, in some jurisdictions veterinary pharmacy may be kept separate from regular pharmacies. Nuclear pharmacy Nuclear pharmacy focuses on preparing radioactive materials for diagnostic tests and for treating certain diseases. Nuclear pharmacists undergo additional training specific to handling radioactive materials, and unlike in community and hospital pharmacies, nuclear pharmacists typically do not interact directly with patients. Military pharmacy Military pharmacy is a different working environment from civilian practice, because military pharmacy technicians perform duties such as evaluating medication orders, preparing medication orders, and dispensing medications. This would be illegal in civilian pharmacies because these duties are required to be performed by a licensed registered pharmacist. In the US military, state laws that prevent technicians from counseling patients or doing the final medication check prior to dispensing to patients (rather than a pharmacist solely responsible for these duties) do not apply. Pharmacy informatics Pharmacy informatics is the combination of pharmacy practice science and applied information science. Pharmacy informaticists work in many practice areas of pharmacy; however, they may also work in information technology departments or for healthcare information technology vendor companies. As a practice area and specialist domain, pharmacy informatics is growing quickly to meet the needs of major national and international patient information projects and health system interoperability goals. Pharmacists in this area are trained to participate in medication management system development, deployment, and optimization. Specialty pharmacy Specialty pharmacies supply high-cost injectable, oral, infused, or inhaled medications that are used for chronic and complex disease states such as cancer, hepatitis, and rheumatoid arthritis. Unlike a traditional community pharmacy, where prescriptions for any common medication can be brought in and filled, specialty pharmacies carry novel medications that need to be properly stored, administered, carefully monitored, and clinically managed. In addition to supplying these drugs, specialty pharmacies also provide lab monitoring and adherence counseling, and assist patients with the cost-containment strategies needed to obtain their expensive specialty drugs. In the US, it is currently the fastest-growing sector of the pharmaceutical industry, with 19 of the 28 medications newly approved by the FDA in 2013 being specialty drugs. 
Due to the demand for clinicians who can properly manage these specific patient populations, the Specialty Pharmacy Certification Board has developed a new certification exam to certify specialty pharmacists. Along with the 100 questions computerized multiple-choice exam, pharmacists must also complete 3,000 hours of specialty pharmacy practice within the past three years as well as 30 hours of specialty pharmacist continuing education within the past two years. Pharmaceutical sciences The pharmaceutical sciences are a group of interdisciplinary areas of study concerned with the design, manufacturing, action, delivery, and classification of drugs. They apply knowledge from chemistry (inorganic, physical, biochemical and analytical), biology (anatomy, physiology, biochemistry, cell biology, and molecular biology), epidemiology, statistics, chemometrics, mathematics, physics, and chemical engineering. The pharmaceutical sciences are further subdivided into several specific specialties, with four main branches: Pharmacology: the study of the biochemical and physiological effects of drugs on human beings. Pharmacodynamics: the study of the cellular and molecular interactions of drugs with their receptors. Simply "What the drug does to the body" Pharmacokinetics: the study of the factors that control the concentration of drug at various sites in the body. Simply "What the body does to the drug" Pharmaceutical toxicology: the study of the harmful or toxic effects of drugs. Pharmacogenomics: the study of the inheritance of characteristic patterns of interaction between drugs and organisms. Pharmaceutical chemistry: the study of drug design to optimize pharmacokinetics and pharmacodynamics, and synthesis of new drug molecules (Medicinal Chemistry). Pharmaceutics: the study and design of drug formulation for optimum delivery, stability, pharmacokinetics, and patient acceptance. Pharmacognosy: the study of medicines derived from natural sources. As new discoveries advance and extend the pharmaceutical sciences, subspecialties continue to be added to this list. Importantly, as knowledge advances, boundaries between these specialty areas of pharmaceutical sciences are beginning to blur. Many fundamental concepts are common to all pharmaceutical sciences. These shared fundamental concepts further the understanding of their applicability to all aspects of pharmaceutical research and drug therapy. Pharmacocybernetics (also known as pharma-cybernetics, cybernetic pharmacy, and cyber pharmacy) is an emerging field that describes the science of supporting drugs and medications use through the application and evaluation of informatics and internet technologies, so as to improve the pharmaceutical care of patients. Society and culture Etymology The word pharmacy is derived from Old French farmacie "substance, such as a food or in the form of a medicine which has a laxative effect" from Medieval Latin pharmacia from Greek pharmakeia () "a medicine", which itself derives from pharmakon (), meaning "drug, poison, spell" (which is etymologically related to pharmakos). Separation of prescribing and dispensing Separation of prescribing and dispensing, also called dispensing separation, is a practice in medicine and pharmacy in which the physician who provides a medical prescription is independent from the pharmacist who provides the prescription drug. In the Western world there are centuries of tradition for separating pharmacists from physicians. 
In Asian countries, it is traditional for physicians to also provide drugs. In contemporary times, researchers and health policy analysts have more deeply considered these traditions and their effects. Advocates for separation and advocates for combining make similar claims for each of their conflicting perspectives, saying that separating or combining reduces conflicts of interest in the healthcare industry and unnecessary health care, and lowers costs, while the opposite arrangement causes those problems. Research in various places reports mixed outcomes in different circumstances. Environmental impacts In 2022 the Organisation for Economic Co-operation and Development proposed that pharmaceutical companies should be required to collect and destroy unused or expired medicines that they have put on the market, in order to reduce public health risks around the misuse of medicines obtained from waste bins, the development of antimicrobial-resistant bacteria from the discharge of antibiotics into environmental systems, and "economic losses" from wasted healthcare resources. Potentially harmful concentrations of pharmaceutical waste have been detected in more than a quarter of water samples taken from 258 rivers around the world. The OECD recommends that medicines should be collected separately from household waste and that "marketplaces and redistribution platforms for unused close-to-expiry-date medicines" should be set up. Such extended producer responsibility schemes are already running in France, Spain and Portugal. The future of pharmacy In the coming decades, pharmacists are expected to become more integral within the health care system. Rather than simply dispensing medication, pharmacists are increasingly expected to be compensated for their patient care skills. In particular, Medication Therapy Management (MTM) includes the clinical services that pharmacists can provide for their patients. Such services include a thorough analysis of all medication (prescription, non-prescription, and herbals) currently being taken by an individual. The result is a reconciliation of medication and patient education, resulting in increased patient health outcomes and decreased costs to the health care system. This shift has already commenced in some countries; for instance, pharmacists in Australia receive remuneration from the Australian Government for conducting comprehensive Home Medicines Reviews. In Canada, pharmacists in certain provinces have limited prescribing rights (as in Alberta and British Columbia) or are remunerated by their provincial government for expanded services such as medication reviews (MedsChecks in Ontario). In the United Kingdom, pharmacists who undertake additional training are obtaining prescribing rights as a result of this extended pharmacy education. They are also paid by the government for medicine use reviews. In Scotland, the pharmacist can write prescriptions for Scottish registered patients for their regular medications, for the majority of drugs except controlled drugs, when the patient is unable to see their doctor, as could happen if they are away from home or the doctor is unavailable. In the United States, pharmaceutical care or clinical pharmacy has had an evolving influence on the practice of pharmacy. Moreover, the Doctor of Pharmacy (Pharm. D.) degree is now required before entering practice, and some pharmacists now complete one or two years of residency or fellowship training following graduation. 
In addition, consultant pharmacists, who traditionally operated primarily in nursing homes, are now expanding into direct consultation with patients, under the banner of "senior care pharmacy". In addition to patient care, pharmacies will be a focal point for medication adherence initiatives. There is enough evidence to show that integrated pharmacy-based initiatives significantly impact adherence for chronic patients. For example, a study published by the NIH reports that "pharmacy based interventions improved patients' medication adherence rates by 2.1 percent and increased physicians' initiation rates by 38 percent, compared to the control group". Pharmacy journals List of pharmaceutical sciences journals Symbols The symbols most commonly associated with pharmacy are the mortar and pestle (North America) and the ℞ (medical prescription) character, which is often written as "Rx" in typed text; the green Greek cross in France, Argentina, the United Kingdom, Belgium, Ireland, Italy, Spain, and India; and the Bowl of Hygieia, often used on its own in the Netherlands but seen combined with other symbols elsewhere. Other common symbols include conical measures and (in the US) caduceuses in their logos. A red stylized letter A is used in Germany and Austria (from Apotheke, the German word for pharmacy, from the same Greek root as the English word "apothecary"). The show globe was used in the US until the early 20th century; the Gaper in the Netherlands is increasingly rare.
Biology and health sciences
Drugs and pharmacology
null
297382
https://en.wikipedia.org/wiki/Quantitative%20genetics
Quantitative genetics
Quantitative genetics is the study of quantitative traits, which are phenotypes that vary continuously—such as height or mass—as opposed to phenotypes and gene-products that are discretely identifiable—such as eye-colour, or the presence of a particular biochemical. Both quantitative genetics and population genetics use the frequencies of different alleles of a gene in breeding populations (gamodemes), and combine them with concepts from simple Mendelian inheritance to analyze inheritance patterns across generations and descendant lines. While population genetics can focus on particular genes and their subsequent metabolic products, quantitative genetics focuses more on the outward phenotypes, and makes only summaries of the underlying genetics. Due to the continuous distribution of phenotypic values, quantitative genetics must employ many other statistical methods (such as the effect size, the mean and the variance) to link phenotypes (attributes) to genotypes. Some phenotypes may be analyzed either as discrete categories or as continuous phenotypes, depending on the definition of cut-off points, or on the metric used to quantify them. Mendel himself had to discuss this matter in his famous paper, especially with respect to his peas' attribute tall/dwarf, which actually was derived by adding a cut-off point to "length of stem". Analysis of quantitative trait loci, or QTLs, is a more recent addition to quantitative genetics, linking it more directly to molecular genetics. Gene effects In diploid organisms, the average genotypic "value" (locus value) may be defined by the allele "effect" together with a dominance effect, and also by how genes interact with genes at other loci (epistasis). The founder of quantitative genetics, Sir Ronald Fisher, perceived much of this when he proposed the first mathematics of this branch of genetics. Being a statistician, he defined the gene effects as deviations from a central value, enabling the use of statistical concepts such as the mean and variance, which rely on this idea. The central value he chose for the gene was the midpoint between the two opposing homozygotes at the one locus. The deviation from there to the "greater" homozygous genotype can be named "+a"; and therefore it is "-a" from that same midpoint to the "lesser" homozygote genotype. This is the "allele" effect mentioned above. The heterozygote deviation from the same midpoint can be named "d", this being the "dominance" effect referred to above. The diagram depicts the idea. However, in reality we measure phenotypes, and the figure also shows how observed phenotypes relate to the gene effects. Formal definitions of these effects recognize this phenotypic focus. Epistasis has been approached statistically as interaction (i.e., inconsistencies), but epigenetics suggests a new approach may be needed. If 0<d<a, the dominance is regarded as partial or incomplete, while d=a indicates full or classical dominance. Previously, d>a was known as "over-dominance". Mendel's pea attribute "length of stem" provides us with a good example. Mendel stated that the tall true-breeding parents ranged from 6 to 7 feet in stem length (183–213 cm), giving a median of 198 cm (= P1). The short parents ranged from 0.75 to 1.25 feet in stem length (23–46 cm), with a rounded median of 34 cm (= P2). Their hybrid ranged from 6 to 7.5 feet in length (183–229 cm), with a median of 206 cm (= F1). The mean of P1 and P2 is 116 cm, this being the phenotypic value of the homozygotes' midpoint (mp). The allele effect (a) is [P1 − mp] = 82 cm = −[P2 − mp]. 
The dominance effect (d) is [F1-mp] = 90 cm. This historical example illustrates clearly how phenotype values and gene effects are linked. Allele and genotype frequencies To obtain means, variances and other statistics, both quantities and their occurrences are required. The gene effects (above) provide the framework for quantities: and the frequencies of the contrasting alleles in the fertilization gamete-pool provide the information on occurrences. Commonly, the frequency of the allele causing "more" in the phenotype (including dominance) is given the symbol p, while the frequency of the contrasting allele is q. An initial assumption made when establishing the algebra was that the parental population was infinite and random mating, which was made simply to facilitate the derivation. The subsequent mathematical development also implied that the frequency distribution within the effective gamete-pool was uniform: there were no local perturbations where p and q varied. Looking at the diagrammatic analysis of sexual reproduction, this is the same as declaring that pP = pg = p; and similarly for q. This mating system, dependent upon these assumptions, became known as "panmixia". Panmixia rarely actually occurs in nature, as gamete distribution may be limited, for example by dispersal restrictions or by behaviour, or by chance sampling (those local perturbations mentioned above). It is well known that there is a huge wastage of gametes in Nature, which is why the diagram depicts a potential gamete-pool separately to the actual gamete-pool. Only the latter sets the definitive frequencies for the zygotes: this is the true "gamodeme" ("gamo" refers to the gametes, and "deme" derives from Greek for "population"). But, under Fisher's assumptions, the gamodeme can be effectively extended back to the potential gamete-pool, and even back to the parental base-population (the "source" population). The random sampling arising when small "actual" gamete-pools are sampled from a large "potential" gamete-pool is known as genetic drift, and is considered subsequently. While panmixia may not be widely extant, the potential for it does occur, although it may be only ephemeral because of those local perturbations. It has been shown, for example, that the F2 derived from random fertilization of F1 individuals (an allogamous F2), following hybridization, is an origin of a new potentially panmictic population. It has also been shown that if panmictic random fertilization occurred continually, it would maintain the same allele and genotype frequencies across each successive panmictic sexual generation—this being the Hardy Weinberg equilibrium. However, as soon as genetic drift was initiated by local random sampling of gametes, the equilibrium would cease. Random fertilization Male and female gametes within the actual fertilizing pool are considered usually to have the same frequencies for their corresponding alleles. (Exceptions have been considered.) This means that when p male gametes carrying the A allele randomly fertilize p female gametes carrying that same allele, the resulting zygote has genotype AA, and, under random fertilization, the combination occurs with a frequency of p x p (= p2). Similarly, the zygote aa occurs with a frequency of q2. Heterozygotes (Aa) can arise in two ways: when p male (A allele) randomly fertilize q female (a allele) gametes, and vice versa. The resulting frequency for the heterozygous zygotes is thus 2pq. 
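These definitions and frequencies are easy to verify numerically. The short sketch below (illustrative only) recomputes the pea gene effects from the medians quoted above and then tabulates the random-fertilization genotype frequencies for a few allele frequencies; the particular values of p chosen are arbitrary.

# Gene effects from Mendel's pea example (medians quoted in the text, in cm).
P1, P2, F1 = 198, 34, 206
mp = (P1 + P2) / 2          # homozygote midpoint = 116
a = P1 - mp                 # allele effect = +82 (and -82 to the "lesser" homozygote)
d = F1 - mp                 # dominance effect = 90; note d > a here, i.e. slight over-dominance
print(mp, a, d)             # 116.0 82.0 90.0

# Random-fertilization genotype frequencies are the quadratic expansion of the
# allele frequencies: AA = p*p, Aa = 2pq, aa = q*q, summing to 1.
for p in (0.25, 0.5, 0.75):
    q = 1 - p
    print(p, p * p, 2 * p * q, q * q, p * p + 2 * p * q + q * q)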
Notice that such a population is never more than half heterozygous, this maximum occurring when p = q = 0.5. In summary then, under random fertilization, the zygote (genotype) frequencies are the quadratic expansion of the gametic (allelic) frequencies: p2 + 2pq + q2 = 1. (The "= 1" states that the frequencies are in fraction form, not percentages; and that there are no omissions within the framework proposed.) Notice that "random fertilization" and "panmixia" are not synonyms. Mendel's research cross – a contrast Mendel's pea experiments were constructed by establishing true-breeding parents with "opposite" phenotypes for each attribute. This meant that each opposite parent was homozygous for its respective allele only. In our example, "tall vs dwarf", the tall parent would be genotype TT with p = 1 (and q = 0); while the dwarf parent would be genotype tt with q = 1 (and p = 0). After controlled crossing, their hybrid is Tt, with p = q = 0.5. However, the frequency of this heterozygote = 1, because this is the F1 of an artificial cross: it has not arisen through random fertilization. The F2 generation was produced by natural self-pollination of the F1 (with monitoring against insect contamination), resulting in p = q = 0.5 being maintained. Such an F2 is said to be "autogamous". However, the genotype frequencies (0.25 TT, 0.5 Tt, 0.25 tt) have arisen through a mating system very different from random fertilization, and therefore the use of the quadratic expansion has been avoided. The numerical values obtained were the same as those for random fertilization only because this is the special case of having originally crossed homozygous opposite parents. We can notice that, because of the dominance of T- [frequency (0.25 + 0.5)] over tt [frequency 0.25], the 3:1 ratio is still obtained. A cross such as Mendel's, where true-breeding (largely homozygous) opposite parents are crossed in a controlled way to produce an F1, is a special case of hybrid structure. The F1 is often regarded as "entirely heterozygous" for the gene under consideration. However, this is an over-simplification and does not apply generally—for example when individual parents are not homozygous, or when populations inter-hybridise to form hybrid swarms. The general properties of intra-species hybrids (F1) and F2 (both "autogamous" and "allogamous") are considered in a later section. Self fertilization – an alternative Having noticed that the pea is naturally self-pollinated, we cannot continue to use it as an example for illustrating random fertilization properties. Self-fertilization ("selfing") is a major alternative to random fertilization, especially within plants. Most of the Earth's cereals are naturally self-pollinated (rice, wheat, barley, for example), as well as the pulses. Considering the millions of individuals of each of these on Earth at any time, it is obvious that self-fertilization is at least as significant as random fertilization. Self-fertilization is the most intensive form of inbreeding, which arises whenever there is restricted independence in the genetical origins of gametes. Such reduction in independence arises if parents are already related, and/or from genetic drift or other spatial restrictions on gamete dispersal. Path analysis demonstrates that these are tantamount to the same thing. Arising from this background, the inbreeding coefficient (often symbolized as F or f) quantifies the effect of inbreeding from whatever cause. There are several formal definitions of f, and some of these are considered in later sections. 
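As a numerical check of the contrast just drawn, the sketch below compares the autogamous F2 of the text (obtained by selfing the Tt hybrid) with a panmictic population that happens to have p = q = 0.5. The two sets of genotype frequencies coincide only because the original cross was between homozygous opposite parents, and dominance of T then gives the familiar 3:1 phenotype ratio.

# Autogamous F2: Mendelian segregation from selfing Tt gives 1/4 TT : 1/2 Tt : 1/4 tt.
selfed_F2 = {"TT": 0.25, "Tt": 0.50, "tt": 0.25}

# Random fertilization with p = q = 0.5 gives the same numbers in this special case.
p = q = 0.5
panmictic = {"TT": p * p, "Tt": 2 * p * q, "tt": q * q}
print(selfed_F2 == panmictic)                         # True

tall = selfed_F2["TT"] + selfed_F2["Tt"]              # T- phenotype under dominance of T
print(tall, selfed_F2["tt"], tall / selfed_F2["tt"])  # 0.75 0.25 3.0 -> the 3:1 ratio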
For the present, note that for a long-term self-fertilized species f = 1. Natural self-fertilized populations are not single "pure lines", however, but mixtures of such lines. This becomes particularly obvious when considering more than one gene at a time. Therefore, allele frequencies (p and q) other than 1 or 0 are still relevant in these cases (refer back to the Mendel Cross section). The genotype frequencies take a different form, however. In general, the genotype frequencies become (p2 + fpq) for AA, 2pq(1 − f) for Aa, and (q2 + fpq) for aa. Notice that the frequency of the heterozygote declines in proportion to f. When f = 1, these three frequencies become respectively p, 0 and q. Conversely, when f = 0, they reduce to the random-fertilization quadratic expansion shown previously. Population mean The population mean shifts the central reference point from the homozygote midpoint (mp) to the mean of a sexually reproduced population. This is important not only to relocate the focus into the natural world, but also to use a measure of central tendency used by Statistics/Biometrics. In particular, the square of this mean is the Correction Factor, which is used to obtain the genotypic variances later. For each genotype in turn, its allele effect is multiplied by its genotype frequency; and the products are accumulated across all genotypes in the model. Some algebraic simplification usually follows to reach a succinct result. The mean after random fertilization The contribution of AA is p2(a), that of Aa is 2pq(d), and that of aa is q2(−a). Gathering together the two a terms and accumulating over all, the result is: G = a(p2 − q2) + 2pq d. Simplification is achieved by noting that (p2 − q2) = (p − q)(p + q), and by recalling that (p + q) = 1, thereby reducing the a term to a(p − q). The succinct result is therefore G = a(p − q) + 2pq d. This defines the population mean as an "offset" from the homozygote midpoint (recall a and d are defined as deviations from that midpoint). The Figure depicts G across all values of p for several values of d, including one case of slight over-dominance. Notice that G is often negative, thereby emphasizing that it is itself a deviation (from mp). Finally, to obtain the actual Population Mean in "phenotypic space", the midpoint value is added to this offset: P = G + mp. An example arises from data on ear length in maize. Assuming for now that one gene only is represented, a = 5.45 cm, d = 0.12 cm [virtually "0", really], mp = 12.05 cm. Further assuming that p = 0.6 and q = 0.4 in this example population, then: G = 5.45 (0.6 − 0.4) + (0.48)0.12 = 1.15 cm (rounded); and P = 1.15 + 12.05 = 13.20 cm (rounded). The mean after long-term self-fertilization The contribution of AA is p(a), while that of aa is q(−a). [See above for the frequencies.] Gathering these two a terms together leads to an immediately very simple final result: G(f=1) = a(p − q). As before, P(f=1) = G(f=1) + mp. Often, "G(f=1)" is abbreviated to "G1". Mendel's peas can provide us with the allele effects and midpoint (see previously); and a mixed self-pollinated population with p = 0.6 and q = 0.4 provides example frequencies. Thus: G(f=1) = 82 (0.6 − .04) = 59.6 cm (rounded); and P(f=1) = 59.6 + 116 = 175.6 cm (rounded). The mean – generalized fertilization A general formula incorporates the inbreeding coefficient f, and can then accommodate any situation. The procedure is exactly the same as before, using the weighted genotype frequencies given earlier. After translation into our symbols, and further rearrangement: Gf = a(p − q) + 2pq d (1 − f) = G0 − f 2pq d. Here, G0 is G, which was given earlier. (Often, when dealing with inbreeding, "G0" is preferred to "G".) 
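The maize ear-length example just given can be reproduced directly from these expressions. A small sketch follows; the rounding mirrors the text, and the f = 0.25 case anticipates the "holme" example of the next paragraph.

# Population mean as an offset from the homozygote midpoint, maize ear-length example.
a, d, mp = 5.45, 0.12, 12.05
p, q = 0.6, 0.4

G = a * (p - q) + 2 * p * q * d      # G = a(p - q) + 2pqd under random fertilization
P = G + mp                           # back to "phenotypic space"
print(round(G, 2), round(P, 2))      # 1.15 13.2, as quoted in the text

# Generalized mean with inbreeding, Gf = G0 - f(2pq)d, using the rounded G0 = 1.15
# just as the text does for its f = 0.25 example:
f = 0.25
Gf = 1.15 - f * 2 * p * q * d
print(round(Gf, 3))                  # 1.136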
Supposing that the maize example [given earlier] had been constrained on a holme (a narrow riparian meadow), and had partial inbreeding to the extent of f = 0.25, then, using the third version (above) of Gf: G0.25 = 1.15 − 0.25 (0.48) 0.12 = 1.136 cm (rounded), with P0.25 = 13.194 cm (rounded). There is hardly any effect from inbreeding in this example, which arises because there was virtually no dominance in this attribute (d → 0). Examination of all three versions of Gf reveals that this would lead to trivial change in the Population mean. Where dominance was notable, however, there would be considerable change. Genetic drift Genetic drift was introduced when discussing the likelihood of panmixia being widely extant as a natural fertilization pattern. [See section on Allele and Genotype frequencies.] Here the sampling of gametes from the potential gamodeme is discussed in more detail. The sampling involves random fertilization between pairs of random gametes, each of which may contain either an A or an a allele. The sampling is therefore binomial sampling. Each sampling "packet" involves 2N alleles, and produces N zygotes (a "progeny" or a "line") as a result. During the course of the reproductive period, this sampling is repeated over and over, so that the final result is a mixture of sample progenies. The result is dispersed random fertilization. These events, and the overall end-result, are examined here with an illustrative example. The "base" allele frequencies of the example are those of the potential gamodeme: the frequency of A is pg = 0.75, while the frequency of a is qg = 0.25. [White label "1" in the diagram.] Five example actual gamodemes are binomially sampled out of this base (s = the number of samples = 5), and each sample is designated with an "index" k: with k = 1 .... s sequentially. (These are the sampling "packets" referred to in the previous paragraph.) The number of gametes involved in fertilization varies from sample to sample, and is given as 2Nk [at white label "2" in the diagram]. The total (Σ) number of gametes sampled overall is 52 [white label "3" in the diagram]. Because each sample has its own size, weights are needed to obtain averages (and other statistics) when obtaining the overall results. These are wk = 2Nk / (Σ 2Nk), and are given at white label "4" in the diagram. The sample gamodemes – genetic drift Following completion of these five binomial sampling events, the resultant actual gamodemes each contained different allele frequencies (pk and qk). [These are given at white label "5" in the diagram.] This outcome is actually the genetic drift itself. Notice that two samples (k = 1 and 5) happen to have the same frequencies as the base (potential) gamodeme. Another (k = 3) happens to have the p and q "reversed". Sample (k = 2) happens to be an "extreme" case, with pk = 0.9 and qk = 0.1; while the remaining sample (k = 4) is "middle of the range" in its allele frequencies. All of these results have arisen only by "chance", through binomial sampling. Having occurred, however, they set in place all the downstream properties of the progenies. Because sampling involves chance, the probabilities of obtaining each of these samples become of interest. These binomial probabilities depend on the starting frequencies (pg and qg) and the sample size (2Nk). They are tedious to obtain, but are of considerable interest. [See white label "6" in the diagram.] 
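The binomial sampling of gametes is easy to reproduce in a few lines. In the sketch below the base frequency is that of the example (pg = 0.75), and the five sample sizes 2Nk = 10, 12, 10, 12 and 8 are inferred from the reciprocals 1/(2Nk) quoted later for these progenies (they sum to the stated 52 gametes); any particular run will of course drift differently, which is the point of the exercise.

import numpy as np

rng = np.random.default_rng(1)
p_g = 0.75                           # frequency of A in the potential gamodeme
sizes_2N = [10, 12, 10, 12, 8]       # gametes per sample "packet", totalling 52

# Each packet is a binomial draw of A alleles out of 2N gametes; the realised
# allele frequency p_k of sample k is simply (number of A) / (2N).
for k, n in enumerate(sizes_2N, start=1):
    a_count = rng.binomial(n, p_g)
    p_k = a_count / n
    print(f"sample k={k}: 2N={n}, p_k={p_k:.3f}, q_k={1 - p_k:.3f}")
# The scatter of the p_k around p_g is the genetic drift itself.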
The two samples (k = 1, 5), with the allele frequencies the same as in the potential gamodeme, had higher "chances" of occurring than the other samples. Their binomial probabilities did differ, however, because of their different sample sizes (2Nk). The "reversal" sample (k = 3) had a very low probability of occurring, confirming perhaps what might be expected. The "extreme" allele frequency gamodeme (k = 2) was not "rare", however; and the "middle of the range" sample (k = 4) was rare. These same probabilities apply also to the progeny of these fertilizations. Here, some summarizing can begin. The overall allele frequencies in the progenies bulk are supplied by weighted averages of the appropriate frequencies of the individual samples. That is: p• = Σk wk pk and q• = Σk wk qk. (Notice that k is replaced by • for the overall result—a common practice.) The results for the example are p• = 0.631 and q• = 0.369 [black label "5" in the diagram]. These values are quite different from the starting ones (pg and qg) [white label "1"]. The sample allele frequencies also have variance as well as an average. This has been obtained using the sum of squares (SS) method [see to the right of black label "5" in the diagram]. [Further discussion on this variance occurs in the section below on Extensive genetic drift.] The progeny lines – dispersion The genotype frequencies of the five sample progenies are obtained from the usual quadratic expansion of their respective allele frequencies (random fertilization). The results are given at the diagram's white label "7" for the homozygotes, and at white label "8" for the heterozygotes. Re-arrangement in this manner prepares the way for monitoring inbreeding levels. This can be done either by examining the level of total homozygosis [(p2k + q2k) = (1 − 2pkqk)], or by examining the level of heterozygosis (2pkqk), as they are complementary. Notice that samples k = 1, 3, 5 all had the same level of heterozygosis, despite one being the "mirror image" of the others with respect to allele frequencies. The "extreme" allele-frequency case (k = 2) had the most homozygosis (least heterozygosis) of any sample. The "middle of the range" case (k = 4) had the least homozygosity (most heterozygosity): the two were each equal at 0.50, in fact. The overall summary can continue by obtaining the weighted average of the respective genotype frequencies for the progeny bulk. Thus, for AA it is Σk wk p2k, for Aa it is Σk wk 2pkqk, and for aa it is Σk wk q2k. The example results are given at black label "7" for the homozygotes, and at black label "8" for the heterozygote. Note that the heterozygosity mean is 0.3588, which the next section uses to examine inbreeding resulting from this genetic drift. The next focus of interest is the dispersion itself, which refers to the "spreading apart" of the progenies' population means. These are obtained as Gk = a(pk − qk) + 2pkqk d [see section on the Population mean], for each sample progeny in turn, using the example gene effects given at white label "9" in the diagram. Then, each Pk = Gk + mp is obtained also [at white label "10" in the diagram]. Notice that the "best" line (k = 2) had the highest allele frequency for the "more" allele (A) (it also had the highest level of homozygosity). The worst progeny (k = 3) had the highest frequency for the "less" allele (a), which accounted for its poor performance. This "poor" line was less homozygous than the "best" line; and it shared the same level of homozygosity, in fact, as the two second-best lines (k = 1, 5). 
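The bulk figures quoted above can be reconstructed from the sample values. The sketch below uses the allele frequencies implied by the verbal descriptions of the five samples (k = 1 and 5 equal to the base at 0.75, k = 2 "extreme" at 0.9, k = 3 "reversed" at 0.25, k = 4 "middle of the range" at 0.5) together with the inferred sample sizes; these inputs are inferences from the text, since the diagram itself is not reproduced here.

# Weighted summary of the five example progenies.
p_k = [0.75, 0.90, 0.25, 0.50, 0.75]
n2N = [10, 12, 10, 12, 8]
w   = [n / sum(n2N) for n in n2N]                        # weights wk = 2Nk / 52

p_bulk = sum(wk * pk for wk, pk in zip(w, p_k))
AA = sum(wk * pk * pk for wk, pk in zip(w, p_k))
Aa = sum(wk * 2 * pk * (1 - pk) for wk, pk in zip(w, p_k))
aa = sum(wk * (1 - pk) ** 2 for wk, pk in zip(w, p_k))
print(round(p_bulk, 3), round(AA, 4), round(Aa, 4), round(aa, 4))
# 0.631 0.4513 0.3588 0.1898 -- matching the values quoted in the text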
The progeny line with both the "more" and the "less" alleles present in equal frequency (k = 4) had a mean below the overall average (see next paragraph), and had the lowest level of homozygosity. These results reveal that it is the alleles most prevalent in the "gene-pool" (also called the "germplasm") that determine performance, not the level of homozygosity per se. Binomial sampling alone brings about this dispersion. The overall summary can now be concluded by obtaining the weighted average of the progeny means, P•. The example result for P• is 36.94 (black label "10" in the diagram). This is used later to quantify the overall inbreeding depression arising from the gamete sampling. [See the next section.] However, recall that some "non-depressed" progeny means have been identified already (k = 1, 2, 5). This is an enigma of inbreeding—while there may be "depression" overall, there are usually superior lines among the gamodeme samplings.

The equivalent post-dispersion panmictic – inbreeding

Included in the overall summary were the average allele frequencies in the mixture of progeny lines (p• and q•). These can now be used to construct a hypothetical panmictic equivalent. This can be regarded as a "reference" against which to assess the changes wrought by the gamete sampling. Such a panmictic equivalent is appended at the right of the diagram in the example. The frequency of AA is therefore (p•)2 = 0.3979. This is less than that found in the dispersed bulk (0.4513 at black label "7"). Similarly, for aa, (q•)2 = 0.1303—again less than the equivalent in the progenies bulk (0.1898). Clearly, genetic drift has increased the overall level of homozygosis by the amount (0.6411 − 0.5342) = 0.1069. In a complementary approach, the heterozygosity could be used instead. The panmictic equivalent for Aa is 2 p• q• = 0.4658, which is higher than that in the sampled bulk (0.3588) [black label "8"]. The sampling has caused the heterozygosity to decrease by 0.1070, which differs trivially from the earlier estimate because of rounding errors.

The inbreeding coefficient (f) was introduced in the early section on Self Fertilization. Here, a formal definition of it is considered: f is the probability that two "same" alleles (that is, A and A, or a and a) that fertilize together are of common ancestral origin—or (more formally) f is the probability that two homologous alleles are autozygous. Consider any random gamete in the potential gamodeme that has its syngamy partner restricted by binomial sampling. The probability that that second gamete is homologous and autozygous to the first is 1/(2N), the reciprocal of the gamodeme size. For the five example progenies, these quantities are 0.1, 0.0833, 0.1, 0.0833 and 0.125 respectively, and their weighted average is 0.0961. This is the inbreeding coefficient of the example progenies bulk, provided it is unbiased with respect to the full binomial distribution. An example based upon s = 5 is likely to be biased, however, when compared to an appropriate entire binomial distribution based upon the sample number (s) approaching infinity (s → ∞). Another derived definition of f for the full distribution is that f also equals the rise in homozygosity, which equals the fall in heterozygosity. For the example, these frequency changes are 0.1069 and 0.1070, respectively. This result differs from the one above, indicating that bias with respect to the full underlying distribution is present in the example. For the example itself, these latter values are the better ones to use, namely f• = 0.10695.
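The weighted-average inbreeding coefficient quoted above, and its comparison with the rise in homozygosity, can be checked with a few lines of arithmetic; this sketch simply reworks the figures given in the text.

```python
# Sample sizes recovered from the quoted reciprocals 1/(2N_k) = 0.1, 0.0833, 0.1, 0.0833, 0.125.
two_N = [10, 12, 10, 12, 8]                       # 2N_k (sum = 52)
weights = [n / sum(two_N) for n in two_N]         # size weights, as before

delta_f = [1.0 / n for n in two_N]                # 1/(2N_k) for each progeny line
f_bulk = sum(w * d for w, d in zip(weights, delta_f))
print(round(f_bulk, 4))                           # 0.0962 here (the text rounds to 0.0961)

# Alternative definition for the full distribution: f = rise in homozygosity
#                                                     = fall in heterozygosity.
print(round(0.6411 - 0.5342, 4))                  # 0.1069 (homozygosis, from the text)
print(round(0.4658 - 0.3588, 4))                  # 0.1070 (heterozygosis, from the text)
```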
The population mean of the equivalent panmictic is found as [a (p•-q•) + 2 p•q• d] + mp. Using the example gene effects (white label "9" in the diagram), this mean is 37.87. The equivalent mean in the dispersed bulk is 36.94 (black label "10"), which is depressed by the amount 0.93. This is the inbreeding depression from this Genetic Drift. However, as noted previously, three progenies were not depressed (k = 1, 2, 5), and had means even greater than that of the panmictic equivalent. These are the lines a plant breeder looks for in a line selection programme. Extensive binomial sampling – is panmixia restored? If the number of binomial samples is large (s → ∞ ), then p• → pg and q• → qg. It might be queried whether panmixia would effectively re-appear under these circumstances. However, the sampling of allele frequencies has still occurred, with the result that σ2p, q ≠ 0. In fact, as s → ∞, the , which is the variance of the whole binomial distribution. Furthermore, the "Wahlund equations" show that the progeny-bulk homozygote frequencies can be obtained as the sums of their respective average values (p2• or q2•) plus σ2p, q. Likewise, the bulk heterozygote frequency is (2 p• q•) minus twice the σ2p, q. The variance arising from the binomial sampling is conspicuously present. Thus, even when s → ∞, the progeny-bulk genotype frequencies still reveal increased homozygosis, and decreased heterozygosis, there is still dispersion of progeny means, and still inbreeding and inbreeding depression. That is, panmixia is not re-attained once lost because of genetic drift (binomial sampling). However, a new potential panmixia can be initiated via an allogamous F2 following hybridization. Continued genetic drift – increased dispersion and inbreeding Previous discussion on genetic drift examined just one cycle (generation) of the process. When the sampling continues over successive generations, conspicuous changes occur in σ2p, q and f. Furthermore, another "index" is needed to keep track of "time": t = 1 .... y where y = the number of "years" (generations) considered. The methodology often is to add the current binomial increment (Δ = "de novo") to what has occurred previously. The entire Binomial Distribution is examined here. [There is no further benefit to be had from an abbreviated example.] Dispersion via σ2p,q Earlier this variance (σ 2p,q) was seen to be:- With the extension over time, this is also the result of the first cycle, and so is (for brevity). At cycle 2, this variance is generated yet again—this time becoming the de novo variance ()—and accumulates to what was present already—the "carry-over" variance. The second cycle variance () is the weighted sum of these two components, the weights being for the de novo and = for the"carry-over". Thus, The extension to generalize to any time t, after considerable simplification, becomes:- Because it was this variation in allele frequencies that caused the "spreading apart" of the progenies' means (dispersion), the change in σ2t over the generations indicates the change in the level of the dispersion. Dispersion via f The method for examining the inbreeding coefficient is similar to that used for σ 2p,q. The same weights as before are used respectively for de novo f ( Δ f ) [recall this is 1/(2N) ] and carry-over f. Therefore, , which is similar to Equation (1) in the previous sub-section. 
In general, after rearrangement, The graphs to the left show levels of inbreeding over twenty generations arising from genetic drift for various actual gamodeme sizes (2N). Still further rearrangements of this general equation reveal some interesting relationships. (A) After some simplification, . The left-hand side is the difference between the current and previous levels of inbreeding: the change in inbreeding (δft). Notice, that this change in inbreeding (δft) is equal to the de novo inbreeding (Δf) only for the first cycle—when ft-1 is zero. (B) An item of note is the (1-ft-1), which is an "index of non-inbreeding". It is known as the panmictic index. . (C) Further useful relationships emerge involving the panmictic index. . (D) A key link emerges between σ 2p,q and f. Firstly... Secondly, presuming that f0 = 0, the right-hand side of this equation reduces to the section within the brackets of Equation (2) at the end of the last sub-section. That is, if initially there is no inbreeding, ! Furthermore, if this then is rearranged, . That is, when initial inbreeding is zero, the two principal viewpoints of binomial gamete sampling (genetic drift) are directly inter-convertible. Selfing within random fertilization It is easy to overlook that random fertilization includes self-fertilization. Sewall Wright showed that a proportion 1/N of random fertilizations is actually self fertilization , with the remainder (N-1)/N being cross fertilization . Following path analysis and simplification, the new view random fertilization inbreeding was found to be: . Upon further rearrangement, the earlier results from the binomial sampling were confirmed, along with some new arrangements. Two of these were potentially very useful, namely: (A) ; and (B) . The recognition that selfing may intrinsically be a part of random fertilization leads to some issues about the use of the previous random fertilization 'inbreeding coefficient'. Clearly, then, it is inappropriate for any species incapable of self fertilization, which includes plants with self-incompatibility mechanisms, dioecious plants, and bisexual animals. The equation of Wright was modified later to provide a version of random fertilization that involved only cross fertilization with no self fertilization. The proportion 1/N formerly due to selfing now defined the carry-over gene-drift inbreeding arising from the previous cycle. The new version is: . The graphs to the right depict the differences between standard random fertilization RF, and random fertilization adjusted for "cross fertilization alone" CF. As can be seen, the issue is non-trivial for small gamodeme sample sizes. It now is necessary to note that not only is "panmixia" not a synonym for "random fertilization", but also that "random fertilization" is not a synonym for "cross fertilization". Homozygosity and heterozygosity In the sub-section on "The sample gamodemes – Genetic drift", a series of gamete samplings was followed, an outcome of which was an increase in homozygosity at the expense of heterozygosity. From this viewpoint, the rise in homozygosity was due to the gamete samplings. Levels of homozygosity can be viewed also according to whether homozygotes arose allozygously or autozygously. Recall that autozygous alleles have the same allelic origin, the likelihood (frequency) of which is the inbreeding coefficient (f) by definition. The proportion arising allozygously is therefore (1-f). 
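These recursions are easily iterated. A minimal sketch follows, assuming the standard form ft = Δf + (1 − Δf) f(t−1) with Δf = 1/(2N), which is consistent with relationships (A)–(C) above; it also reports the panmictic index (1 − ft).

```python
# Drift inbreeding over generations: f_t = Δf + (1 - Δf) f_{t-1}, with Δf = 1/(2N).
# When f_0 = 0 this also gives σ²_{p,q}(t) = p_g q_g f_t, as noted in (D) above.
def inbreeding_series(two_N, generations, f0=0.0):
    f, series = f0, []
    delta = 1.0 / two_N                     # de novo inbreeding each cycle
    for _ in range(generations):
        f = delta + (1.0 - delta) * f       # de novo plus carry-over
        series.append(f)
    return series

for two_N in (20, 50, 100):
    f20 = inbreeding_series(two_N, 20)[-1]
    print(f"2N = {two_N:>3}: f after 20 generations = {f20:.3f}, panmictic index = {1 - f20:.3f}")
```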
For the A-bearing gametes, which are present with a general frequency of p, the overall frequency of those that are autozygous is therefore (f p). Similarly, for a-bearing gametes, the autozygous frequency is (f q). These two viewpoints regarding genotype frequencies must be connected to establish consistency. Following firstly the auto/allo viewpoint, consider the allozygous component. This occurs with the frequency of (1-f), and the alleles unite according to the random fertilization quadratic expansion. Thus: Consider next the autozygous component. As these alleles are autozygous, they are effectively selfings, and produce either AA or aa genotypes, but no heterozygotes. They therefore produce "AA" homozygotes plus "aa" homozygotes. Adding these two components together results in: for the AA homozygote; for the aa homozygote; and for the Aa heterozygote. This is the same equation as that presented earlier in the section on "Self fertilization – an alternative". The reason for the decline in heterozygosity is made clear here. Heterozygotes can arise only from the allozygous component, and its frequency in the sample bulk is just (1-f): hence this must also be the factor controlling the frequency of the heterozygotes. Secondly, the sampling viewpoint is re-examined. Previously, it was noted that the decline in heterozygotes was . This decline is distributed equally towards each homozygote; and is added to their basic random fertilization expectations. Therefore, the genotype frequencies are: for the "AA" homozygote; for the "aa" homozygote; and for the heterozygote. Thirdly, the consistency between the two previous viewpoints needs establishing. It is apparent at once [from the corresponding equations above] that the heterozygote frequency is the same in both viewpoints. However, such a straightforward result is not immediately apparent for the homozygotes. Begin by considering the AA homozygote's final equation in the auto/allo paragraph above:- . Expand the brackets, and follow by re-gathering [within the resultant] the two new terms with the common-factor f in them. The result is: . Next, for the parenthesized " p20 ", a (1-q) is substituted for a p, the result becoming . Following that substitution, it is a straightforward matter of multiplying-out, simplifying and watching signs. The end result is , which is exactly the result for AA in the sampling paragraph. The two viewpoints are therefore consistent for the AA homozygote. In a like manner, the consistency of the aa viewpoints can also be shown. The two viewpoints are consistent for all classes of genotypes. Extended principles Other fertilization patterns In previous sections, dispersive random fertilization (genetic drift) has been considered comprehensively, and self-fertilization and hybridizing have been examined to varying degrees. The diagram to the left depicts the first two of these, along with another "spatially based" pattern: islands. This is a pattern of random fertilization featuring dispersed gamodemes, with the addition of "overlaps" in which non-dispersive random fertilization occurs. With the islands pattern, individual gamodeme sizes (2N) are observable, and overlaps (m) are minimal. This is one of Sewall Wright's array of possibilities. In addition to "spatially" based patterns of fertilization, there are others based on either "phenotypic" or "relationship" criteria. 
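Returning briefly to the consistency argument in the preceding subsection, the equality of the two viewpoints is easily checked numerically. This sketch uses the forms implied by that derivation (p2 + fpq and q2 + fpq for the homozygotes, 2pq(1 − f) for the heterozygote); the example values of p and f are taken from the drift example.

```python
# Auto/allo viewpoint: allozygous random fertilization (weight 1-f) plus autozygous
# "selfings" (weight f), versus the sampling viewpoint: random-fertilization
# expectations with the lost heterozygosity shared between the homozygotes.
def freqs_auto_allo(p, f):
    q = 1.0 - p
    return ((1 - f) * p * p + f * p,       # AA
            (1 - f) * 2 * p * q,           # Aa (arises only allozygously)
            (1 - f) * q * q + f * q)       # aa

def freqs_sampling(p, f):
    q = 1.0 - p
    return (p * p + f * p * q,             # AA
            2 * p * q * (1 - f),           # Aa
            q * q + f * p * q)             # aa

p, f = 0.631, 0.10695
print(freqs_auto_allo(p, f))
print(freqs_sampling(p, f))                # identical, to floating-point precision
```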
The phenotypic bases include assortative fertilization (between similar phenotypes) and disassortative fertilization (between opposite phenotypes). The relationship patterns include sib crossing, cousin crossing and backcrossing—and are considered in a separate section. Self fertilization may be considered both from a spatial or relationship point of view. "Islands" random fertilization The breeding population consists of s small dispersed random fertilization gamodemes of sample size ( k = 1 ... s ) with " overlaps " of proportion in which non-dispersive random fertilization occurs. The dispersive proportion is thus . The bulk population consists of weighted averages of sample sizes, allele and genotype frequencies and progeny means, as was done for genetic drift in an earlier section. However, each gamete sample size is reduced to allow for the overlaps, thus finding a effective for . For brevity, the argument is followed further with the subscripts omitted. Recall that is in general. [Here, and following, the 2N refers to the previously defined sample size, not to any "islands adjusted" version.] After simplification, Notice that when m = 0 this reduces to the previous Δ f. The reciprocal of this furnishes an estimate of the " effective for ", mentioned above. This Δf is also substituted into the previous inbreeding coefficient to obtain where t is the index over generations, as before. The effective overlap proportion can be obtained also, as The graphs to the right show the inbreeding for a gamodeme size of 2N = 50 for ordinary dispersed random fertilization (RF) (m=0), and for four overlap levels ( m = 0.0625, 0.125, 0.25, 0.5 ) of islands random fertilization. There has indeed been reduction in the inbreeding resulting from the non-dispersed random fertilization in the overlaps. It is particularly notable as m → 0.50. Sewall Wright suggested that this value should be the limit for the use of this approach. Allele shuffling – allele substitution The gene-model examines the heredity pathway from the point of view of "inputs" (alleles/gametes) and "outputs" (genotypes/zygotes), with fertilization being the "process" converting one to the other. An alternative viewpoint concentrates on the "process" itself, and considers the zygote genotypes as arising from allele shuffling. In particular, it regards the results as if one allele had "substituted" for the other during the shuffle, together with a residual that deviates from this view. This formed an integral part of Fisher's method, in addition to his use of frequencies and effects to generate his genetical statistics. A discursive derivation of the allele substitution alternative follows. Suppose that the usual random fertilization of gametes in a "base" gamodeme—consisting of p gametes (A) and q gametes (a)—is replaced by fertilization with a "flood" of gametes all containing a single allele (A or a, but not both). The zygotic results can be interpreted in terms of the "flood" allele having "substituted for" the alternative allele in the underlying "base" gamodeme. The diagram assists in following this viewpoint: the upper part pictures an A substitution, while the lower part shows an a substitution. (The diagram's "RF allele" is the allele in the "base" gamodeme.) Consider the upper part firstly. Because base A is present with a frequency of p, the substitute A fertilizes it with a frequency of p resulting in a zygote AA with an allele effect of a. Its contribution to the outcome, therefore, is the product . 
Similarly, when the substitute fertilizes base a (resulting in Aa with a frequency of q and heterozygote effect of d), the contribution is . The overall result of substitution by A is, therefore, . This is now oriented towards the population mean [see earlier section] by expressing it as a deviate from that mean : After some algebraic simplification, this becomes - the substitution effect of A. A parallel reasoning can be applied to the lower part of the diagram, taking care with the differences in frequencies and gene effects. The result is the substitution effect of a, which is The common factor inside the brackets is the average allele substitution effect, and is It can also be derived in a more direct way, but the result is the same. In subsequent sections, these substitution effects help define the gene-model genotypes as consisting of a partition predicted by these new effects (substitution expectations), and a residual (substitution deviations) between these expectations and the previous gene-model effects. The expectations are also called the breeding values and the deviations are also called dominance deviations. Ultimately, the variance arising from the substitution expectations becomes the so-called Additive genetic variance (σ2A) (also the Genic variance )— while that arising from the substitution deviations becomes the so-called Dominance variance (σ2D). It is noticeable that neither of these terms reflects the true meanings of these variances. The "genic variance" is less dubious than the additive genetic variance, and more in line with Fisher's own name for this partition. A less-misleading name for the dominance deviations variance is the "quasi-dominance variance" [see following sections for further discussion]. These latter terms are preferred herein. Gene effects redefined The gene-model effects (a, d and -a) are important soon in the derivation of the deviations from substitution, which were first discussed in the previous Allele Substitution section. However, they need to be redefined themselves before they become useful in that exercise. They firstly need to be re-centralized around the population mean (G), and secondly they need to be re-arranged as functions of β, the average allele substitution effect. Consider firstly the re-centralization. The re-centralized effect for AA is a• = a - G which, after simplification, becomes a• = 2q(a-pd). The similar effect for Aa is d• = d - G = a(q-p) + d(1-2pq), after simplification. Finally, the re-centralized effect for aa is (-a)• = -2p(a+qd). Secondly, consider the re-arrangement of these re-centralized effects as functions of β. Recalling from the "Allele Substitution" section that β = [a +(q-p)d], rearrangement gives a = [β -(q-p)d]. After substituting this for a in a• and simplifying, the final version becomes a•• = 2q(β-qd). Similarly, d• becomes d•• = β(q-p) + 2pqd; and (-a)• becomes (-a)•• = -2p(β+pd). Genotype substitution – expectations and deviations The zygote genotypes are the target of all this preparation. The homozygous genotype AA is a union of two substitution effects of A, one from each sex. Its substitution expectation is therefore βAA = 2βA = 2qβ (see previous sections). Similarly, the substitution expectation of Aa is βAa = βA + βa = (q-p)β; and for aa, βaa = 2βa = -2pβ. These substitution expectations of the genotypes are also called breeding values. Substitution deviations are the differences between these expectations and the gene effects after their two-stage redefinition in the previous section. 
Therefore, dAA = a•• - βAA = -2q2d after simplification. Similarly, dAa = d•• - βAa = 2pqd after simplification. Finally, daa = (-a)•• - βaa = -2p2d after simplification. Notice that all of these substitution deviations ultimately are functions of the gene-effect d—which accounts for the use of ["d" plus subscript] as their symbols. However, it is a serious non sequitur in logic to regard them as accounting for the dominance (heterozygosis) in the entire gene model : they are simply functions of "d" and not an audit of the "d" in the system. They are as derived: deviations from the substitution expectations! The "substitution expectations" ultimately give rise to the σ2A (the so-called "Additive" genetic variance); and the "substitution deviations" give rise to the σ2D (the so-called "Dominance" genetic variance). Be aware, however, that the average substitution effect (β) also contains "d" [see previous sections], indicating that dominance is also embedded within the "Additive" variance [see following sections on the Genotypic Variance for their derivations]. Remember also [see previous paragraph] that the "substitution deviations" do not account for the dominance in the system (being nothing more than deviations from the substitution expectations), but which happen to consist algebraically of functions of "d". More appropriate names for these respective variances might be σ2B (the "Breeding expectations" variance) and σ2δ (the "Breeding deviations" variance). However, as noted previously, "Genic" (σ 2A) and "Quasi-Dominance" (σ 2D), respectively, will be preferred herein. Genotypic variance There are two major approaches to defining and partitioning genotypic variance. One is based on the gene-model effects, while the other is based on the genotype substitution effects They are algebraically inter-convertible with each other. In this section, the basic random fertilization derivation is considered, with the effects of inbreeding and dispersion set aside. This is dealt with later to arrive at a more general solution. Until this mono-genic treatment is replaced by a multi-genic one, and until epistasis is resolved in the light of the findings of epigenetics, the Genotypic variance has only the components considered here. Gene-model approach – Mather Jinks Hayman It is convenient to follow the biometrical approach, which is based on correcting the unadjusted sum of squares (USS) by subtracting the correction factor (CF). Because all effects have been examined through frequencies, the USS can be obtained as the sum of the products of each genotype's frequency' and the square of its gene-effect. The CF in this case is the mean squared. The result is the SS, which, again because of the use of frequencies, is also immediately the variance. The , and the . The After partial simplification, The last line is in Mather's terminology. Here, σ2a is the homozygote or allelic variance, and σ2d is the heterozygote or dominance variance. The substitution deviations variance (σ2D) is also present. The (weighted_covariance)ad is abbreviated hereafter to " covad ". These components are plotted across all values of p in the accompanying figure. Notice that covad is negative for p > 0.5. Most of these components are affected by the change of central focus from homozygote mid-point (mp) to population mean (G), the latter being the basis of the Correction Factor. The covad and substitution deviation variances are simply artifacts of this shift. 
The allelic and dominance variances are genuine genetical partitions of the original gene-model, and are the only eu-genetical components. Even then, the algebraic formula for the allelic variance is affected by the presence of G: it is only the dominance variance (i.e. σ2d) which is unaffected by the shift from mp to G. These insights are commonly not appreciated. Further gathering of terms [in Mather format] leads to a form that is useful later in Diallel analysis, an experimental design for estimating these genetical statistics. If, following the last-given rearrangements, the first three terms are amalgamated together, rearranged further and simplified, the result is the variance of the Fisherian substitution expectation. That is: σ2A = 2pq [ a + (q − p)d ]2 = 2pq β2. Notice particularly that σ2A is not σ2a. The first is the substitution expectations variance, while the second is the allelic variance. Notice also that σ2D (the substitution-deviations variance) is not σ2d (the dominance variance), and recall that it is an artifact arising from the use of G for the Correction Factor. [See the remarks above on the artifacts of the shift from mp to G.] It will now be referred to as the "quasi-dominance" variance. Also note that σ2D < σ2d ("2pq" being always a fraction); and note that (1) σ2D = 2pq σ2d, and that (2) σ2d = σ2D / (2pq). That is: it is confirmed that σ2D does not quantify the dominance variance in the model. It is σ2d which does that. However, the dominance variance (σ2d) can be estimated readily from the σ2D if 2pq is available. From the figure, these results can be visualized as accumulating σ2a, σ2d and covad to obtain σ2A, while leaving the σ2D still separated. It is clear also in the figure that σ2D < σ2d, as expected from the equations. The overall result (in Fisher's format) is σ2G = σ2A + σ2D. The Fisherian components have just been derived, but their derivation via the substitution effects themselves is given also, in the next section.

Allele-substitution approach – Fisher

Reference to the several earlier sections on allele substitution reveals that the two ultimate effects are genotype substitution expectations and genotype substitution deviations. Notice that these are each already defined as deviations from the random fertilization population mean (G). For each genotype in turn, therefore, the product of the frequency and the square of the relevant effect is obtained, and these are accumulated to obtain directly a SS and σ2. Details follow. σ2A = p2 βAA2 + 2pq βAa2 + q2 βaa2, which simplifies to σ2A = 2pq β2—the Genic variance. σ2D = p2 dAA2 + 2pq dAa2 + q2 daa2, which simplifies to σ2D = (2pq)2 d2—the quasi-Dominance variance. Upon accumulating these results, σ2G = σ2A + σ2D. These components are visualized in the graphs to the right. The average allele substitution effect is graphed also, but the symbol is "α" (as is common in the citations) rather than "β" (as is used herein). Once again, however, refer to the earlier discussions about the true meanings and identities of these components. Fisher himself did not use these modern terms for his components. The substitution expectations variance he named the "genetic" variance; and the substitution deviations variance he regarded simply as the unnamed residual between the "genotypic" variance (his name for it) and his "genetic" variance. While considering origins of terms: Fisher also proposed the word "variance" for this measure of variability. See Fisher (1999), p. 311 and Fisher (1918). [The terminology and derivation used in this article are completely in accord with Fisher's own.]
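A short numerical check of this partition may be helpful (a sketch, not part of the original derivations). It builds the substitution expectations and deviations from p, a and d, confirms that each genotype's expectation plus deviation recovers its re-centred gene effect, and verifies the closed forms σ2A = 2pqβ2 and σ2D = (2pq)2d2.

```python
# Fisher-style partition of the genotypic variance from the substitution effects.
def genotypic_partition(p, a, d):
    q = 1.0 - p
    beta = a + (q - p) * d                                  # average substitution effect
    freqs  = (p * p, 2 * p * q, q * q)                      # AA, Aa, aa
    expect = (2 * q * beta, (q - p) * beta, -2 * p * beta)  # breeding values
    deviat = (-2 * q * q * d, 2 * p * q * d, -2 * p * p * d)  # substitution deviations
    recent = (2 * q * (beta - q * d),                       # re-centred gene effects
              beta * (q - p) + 2 * p * q * d,
              -2 * p * (beta + p * d))
    for e, dv, r in zip(expect, deviat, recent):
        assert abs(e + dv - r) < 1e-12                      # expectation + deviation = effect
    var_A = sum(f * e * e for f, e in zip(freqs, expect))
    var_D = sum(f * e * e for f, e in zip(freqs, deviat))
    assert abs(var_A - 2 * p * q * beta ** 2) < 1e-12       # genic variance
    assert abs(var_D - (2 * p * q) ** 2 * d ** 2) < 1e-12   # quasi-dominance variance
    return var_A, var_D, var_D / (2 * p * q)                # last item is the dominance variance σ²d

print(genotypic_partition(p=0.5, a=1.0, d=0.5))             # (0.5, 0.0625, 0.125)
```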
Mather's term for the expectations variance—"genic"—is obviously derived from Fisher's term, and avoids using "genetic" (which has become too generalized in usage to be of value in the present context). The origin is obscure of the modern misleading terms "additive" and "dominance" variances. Note that this allele-substitution approach defined the components separately, and then totaled them to obtain the final Genotypic variance. Conversely, the gene-model approach derived the whole situation (components and total) as one exercise. Bonuses arising from this were (a) the revelations about the real structure of σ2A, and (b) the real meanings and relative sizes of σ2d and σ2D (see previous sub-section). It is also apparent that a "Mather" analysis is more informative, and that a "Fisher" analysis can always be constructed from it. The opposite conversion is not possible, however, because information about covad would be missing. Dispersion and the genotypic variance In the section on genetic drift, and in other sections that discuss inbreeding, a major outcome from allele frequency sampling has been the dispersion of progeny means. This collection of means has its own average, and also has a variance: the amongst-line variance. (This is a variance of the attribute itself, not of allele frequencies.) As dispersion develops further over succeeding generations, this amongst-line variance would be expected to increase. Conversely, as homozygosity rises, the within-lines variance would be expected to decrease. The question arises therefore as to whether the total variance is changing—and, if so, in what direction. To date, these issues have been presented in terms of the genic (σ 2A ) and quasi-dominance (σ 2D ) variances rather than the gene-model components. This will be done herein as well. The crucial overview equation comes from Sewall Wright, and is the outline of the inbred genotypic variance based on a weighted average of its extremes, the weights being quadratic with respect to the inbreeding coefficient . This equation is: where is the inbreeding coefficient, is the genotypic variance at f=0, is the genotypic variance at f=1, is the population mean at f=0, and is the population mean at f=1. The component [in the equation above] outlines the reduction of variance within progeny lines. The component addresses the increase in variance amongst progeny lines. Lastly, the component is seen (in the next line) to address the quasi-dominance variance. These components can be expanded further thereby revealing additional insight. Thus:- Firstly, σ2G(0) [in the equation above] has been expanded to show its two sub-components [see section on "Genotypic variance"]. Next, the σ2G(1) has been converted to 4pqa2 , and is derived in a section following. The third component's substitution is the difference between the two "inbreeding extremes" of the population mean [see section on the "Population Mean"]. Summarising: the within-line components are and ; and the amongst-line components are and . Rearranging gives the following: The version in the last line is discussed further in a subsequent section. Similarly, Graphs to the left show these three genic variances, together with the three quasi-dominance variances, across all values of f, for p = 0.5 (at which the quasi-dominance variance is at a maximum). Graphs to the right show the Genotypic variance partitions (being the sums of the respective genic and quasi-dominance partitions) changing over ten generations with an example f = 0.10. 
Answering, firstly, the questions posed at the beginning about the total variances [the Σ in the graphs] : the genic variance rises linearly with the inbreeding coefficient, maximizing at twice its starting level. The quasi-dominance variance declines at the rate of (1 − f2 ) until it finishes at zero. At low levels of f, the decline is very gradual, but it accelerates with higher levels of f. Secondly, notice the other trends. It is probably intuitive that the within line variances decline to zero with continued inbreeding, and this is seen to be the case (both at the same linear rate (1-f) ). The amongst line variances both increase with inbreeding up to f = 0.5, the genic variance at the rate of 2f, and the quasi-dominance variance at the rate of (f − f2). At f > 0.5, however, the trends change. The amongst line genic variance continues its linear increase until it equals the total genic variance. But, the amongst line quasi-dominance variance now declines towards zero, because (f − f2) also declines with f > 0.5. Derivation of σ2G(1) Recall that when f=1, heterozygosity is zero, within-line variance is zero, and all genotypic variance is thus amongst-line variance and deplete of dominance variance. In other words, σ2G(1) is the variance amongst fully inbred line means. Recall further [from "The mean after self-fertilization" section] that such means (G1's, in fact) are G = a(p-q). Substituting (1-q) for the p, gives G1 = a (1 − 2q) = a − 2aq. Therefore, the σ2G(1) is the σ2(a-2aq) actually. Now, in general, the variance of a difference (x-y) is [ σ2x + σ2y − 2 covxy ]. Therefore, σ2G(1) = [ σ2a + σ22aq − 2 cov(a, 2aq) ] . But a (an allele effect) and q (an allele frequency) are independent—so this covariance is zero. Furthermore, a is a constant from one line to the next, so σ2a is also zero. Further, 2a is another constant (k), so the σ22aq is of the type σ2k X. In general, the variance σ2k X is equal to k2 σ2X . Putting all this together reveals that σ2(a-2aq) = (2a)2 σ2q . Recall [from the section on "Continued genetic drift"] that σ2q = pq f . With f=1 here within this present derivation, this becomes pq 1 (that is pq), and this is substituted into the previous. The final result is: σ2G(1) = σ2(a-2aq) = 4a2 pq = 2(2pq a2) = 2 σ2a . It follows immediately that f σ2G(1) = f 2 σ2a . [This last f comes from the initial Sewall Wright equation : it is not the f just set to "1" in the derivation concluded two lines above.] Total dispersed genic variance – σ2A(f) and βf Previous sections found that the within line genic variance is based upon the substitution-derived genic variance ( σ2A )—but the amongst line genic variance is based upon the gene model allelic variance ( σ2a ). These two cannot simply be added to get total genic variance. One approach in avoiding this problem was to re-visit the derivation of the average allele substitution effect, and to construct a version, ( β f ), that incorporates the effects of the dispersion. Crow and Kimura achieved this using the re-centered allele effects (a•, d•, (-a)• ) discussed previously ["Gene effects re-defined"]. However, this was found subsequently to under-estimate slightly the total Genic variance, and a new variance-based derivation led to a refined version. The refined version is: β f = { a2 + [(1−f ) / (1 + f )] 2(q − p ) ad + [(1-f ) / (1 + f )] (q − p )2 d2 } (1/2) Consequently, σ2A(f) = (1 + f ) 2pq βf 2 does now agree with [ (1-f) σ2A(0) + 2f σ2a(0) ] exactly. 
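This agreement is easy to confirm numerically. A minimal sketch follows, using the βf expression quoted above; the values of p, a and d are arbitrary illustrations.

```python
import math

# σ²A(f) written with β_f, versus the weighted form (1 - f) σ²A(0) + 2f σ²a(0).
def dispersed_genic_variance(p, a, d, f):
    q = 1.0 - p
    r = (1 - f) / (1 + f)
    beta_f = math.sqrt(a * a + r * 2 * (q - p) * a * d + r * (q - p) ** 2 * d * d)
    return (1 + f) * 2 * p * q * beta_f ** 2

def weighted_form(p, a, d, f):
    q = 1.0 - p
    var_A0 = 2 * p * q * (a + (q - p) * d) ** 2    # substitution-expectation variance at f = 0
    var_a0 = 2 * p * q * a * a                     # gene-model allelic variance at f = 0
    return (1 - f) * var_A0 + 2 * f * var_a0

for f in (0.0, 0.25, 0.5, 1.0):
    print(f, round(dispersed_genic_variance(0.4, 1.0, 0.75, f), 6),
             round(weighted_form(0.4, 1.0, 0.75, f), 6))
```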
Total and partitioned dispersed quasi-dominance variances The total genic variance is of intrinsic interest in its own right. But, prior to the refinements by Gordon, it had had another important use as well. There had been no extant estimators for the "dispersed" quasi-dominance. This had been estimated as the difference between Sewall Wright's inbred genotypic variance and the total "dispersed" genic variance [see the previous sub-section]. An anomaly appeared, however, because the total quasi-dominance variance appeared to increase early in inbreeding despite the decline in heterozygosity. The refinements in the previous sub-section corrected this anomaly. At the same time, a direct solution for the total quasi-dominance variance was obtained, thus avoiding the need for the "subtraction" method of previous times. Furthermore, direct solutions for the amongst-line and within-line partitions of the quasi-dominance variance were obtained also, for the first time. [These have been presented in the section "Dispersion and the genotypic variance".] Environmental variance The environmental variance is phenotypic variability, which cannot be ascribed to genetics. This sounds simple, but the experimental design needed to separate the two needs very careful planning. Even the "external" environment can be divided into spatial and temporal components ("Sites" and "Years"); or into partitions such as "litter" or "family", and "culture" or "history". These components are very dependent upon the actual experimental model used to do the research. Such issues are very important when doing the research itself, but in this article on quantitative genetics this overview may suffice. It is an appropriate place, however, for a summary: Phenotypic variance = genotypic variances + environmental variances + genotype-environment interaction + experimental "error" variance i.e., σ2P = σ2G + σ2E + σ2GE + σ2 or σ2P = σ2A + σ2D + σ2I + σ2E + σ2GE + σ2 after partitioning the genotypic variance (G) into component variances "genic" (A), "quasi-dominance" (D), and "epistatic" (I). The environmental variance will appear in other sections, such as "Heritability" and "Correlated attributes". Heritability and repeatability The heritability of a trait is the proportion of the total (phenotypic) variance (σ2 P) that is attributable to genetic variance, whether it be the full genotypic variance, or some component of it. It quantifies the degree to which phenotypic variability is due to genetics: but the precise meaning depends upon which genetical variance partition is used in the numerator of the proportion. Research estimates of heritability have standard errors, just as have all estimated statistics. Where the numerator variance is the whole Genotypic variance ( σ2G ), the heritability is known as the "broadsense" heritability (H2). It quantifies the degree to which variability in an attribute is determined by genetics as a whole. [See section on the Genotypic variance.] If only genic variance (σ2A) is used in the numerator, the heritability may be called "narrow sense" (h2). It quantifies the extent to which phenotypic variance is determined by Fisher's substitution expectations variance. Fisher proposed that this narrow-sense heritability might be appropriate in considering the results of natural selection, focusing as it does on change-ability, that is upon "adaptation". He proposed it with regard to quantifying Darwinian evolution. 
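As a small illustration of these ratios, the following sketch computes H2 and h2 from the variance partition given above; the numerical components are assumptions chosen only for the example.

```python
# Broad-sense (H²) and narrow-sense (h²) heritability from the variance partition
# σ²P = σ²A + σ²D + σ²I + σ²E + σ²GE + σ²(error).
def heritabilities(var_A, var_D, var_I=0.0, var_E=0.0, var_GE=0.0, var_err=0.0):
    var_G = var_A + var_D + var_I                 # genotypic variance
    var_P = var_G + var_E + var_GE + var_err      # phenotypic variance
    return var_G / var_P, var_A / var_P           # H², h²

H2, h2 = heritabilities(var_A=30.0, var_D=10.0, var_E=55.0, var_err=5.0)
print(round(H2, 2), round(h2, 2))                 # 0.4 and 0.3 for these illustrative values
```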
Recalling that the allelic variance (σ2a) and the dominance variance (σ2d) are eu-genetic components of the gene-model [see section on the Genotypic variance], and that σ2D (the substitution deviations or "quasi-dominance" variance) and covad are due to changing from the homozygote midpoint (mp) to the population mean (G), it can be seen that the real meanings of these heritabilities are obscure. Heritabilities based instead on the eu-genetic components (σ2a and σ2d) have unambiguous meaning. Narrow-sense heritability has also been used, in general, for predicting the results of artificial selection. In the latter case, however, the broad-sense heritability may be more appropriate, as the whole attribute is being altered: not just adaptive capacity. Generally, advance from selection is more rapid the higher the heritability. [See section on "Selection".] In animals, heritability of reproductive traits is typically low, while heritabilities of disease resistance and of production are moderately low to moderate, and heritability of body conformation is high.

Repeatability (r2) is the proportion of phenotypic variance attributable to differences among repeated measures (later records) of the same subject. It is used particularly for long-lived species. This value can only be determined for traits that manifest multiple times in the organism's lifetime, such as adult body mass, metabolic rate or litter size. Individual birth mass, for example, would not have a repeatability value, but it would have a heritability value. Generally, but not always, repeatability indicates the upper level of the heritability: r2 = (s2G + s2PE) / s2P, where s2PE is the phenotype-environment interaction variance. The above concept of repeatability is, however, problematic for traits that necessarily change greatly between measurements. For example, body mass increases greatly in many organisms between birth and adulthood. Nonetheless, within a given age range (or life-cycle stage), repeated measures could be done, and repeatability would be meaningful within that stage.

Relationship

From the heredity perspective, relations are individuals that inherited genes from one or more common ancestors. Therefore, their "relationship" can be quantified on the basis of the probability that they each have inherited a copy of an allele from the common ancestor. In earlier sections, the inbreeding coefficient has been defined as "the probability that two same alleles (A and A, or a and a) have a common origin"—or, more formally, "the probability that two homologous alleles are autozygous". Previously, the emphasis was on an individual's likelihood of having two such alleles, and the coefficient was framed accordingly. It is obvious, however, that this probability of autozygosity for an individual must also be the probability that each of its two parents had this autozygous allele. In this re-focused form, the probability is called the co-ancestry coefficient for the two individuals i and j (fij). In this form, it can be used to quantify the relationship between two individuals, and may also be known as the coefficient of kinship or the consanguinity coefficient.

Pedigree analysis

Pedigrees are diagrams of familial connections between individuals and their ancestors, and possibly between other members of the group that share genetical inheritance with them. They are relationship maps. A pedigree can be analyzed, therefore, to reveal coefficients of inbreeding and co-ancestry.
Such pedigrees actually are informal depictions of path diagrams as used in path analysis, which was invented by Sewall Wright when he formulated his studies on inbreeding. Using the adjacent diagram, the probability that individuals "B" and "C" have received autozygous alleles from ancestor "A" is 1/2 (one out of the two diploid alleles). This is the "de novo" inbreeding (ΔfPed) at this step. However, the other allele may have had "carry-over" autozygosity from previous generations, so the probability of this occurring is (de novo complement multiplied by the inbreeding of ancestor A ), that is (1 − ΔfPed ) fA = (1/2) fA . Therefore, the total probability of autozygosity in B and C, following the bi-furcation of the pedigree, is the sum of these two components, namely (1/2) + (1/2)fA = (1/2) (1+f A ) . This can be viewed as the probability that two random gametes from ancestor A carry autozygous alleles, and in that context is called the coefficient of parentage ( fAA ). It appears often in the following paragraphs. Following the "B" path, the probability that any autozygous allele is "passed on" to each successive parent is again (1/2) at each step (including the last one to the "target" X ). The overall probability of transfer down the "B path" is therefore (1/2)3 . The power that (1/2) is raised to can be viewed as "the number of intermediates in the path between A and X ", nB = 3 . Similarly, for the "C path", nC = 2 , and the "transfer probability" is (1/2)2 . The combined probability of autozygous transfer from A to X is therefore [ fAA (1/2)(nB) (1/2)(nC) ] . Recalling that fAA = (1/2) (1+f A ) , fX = fPQ = (1/2)(nB + nC + 1) (1 + fA ) . In this example, assuming that fA = 0, fX = 0.0156 (rounded) = fPQ , one measure of the "relatedness" between P and Q. In this section, powers of (1/2) were used to represent the "probability of autozygosity". Later, this same method will be used to represent the proportions of ancestral gene-pools which are inherited down a pedigree [the section on "Relatedness between relatives"]. Cross-multiplication rules In the following sections on sib-crossing and similar topics, a number of "averaging rules" are useful. These derive from path analysis. The rules show that any co-ancestry coefficient can be obtained as the average of cross-over co-ancestries between appropriate grand-parental and parental combinations. Thus, referring to the adjacent diagram, Cross-multiplier 1 is that fPQ = average of ( fAC, fAD, fBC, fBD ) = (1/4) [fAC + fAD + fBC + fBD ] = fY . In a similar fashion, cross-multiplier 2 states that fPC = (1/2) [ fAC + fBC ]—while cross-multiplier 3 states that fPD = (1/2) [ fAD + fBD ] . Returning to the first multiplier, it can now be seen also to be fPQ = (1/2) [ fPC + fPD ], which, after substituting multipliers 2 and 3, resumes its original form. In much of the following, the grand-parental generation is referred to as (t-2), the parent generation as (t-1), and the "target" generation as t. Full-sib crossing (FS) The diagram to the right shows that full sib crossing is a direct application of cross-Multiplier 1, with the slight modification that parents A and B repeat (in lieu of C and D) to indicate that individuals P1 and P2 have both of their parents in common—that is they are full siblings. Individual Y is the result of the crossing of two full siblings. Therefore, fY = fP1,P2 = (1/4) [ fAA + 2 fAB + fBB ] . 
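Before continuing with the sib-crossing derivations, the path-counting result above can be checked in a couple of lines (the values are those of the worked example).

```python
# f_X = (1/2)^(n_B + n_C + 1) * (1 + f_A), with the coefficient of parentage
# f_AA = (1/2)(1 + f_A) already folded in.
def pedigree_f(n_B, n_C, f_A=0.0):
    return 0.5 ** (n_B + n_C + 1) * (1.0 + f_A)

print(pedigree_f(n_B=3, n_C=2))    # 0.015625 (the text rounds this to 0.0156)
```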
Recall that fAA and fBB were defined earlier (in Pedigree analysis) as coefficients of parentage, equal to (1/2)[1+fA] and (1/2)[1+fB] respectively, in the present context. Recognize that, in this guise, the grandparents A and B represent generation (t-2). Thus, assuming that in any one generation all levels of inbreeding are the same, these two coefficients of parentage each represent (1/2) [1 + f(t-2)]. Now, examine fAB. Recall that this also is fP1 or fP2, and so represents their generation—f(t-1). Putting it all together, ft = (1/4) [ 2 fAA + 2 fAB ] = (1/4) [ 1 + f(t-2) + 2 f(t-1) ]. That is the inbreeding coefficient for full-sib crossing. The graph to the left shows the rate of this inbreeding over twenty repetitive generations. The "repetition" means that the progeny after cycle t become the crossing parents that generate cycle (t+1), and so on successively. The graphs also show the inbreeding for random fertilization (2N = 20) for comparison. Recall that this inbreeding coefficient for progeny Y is also the co-ancestry coefficient for its parents, and so is a measure of the relatedness of the two full siblings.

Half-sib crossing (HS)

Derivation of the half-sib crossing takes a slightly different path from that for full sibs. In the adjacent diagram, the two half-sibs at generation (t-1) have only one parent in common—parent "A" at generation (t-2). The cross-multiplier 1 is used again, giving fY = f(P1,P2) = (1/4) [ fAA + fAC + fBA + fBC ]. There is just one coefficient of parentage this time, but three co-ancestry coefficients at the (t-2) level (one of them—fBC—being a "dummy" and not representing an actual individual in the (t-1) generation). As before, the coefficient of parentage is (1/2)[1+fA], and the three co-ancestries each represent f(t-1). Recalling that fA represents f(t-2), the final gathering and simplifying of terms gives fY = ft = (1/8) [ 1 + f(t-2) + 6 f(t-1) ]. The graphs at left include this half-sib (HS) inbreeding over twenty successive generations. As before, this also quantifies the relatedness of the two half-sibs at generation (t-1) in its alternative form of f(P1, P2).

Self fertilization (SF)

A pedigree diagram for selfing is on the right. It is so straightforward it does not require any cross-multiplication rules. It employs just the basic juxtaposition of the inbreeding coefficient and its alternative, the co-ancestry coefficient, followed by recognizing that, in this case, the latter is also a coefficient of parentage. Thus, fY = f(P1, P1) = ft = (1/2) [ 1 + f(t-1) ]. This is the fastest rate of inbreeding of all types, as can be seen in the graphs above. The selfing curve is, in fact, a graph of the coefficient of parentage.

Cousins crossings

These are derived with methods similar to those for siblings. As before, the co-ancestry viewpoint of the inbreeding coefficient provides a measure of "relatedness" between the parents P1 and P2 in these cousin expressions. The pedigree for First Cousins (FC) is given to the right. The prime equation is fY = ft = fP1,P2 = (1/4) [ f1D + f12 + fCD + fC2 ]. After substitution with corresponding inbreeding coefficients, gathering of terms and simplifying, this becomes ft = (1/4) [ 3 f(t-1) + (1/4) [ 2 f(t-2) + f(t-3) + 1 ]], which is a version for iteration—useful for observing the general pattern, and for computer programming. A "final" version is ft = (1/16) [ 12 f(t-1) + 2 f(t-2) + f(t-3) + 1 ]. The Second Cousins (SC) pedigree is on the left.
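Before turning to the second-cousin derivation, the recursions assembled so far can be iterated directly; a minimal sketch, starting from f = 0 in every earlier generation.

```python
# Repeated selfing (SF), full-sib (FS), half-sib (HS) and first-cousin (FC) crossing,
# using the recursions quoted above.  f1, f2, f3 stand for f(t-1), f(t-2), f(t-3).
def iterate(update, generations=20):
    hist = [0.0, 0.0, 0.0]
    for _ in range(generations):
        hist.append(update(hist[-1], hist[-2], hist[-3]))
    return hist[3:]

series = {
    "SF": iterate(lambda f1, f2, f3: 0.5 * (1 + f1)),
    "FS": iterate(lambda f1, f2, f3: 0.25 * (1 + f2 + 2 * f1)),
    "HS": iterate(lambda f1, f2, f3: 0.125 * (1 + f2 + 6 * f1)),
    "FC": iterate(lambda f1, f2, f3: (12 * f1 + 2 * f2 + f3 + 1) / 16),
}
for name, s in series.items():
    print(name, [round(f, 3) for f in s[:4]], "... f20 =", round(s[-1], 3))
```

The full-sib series begins 0.250, 0.375, 0.500, …, and repeated selfing approaches f = 1 fastest, matching the curves described above.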
Parents in the pedigree not related to the common Ancestor are indicated by numerals instead of letters. Here, the prime equation is fY = ft = fP1,P2 = (1/4) [ f3F + f34 + fEF + fE4 ]. After working through the appropriate algebra, this becomes ft = (1/4) [ 3 f(t-1) + (1/4) [3 f(t-2) + (1/4) [2 f(t-3) + f(t-4) + 1 ]]] , which is the iteration version. A "final" version is ft = (1/64) [ 48 f(t-1) + 12 f(t-2) + 2 f(t-3) + f(t-4) + 1 ] . To visualize the pattern in full cousin equations, start the series with the full sib equation re-written in iteration form: ft = (1/4)[2 f(t-1) + f(t-2) + 1 ]. Notice that this is the "essential plan" of the last term in each of the cousin iterative forms: with the small difference that the generation indices increment by "1" at each cousin "level". Now, define the cousin level as k = 1 (for First cousins), = 2 (for Second cousins), = 3 (for Third cousins), etc., etc.; and = 0 (for Full Sibs, which are "zero level cousins"). The last term can be written now as: (1/4) [ 2 f(t-(1+k)) + f(t-(2+k)) + 1] . Stacked in front of this last term are one or more iteration increments in the form (1/4) [ 3 f(t-j) + ... , where j is the iteration index and takes values from 1 ... k over the successive iterations as needed. Putting all this together provides a general formula for all levels of full cousin possible, including Full Sibs. For kth level full cousins, f{k}t = Ιterj = 1k { (1/4) [ 3 f(t-j) + }j + (1/4) [ 2 f(t-(1+k)) + f(t-(2+k)) + 1] . At the commencement of iteration, all f(t-x) are set at "0", and each has its value substituted as it is calculated through the generations. The graphs to the right show the successive inbreeding for several levels of Full Cousins. For first half-cousins (FHC), the pedigree is to the left. Notice there is just one common ancestor (individual A). Also, as for second cousins, parents not related to the common ancestor are indicated by numerals. Here, the prime equation is fY = ft = fP1,P2 = (1/4) [ f3D + f34 + fCD + fC4 ]. After working through the appropriate algebra, this becomes ft = (1/4) [ 3 f(t-1) + (1/8) [6 f(t-2) + f(t-3) + 1 ]] , which is the iteration version. A "final" version is ft = (1/32) [ 24 f(t-1) + 6 f(t-2) + f(t-3) + 1 ] . The iteration algorithm is similar to that for full cousins, except that the last term is (1/8) [ 6 f(t-(1+k)) + f(t-(2+k)) + 1 ] . Notice that this last term is basically similar to the half sib equation, in parallel to the pattern for full cousins and full sibs. In other words, half sibs are "zero level" half cousins. There is a tendency to regard cousin crossing with a human-oriented point of view, possibly because of a wide interest in Genealogy. The use of pedigrees to derive the inbreeding perhaps reinforces this "Family History" view. However, such kinds of inter-crossing occur also in natural populations—especially those that are sedentary, or have a "breeding area" that they re-visit from season to season. The progeny-group of a harem with a dominant male, for example, may contain elements of sib-crossing, cousin crossing, and backcrossing, as well as genetic drift, especially of the "island" type. In addition to that, the occasional "outcross" adds an element of hybridization to the mix. It is not panmixia. Backcrossing (BC) Following the hybridizing between A and R, the F1 (individual B) is crossed back (BC1) to an original parent (R) to produce the BC1 generation (individual C). 
[It is usual to use the same label for the act of making the back-cross and for the generation produced by it. The act of back-crossing is here in italics. ] Parent R is the recurrent parent. Two successive backcrosses are depicted, with individual D being the BC2 generation. These generations have been given t indices also, as indicated. As before, fD = ft = fCR = (1/2) [ fRB + fRR ] , using cross-multiplier 2 previously given. The fRB just defined is the one that involves generation (t-1) with (t-2). However, there is another such fRB contained wholly within generation (t-2) as well, and it is this one that is used now: as the co-ancestry of the parents of individual C in generation (t-1). As such, it is also the inbreeding coefficient of C, and hence is f(t-1). The remaining fRR is the coefficient of parentage of the recurrent parent, and so is (1/2) [1 + fR ] . Putting all this together : ft = (1/2) [ (1/2) [ 1 + fR ] + f(t-1) ] = (1/4) [ 1 + fR + 2 f(t-1) ] . The graphs at right illustrate Backcross inbreeding over twenty backcrosses for three different levels of (fixed) inbreeding in the Recurrent parent. This routine is commonly used in Animal and Plant Breeding programmes. Often after making the hybrid (especially if individuals are short-lived), the recurrent parent needs separate "line breeding" for its maintenance as a future recurrent parent in the backcrossing. This maintenance may be through selfing, or through full-sib or half-sib crossing, or through restricted randomly fertilized populations, depending on the species' reproductive possibilities. Of course, this incremental rise in fR carries-over into the ft of the backcrossing. The result is a more gradual curve rising to the asymptotes than shown in the present graphs, because the fR is not at a fixed level from the outset. Contributions from ancestral genepools In the section on "Pedigree analysis", was used to represent probabilities of autozygous allele descent over n generations down branches of the pedigree. This formula arose because of the rules imposed by sexual reproduction: (i) two parents contributing virtually equal shares of autosomal genes, and (ii) successive dilution for each generation between the zygote and the "focus" level of parentage. These same rules apply also to any other viewpoint of descent in a two-sex reproductive system. One such is the proportion of any ancestral gene-pool (also known as 'germplasm') which is contained within any zygote's genotype. Therefore, the proportion of an ancestral genepool in a genotype is: where n = number of sexual generations between the zygote and the focus ancestor. For example, each parent defines a genepool contributing to its offspring; while each great-grandparent contributes to its great-grand-offspring. The zygote's total genepool (Γ) is, of course, the sum of the sexual contributions to its descent. Relationship through ancestral genepools Individuals descended from a common ancestral genepool obviously are related. This is not to say they are identical in their genes (alleles), because, at each level of ancestor, segregation and assortment will have occurred in producing gametes. But they will have originated from the same pool of alleles available for these meioses and subsequent fertilizations. [This idea was encountered firstly in the sections on pedigree analysis and relationships.] The genepool contributions [see section above] of their nearest common ancestral genepool(an ancestral node) can therefore be used to define their relationship. 
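Both the backcross recursion and the genepool rule from this section can be evaluated in a few lines; a sketch, with the recurrent parent's fR held fixed, as in the graphs described above.

```python
# Backcross inbreeding: f_t = (1/4)[1 + f_R + 2 f_{t-1}], recurrent parent fixed at f_R.
def backcross_series(f_R, generations=20):
    f, out = 0.0, []
    for _ in range(generations):
        f = 0.25 * (1 + f_R + 2 * f)
        out.append(f)
    return out

for f_R in (0.0, 0.5, 1.0):
    print("f_R =", f_R, "-> f after 20 backcrosses =", round(backcross_series(f_R)[-1], 4))

# Proportion of an ancestor's genepool carried by a zygote n sexual generations later.
def genepool_share(n):
    return 0.5 ** n

print(genepool_share(1), genepool_share(3))   # parent 0.5; great-grandparent 0.125
```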
This leads to an intuitive definition of relationship which conforms well with familiar notions of "relatedness" found in family-history; and permits comparisons of the "degree of relatedness" for complex patterns of relations arising from such genealogy. The only modifications necessary (for each individual in turn) are in Γ and are due to the shift to "shared common ancestry" rather than "individual total ancestry". For this, define Ρ (in lieu of Γ); m = number of ancestors-in-common at the node (i.e. m = 1 or 2 only); and an "individual index" k. Thus: where, as before, n = number of sexual generations between the individual and the ancestral node. An example is provided by two first full-cousins. Their nearest common ancestral node is their grandparents which gave rise to their two sibling parents, and they have both of these grandparents in common. [See earlier pedigree.] For this case, m=2 and n=2, so for each of them In this simple case, each cousin has numerically the same Ρ . A second example might be between two full cousins, but one (k=1) has three generations back to the ancestral node (n=3), and the other (k=2) only two (n=2) [i.e. a second and first cousin relationship]. For both, m=2 (they are full cousins). and Notice each cousin has a different Ρ k. GRC – genepool relationship coefficient In any pairwise relationship estimation, there is one Ρk for each individual: it remains to average them in order to combine them into a single "Relationship coefficient". Because each Ρ is a fraction of a total genepool, the appropriate average for them is the geometric mean This average is their Genepool Relationship Coefficient—the "GRC". For the first example (two full first-cousins), their GRC = 0.5; for the second case (a full first and second cousin), their GRC = 0.3536. All of these relationships (GRC) are applications of path-analysis. A summary of some levels of relationship (GRC) follow. Resemblances between relatives These, in like manner to the Genotypic variances, can be derived through either the gene-model ("Mather") approach or the allele-substitution ("Fisher") approach. Here, each method is demonstrated for alternate cases. Parent-offspring covariance These can be viewed either as the covariance between any offspring and any one of its parents (PO), or as the covariance between any offspring and the "mid-parent" value of both its parents (MPO). One-parent and offspring (PO) This can be derived as the sum of cross-products between parent gene-effects and one-half of the progeny expectations using the allele-substitution approach. The one-half of the progeny expectation accounts for the fact that only one of the two parents is being considered. The appropriate parental gene-effects are therefore the second-stage redefined gene effects used to define the genotypic variances earlier, that is: a = 2q(a − qd) and d = (q-p)a + 2pqd and also (-a) = -2p(a + pd) [see section "Gene effects redefined"]. Similarly, the appropriate progeny effects, for allele-substitution expectations are one-half of the earlier breeding values, the latter being: aAA = 2qa, and aAa = (q-p)a and also aaa = -2pa [see section on "Genotype substitution – Expectations and Deviations"]. Because all of these effects are defined already as deviates from the genotypic mean, the cross-product sum using {genotype-frequency * parental gene-effect * half-breeding-value} immediately provides the allele-substitution-expectation covariance between any one parent and its offspring. 
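Before moving to the parent–offspring covariances, the GRC examples above can be reproduced numerically. The working form Ρk = m (1/2)^nk used below is inferred from the worked figures in the text (0.5 and 0.3536), so treat it as a reconstruction rather than a quoted formula.

```python
import math

def P(m, n):
    """Shared-ancestry proportion for one individual: m ancestors-in-common at the
    node, n sexual generations between the individual and that node (inferred form)."""
    return m * 0.5 ** n

def grc(P1, P2):
    """Genepool relationship coefficient: the geometric mean of the two proportions."""
    return math.sqrt(P1 * P2)

print(grc(P(2, 2), P(2, 2)))   # two full first cousins          -> 0.5
print(grc(P(2, 3), P(2, 2)))   # a full first/second cousin pair -> 0.3536 (rounded)
```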
After careful gathering of terms and simplification, this becomes cov(PO)A = pqa2 = s2A . Unfortunately, the allele-substitution-deviations are usually overlooked, but they have not "ceased to exist" nonetheless! Recall that these deviations are: dAA = -2q2 d, and dAa = 2pq d and also daa = -2p2 d [see section on "Genotype substitution – Expectations and Deviations"]. Consequently, the cross-product sum using {genotype-frequency * parental gene-effect * half-substitution-deviations} also immediately provides the allele-substitution-deviations covariance between any one parent and its offspring. Once more, after careful gathering of terms and simplification, this becomes cov(PO)D = 2p2q2d2 = s2D . It follows therefore that: cov(PO) = cov(PO)A + cov(PO)D = s2A + s2D , when dominance is not overlooked ! Mid-parent and offspring (MPO) Because there are many combinations of parental genotypes, there are many different mid-parents and offspring means to consider, together with the varying frequencies of obtaining each parental pairing. The gene-model approach is the most expedient in this case. Therefore, an unadjusted sum of cross-products (USCP)—using all products { parent-pair-frequency * mid-parent-gene-effect * offspring-genotype-mean }—is adjusted by subtracting the {overall genotypic mean}2 as correction factor (CF). After multiplying out all the various combinations, carefully gathering terms, simplifying, factoring and cancelling-out where applicable, this becomes:cov(MPO) = pq [a + (q-p)d ]2 = pq a2 = s2A , with no dominance having been overlooked in this case, as it had been used-up in defining the a. Applications (parent-offspring) The most obvious application is an experiment that contains all parents and their offspring, with or without reciprocal crosses, preferably replicated without bias, enabling estimation of all appropriate means, variances and covariances, together with their standard errors. These estimated statistics can then be used to estimate the genetic variances. Twice the difference between the estimates of the two forms of (corrected) parent-offspring covariance provides an estimate of s2D; and twice the cov(MPO) estimates s2A. With appropriate experimental design and analysis, standard errors can be obtained for these genetical statistics as well. This is the basic core of an experiment known as Diallel analysis, the Mather, Jinks and Hayman version of which is discussed in another section. A second application involves using regression analysis, which estimates from statistics the ordinate (Y-estimate), derivative (regression coefficient) and constant (Y-intercept) of calculus. The regression coefficient estimates the rate of change of the function predicting Y from X, based on minimizing the residuals between the fitted curve and the observed data (MINRES). No alternative method of estimating such a function satisfies this basic requirement of MINRES. In general, the regression coefficient is estimated as the ratio of the covariance(XY) to the variance of the determinator (X). In practice, the sample size is usually the same for both X and Y, so this can be written as SCP(XY) / SS(X), where all terms have been defined previously. In the present context, the parents are viewed as the "determinative variable" (X), and the offspring as the "determined variable" (Y), and the regression coefficient as the "functional relationship" (ßPO) between the two. 
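The cross-product sums described above can be verified numerically. The sketch below is my own check, assuming the standard Fisherian definition of the allele-substitution effect, α = a + (q − p)d, together with the redefined gene effects, breeding values and substitution deviations quoted in the text. It forms the two weighted cross-product sums for one parent and its offspring and compares them with the closed forms pq α² and 2p²q²d² given above.

```python
p, q, a, d = 0.3, 0.7, 1.2, 0.6            # arbitrary allele frequencies and gene effects
alpha = a + (q - p) * d                    # allele-substitution effect (assumed definition)

freq = {"AA": p * p, "Aa": 2 * p * q, "aa": q * q}

# Second-stage redefined gene effects (genotypic values as deviations from the mean)
gene_effect = {"AA": 2 * q * (alpha - q * d),
               "Aa": (q - p) * alpha + 2 * p * q * d,
               "aa": -2 * p * (alpha + p * d)}

# Breeding values (substitution expectations) and substitution deviations
breeding = {"AA": 2 * q * alpha, "Aa": (q - p) * alpha, "aa": -2 * p * alpha}
deviation = {"AA": -2 * q * q * d, "Aa": 2 * p * q * d, "aa": -2 * p * p * d}

cov_PO_A = sum(freq[g] * gene_effect[g] * 0.5 * breeding[g] for g in freq)
cov_PO_D = sum(freq[g] * gene_effect[g] * 0.5 * deviation[g] for g in freq)

print(cov_PO_A, p * q * alpha ** 2)          # the two values agree
print(cov_PO_D, 2 * p ** 2 * q ** 2 * d ** 2)  # the two values agree
```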
Taking cov(MPO) = s2A as cov(XY), and s2P / 2 (the variance of the mean of two parents—the mid-parent) as s2X, it can be seen that ßMPO = [ s2A] / [ s2P] = h2 . Next, utilizing cov(PO) = [ s2A + s2D ] as cov(XY), and s2P as s2X, it is seen that 2 ßPO = [ 2 ( s2A + s2D )] / s2P = H2 . Analysis of epistasis has previously been attempted via an interaction variance approach of the type s2AA, and s2AD and also s2DD. This has been integrated with these present covariances in an effort to provide estimators for the epistasis variances. However, the findings of epigenetics suggest that this may not be an appropriate way to define epistasis. Sibling covariances Covariance between half-sibs (HS) is defined easily using allele-substitution methods; but, once again, the dominance contribution has historically been omitted. However, as with the mid-parent/offspring covariance, the covariance between full-sibs (FS) requires a "parent-combination" approach, thereby necessitating the use of the gene-model corrected-cross-product method; and the dominance contribution has not historically been overlooked. The superiority of the gene-model derivations is as evident here as it was for the Genotypic variances. Half-sibs of the same common-parent (HS) The sum of the cross-products { common-parent frequency * half-breeding-value of one half-sib * half-breeding-value of any other half-sib in that same common-parent-group } immediately provides one of the required covariances, because the effects used [breeding values—representing the allele-substitution expectations] are already defined as deviates from the genotypic mean [see section on "Allele substitution – Expectations and deviations"]. After simplification, this becomes: cov(HS)A = pq a2 = s2A . However, the substitution deviations also exist, defining the sum of the cross-products { common-parent frequency * half-substitution-deviation of one half-sib * half-substitution-deviation of any other half-sib in that same common-parent-group }, which ultimately leads to: cov(HS)D = p2 q2 d2 = s2D . Adding the two components gives: cov(HS) = cov(HS)A + cov(HS)D = s2A + s2D . Full-sibs (FS) As explained in the introduction, a method similar to that used for mid-parent/progeny covariance is used. Therefore, an unadjusted sum of cross-products (USCP) using all products—{ parent-pair-frequency * the square of the offspring-genotype-mean }—is adjusted by subtracting the {overall genotypic mean}2 as correction factor (CF). In this case, multiplying out all combinations, carefully gathering terms, simplifying, factoring, and cancelling-out is very protracted. It eventually becomes: cov(FS) = pq a2 + p2 q2 d2 = s2A + s2D , with no dominance having been overlooked. Applications (siblings) The most useful application here for genetical statistics is the correlation between half-sibs. Recall that the correlation coefficient (r) is the ratio of the covariance to the variance [see section on "Associated attributes" for example]. Therefore, rHS = cov(HS) / s2all HS together = [ s2A + s2D ] / s2P = H2 . The correlation between full-sibs is of little utility, being rFS = cov(FS) / s2all FS together = [ s2A + s2D ] / s2P . The suggestion that it "approximates" (½ h2) is poor advice. Of course, the correlations between siblings are of intrinsic interest in their own right, quite apart from any utility they may have for estimating heritabilities or genotypic variances. It may be worth noting that [ cov(FS) − cov(HS)] = s2A .
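To illustrate the regression application discussed earlier in this section, here is a small simulation sketch. All parameter values and the purely additive model are my own illustrative assumptions; under such a model the regression of offspring phenotype on the mid-parent phenotype recovers the narrow-sense heritability, ßMPO ≈ h².

```python
import numpy as np

rng = np.random.default_rng(1)
n_fam, var_A, var_E = 20000, 0.6, 0.4            # assumed additive and environmental variances
h2_true = var_A / (var_A + var_E)

A_sire = rng.normal(0, np.sqrt(var_A), n_fam)    # parental breeding values
A_dam = rng.normal(0, np.sqrt(var_A), n_fam)
P_sire = A_sire + rng.normal(0, np.sqrt(var_E), n_fam)
P_dam = A_dam + rng.normal(0, np.sqrt(var_E), n_fam)

# Offspring: mean parental breeding value + Mendelian sampling + environment
A_off = 0.5 * (A_sire + A_dam) + rng.normal(0, np.sqrt(var_A / 2), n_fam)
P_off = A_off + rng.normal(0, np.sqrt(var_E), n_fam)

midparent = 0.5 * (P_sire + P_dam)
beta_MPO = np.cov(midparent, P_off)[0, 1] / np.var(midparent, ddof=1)
print(f"true h2 = {h2_true:.2f}, mid-parent regression estimate = {beta_MPO:.2f}")
```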
Experiments consisting of FS and HS families could utilize this by using intra-class correlation to equate experiment variance components to these covariances [see section on "Coefficient of relationship as an intra-class correlation" for the rationale behind this]. The earlier comments regarding epistasis apply again here [see section on "Applications (parent-offspring)"]. Selection Basic principles Selection operates on the attribute (phenotype), such that individuals that equal or exceed a selection threshold (zP) become effective parents for the next generation. The proportion they represent of the base population is the selection pressure. The smaller the proportion, the stronger the pressure. The mean of the selected group (Ps) is superior to the base-population mean (P0) by the difference called the selection differential (S). All these quantities are phenotypic. To "link" to the underlying genes, a heritability (h2) is used, fulfilling the role of a coefficient of determination in the biometrical sense. The expected genetical change—still expressed in phenotypic units of measurement—is called the genetic advance (ΔG), and is obtained by the product of the selection differential (S) and its coefficient of determination (h2). The expected mean of the progeny (P1) is found by adding the genetic advance (ΔG) to the base mean (P0). The graphs to the right show how the (initial) genetic advance is greater with stronger selection pressure (smaller probability). They also show how progress from successive cycles of selection (even at the same selection pressure) steadily declines, because the Phenotypic variance and the Heritability are being diminished by the selection itself. This is discussed further shortly. Thus ΔG = S h2, and P1 = P0 + ΔG. The narrow-sense heritability (h2) is usually used, thereby linking to the genic variance (σ2A). However, if appropriate, use of the broad-sense heritability (H2) would connect to the genotypic variance (σ2G); and even possibly an allelic heritability [ h2eu = (σ2a) / (σ2P) ] might be contemplated, connecting to (σ2a). [See section on Heritability.] To apply these concepts before selection actually takes place, and so predict the outcome of alternatives (such as choice of selection threshold, for example), these phenotypic statistics are re-considered against the properties of the Normal Distribution, especially those concerning truncation of the superior tail of the Distribution. In such consideration, the standardized selection differential (i) and the standardized selection threshold (z) are used instead of the previous "phenotypic" versions. The phenotypic standard deviate (σP(0)) is also needed. This is described in a subsequent section. Therefore, ΔG = (i σP) h2, where (i σP(0)) = S previously. The text above noted that successive ΔG declines because the "input" [the phenotypic variance ( σ2P )] is reduced by the previous selection. The heritability also is reduced. The graphs to the left show these declines over ten cycles of repeated selection during which the same selection pressure is applied. The accumulated genetic advance (ΣΔG) has virtually reached its asymptote by generation 6 in this example. This reduction depends partly upon truncation properties of the Normal Distribution, and partly upon the heritability together with the meiosis determination ( b2 ). The last two items quantify the extent to which the truncation is "offset" by new variation arising from segregation and assortment during meiosis.
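A minimal numerical sketch of the basic relations just given (all numbers are invented for illustration): given a base mean, a selected-group mean and a heritability, the expected genetic advance and progeny mean follow directly.

```python
# Illustrative numbers only: base mean P0, selected-group mean Ps, heritability h2.
P0, Ps, h2 = 50.0, 58.0, 0.35

S = Ps - P0          # selection differential (phenotypic units)
dG = S * h2          # genetic advance: ΔG = S h²
P1 = P0 + dG         # expected progeny mean

print(f"S = {S}, ΔG = {dG}, expected progeny mean P1 = {P1}")
```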
This is discussed soon, but here note the simplified result for undispersed random fertilization (f = 0). Thus: σ2P(1) = σ2P(0) [1 − ½ i ( i-z) h2], where i ( i-z) = K = truncation coefficient and ½ h2 = R = reproduction coefficient. This can be written also as σ2P(1) = σ2P(0) [1 − K R ], which facilitates more detailed analysis of selection problems. Here, i and z have already been defined, ½ is the meiosis determination (b2) for f = 0, and the remaining symbol (h2) is the heritability. These are discussed further in following sections. Also notice that, more generally, R = b2 h2. If the general meiosis determination ( b2 ) is used, the results of prior inbreeding can be incorporated into the selection. The phenotypic variance equation then becomes: σ2P(1) = σ2P(0) [1 − i ( i-z) b2 h2]. The Phenotypic variance truncated by the selected group ( σ2P(S) ) is simply σ2P(0) [1 − K], and its contained genic variance is (h20 σ2P(S) ). Assuming that selection has not altered the environmental variance, the genic variance for the progeny can be approximated by σ2A(1) = ( σ2P(1) − σ2E). From this, h21 = ( σ2A(1) / σ2P(1) ). Similar estimates could be made for σ2G(1) and H21, or for σ2a(1) and h2eu(1) if required. Alternative ΔG The following rearrangement is useful for considering selection on multiple attributes (characters). It starts by expanding the heritability into its variance components. ΔG = i σP ( σ2A / σ2P ). The σP and σ2P partially cancel, leaving a solo σP. Next, the σ2A inside the heritability can be expanded as (σA × σA), which leads to: ΔG = i σA ( σA / σP ) = i σA h. Corresponding re-arrangements could be made using the alternative heritabilities, giving ΔG = i σG H or ΔG = i σa heu. Polygenic Adaptation Models in Population Genetics This traditional view of adaptation in quantitative genetics provides a model for how the selected phenotype changes over time, as a function of the selection differential and heritability. However, it does not provide insight into (nor does it depend upon) any of the genetic details - in particular, the number of loci involved, their allele frequencies and effect sizes, and the frequency changes driven by selection. This, in contrast, is the focus of work on polygenic adaptation within the field of population genetics. Recent studies have shown that traits such as height have evolved in humans during the past few thousand years as a result of small allele frequency shifts at thousands of variants that affect height. Background Standardized selection – the normal distribution The entire base population is outlined by the normal curve to the right. Along the z-axis is every value of the attribute from least to greatest, and the height from this axis to the curve itself is the frequency of the value at the axis below. The equation for finding these frequencies for the "normal" curve (the curve of "common experience") is given in the ellipse. Notice it includes the mean (μ) and the variance (σ2). Moving infinitesimally along the z-axis, the frequencies of neighbouring values can be "stacked" beside the previous, thereby accumulating an area that represents the probability of obtaining all values within the stack. [That's integration from calculus.] Selection focuses on such a probability area, being the shaded-in one from the selection threshold (z) to the end of the superior tail of the curve. This is the selection pressure. The selected group (the effective parents of the next generation) includes all phenotype values from z to the "end" of the tail.
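The decline in response over repeated cycles can also be sketched numerically. The snippet below is only a sketch under stated assumptions: undispersed random fertilization so that R = ½h², an environmental variance unchanged by selection, the approximation σ2A(1) = σ2P(1) − σ2E from the text, and a fixed selected proportion each cycle. It uses scipy.stats.norm for the normal-curve quantities i and z.

```python
import numpy as np
from scipy.stats import norm

prop = 0.10                     # selected proportion (selection pressure), assumed
var_P, var_E = 1.0, 0.5         # assumed starting phenotypic and environmental variances
sum_dG = 0.0

for cycle in range(1, 11):
    var_A = var_P - var_E                      # genic variance (environment assumed fixed)
    h2 = var_A / var_P
    z = norm.ppf(1.0 - prop)                   # standardized selection threshold
    i = norm.pdf(z) / prop                     # standardized selection differential (intensity)
    dG = i * np.sqrt(var_P) * h2               # ΔG = i σP h²
    sum_dG += dG
    K = i * (i - z)                            # truncation coefficient
    R = 0.5 * h2                               # reproduction coefficient (f = 0, so b² = ½)
    var_P = var_P * (1.0 - K * R)              # σ²P(1) = σ²P(0) [1 − K R]
    print(f"cycle {cycle:2d}: h2 = {h2:.3f}, ΔG = {dG:.3f}, cumulative = {sum_dG:.3f}")
```

Both h2 and ΔG shrink each cycle, and the cumulative advance flattens towards an asymptote, mirroring the behaviour described for the graphs.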
The mean of the selected group is μs, and the difference between it and the base mean (μ) represents the selection differential (S). By taking partial integrations over curve-sections of interest, and some rearranging of the algebra, it can be shown that the "selection differential" is S = [ y (σ / Prob.)], where y is the frequency of the value at the "selection threshold" z (the ordinate of z). Rearranging this relationship gives S / σ = y / Prob., the left-hand side of which is, in fact, the selection differential divided by the standard deviation—that is, the standardized selection differential (i). The right-hand side of the relationship provides an "estimator" for i—the ordinate of the selection threshold divided by the selection pressure. Tables of the Normal Distribution can be used, but tabulations of i itself are available also. The latter reference also gives values of i adjusted for small populations (400 or fewer), where "quasi-infinity" cannot be assumed (but was presumed in the "Normal Distribution" outline above). The standardized selection differential (i) is known also as the intensity of selection. Finally, a cross-link with the differing terminology in the previous sub-section may be useful: μ (here) = "P0" (there), μS = "PS" and σ2 = "σ2P". Meiosis determination – reproductive path analysis The meiosis determination (b2) is the coefficient of determination of meiosis, which is the cell-division whereby parents generate gametes. Following the principles of standardized partial regression, of which path analysis is a pictorially oriented version, Sewall Wright analyzed the paths of gene-flow during sexual reproduction, and established the "strengths of contribution" (coefficients of determination) of various components to the overall result. Path analysis includes partial correlations as well as partial regression coefficients (the latter are the path coefficients). Lines with a single arrow-head are directional determinative paths, and lines with double arrow-heads are correlation connections. Tracing various routes according to path analysis rules emulates the algebra of standardized partial regression. The path diagram to the left represents this analysis of sexual reproduction. Of its interesting elements, the important one in the selection context is meiosis. That's where segregation and assortment occur—the processes that partially ameliorate the truncation of the phenotypic variance that arises from selection. The path coefficients b are the meiosis paths. Those labeled a are the fertilization paths. The correlation between gametes from the same parent (g) is the meiotic correlation. That between parents within the same generation is rA. That between gametes from different parents (f) became known subsequently as the inbreeding coefficient. The primes ( ' ) indicate generation (t-1), and the unprimed indicate generation t. Here, some important results of the present analysis are given. Sewall Wright interpreted many in terms of inbreeding coefficients. The meiosis determination (b2) is ½ (1+g) and equals ½ (1 + f(t-1)), implying that g = f(t-1). With non-dispersed random fertilization, f(t-1) = 0, giving b2 = ½, as used in the selection section above. However, being aware of its background, other fertilization patterns can be used as required. Another determination also involves inbreeding—the fertilization determination (a2) equals 1 / [ 2 ( 1 + ft ) ]. Yet another correlation is an inbreeding indicator—rA = 2 ft / ( 1 + f(t-1) ), also known as the coefficient of relationship.
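These inbreeding-related results are easy to tabulate. The small sketch below (the grid of f values is arbitrary, and for simplicity the generations are assumed to have reached a steady value so that ft = f(t-1) = f) echoes the graphs discussed next.

```python
# Meiosis determination b², fertilization (syngamy) determination a², and coefficient
# of relationship rA as functions of the inbreeding coefficient f (steady state assumed).

def b2(f):                      # meiosis determination: ½ (1 + f)
    return 0.5 * (1.0 + f)

def a2(f):                      # fertilization determination: 1 / [2 (1 + f)]
    return 1.0 / (2.0 * (1.0 + f))

def rA(f):                      # coefficient of relationship: 2 f / (1 + f)
    return 2.0 * f / (1.0 + f)

for f in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"f = {f:.2f}: b2 = {b2(f):.3f}, a2 = {a2(f):.3f}, "
          f"b2*a2 = {b2(f) * a2(f):.3f}, rA = {rA(f):.3f}")
```

Note that the product b2*a2 stays constant as f rises, while b2 grows and a2 falls, which is the behaviour summarised in the next paragraph.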
[Do not confuse this with the coefficient of kinship—an alternative name for the co-ancestry coefficient. See introduction to "Relationship" section.] This rA recurs in the sub-section on dispersion and selection. These links with inbreeding reveal interesting facets about sexual reproduction that are not immediately apparent. The graphs to the right plot the meiosis and syngamy (fertilization) coefficients of determination against the inbreeding coefficient. There it is revealed that as inbreeding increases, meiosis becomes more important (the coefficient increases), while syngamy becomes less important. The overall role of reproduction [the product of the previous two coefficients—r2] remains the same. This increase in b2 is particularly relevant for selection because it means that the selection truncation of the Phenotypic variance is offset to a lesser extent during a sequence of selections when accompanied by inbreeding (which is frequently the case). Genetic drift and selection The previous sections treated dispersion as an "assistant" to selection, and it became apparent that the two work well together. In quantitative genetics, selection is usually examined in this "biometrical" fashion, but the changes in the means (as monitored by ΔG) reflect the changes in allele and genotype frequencies beneath this surface. Referral to the section on "Genetic drift" brings to mind that it also effects changes in allele and genotype frequencies, and associated means; and that this is the companion aspect to the dispersion considered here ("the other side of the same coin"). However, these two forces of frequency change are seldom in concert, and may often act contrary to each other. One (selection) is "directional", being driven by selection pressure acting on the phenotype; the other (genetic drift) is driven by "chance" at fertilization (binomial probabilities of gamete samples). If the two tend towards the same allele frequency, their "coincidence" is the probability of obtaining that frequency sample in the genetic drift; the likelihood of their being "in conflict", however, is the sum of probabilities of all the alternative frequency samples. In extreme cases, a single syngamy sampling can undo what selection has achieved, and the probabilities of this happening are available. It is important to keep this in mind. However, genetic drift resulting in sample frequencies similar to those of the selection target does not lead to so drastic an outcome—instead slowing progress towards selection goals. Correlated attributes Upon jointly observing two (or more) attributes (e.g. height and mass), it may be noticed that they vary together as genes or environments alter. This co-variation is measured by the covariance, which can be represented by "cov" or by θ. It will be positive if they vary together in the same direction; or negative if they vary together but in opposite directions. If the two attributes vary independently of each other, the covariance will be zero. The degree of association between the attributes is quantified by the correlation coefficient (symbol r or ρ). In general, the correlation coefficient is the ratio of the covariance to the geometric mean of the two variances of the attributes. Observations usually occur at the phenotype, but in research they may also occur at the "effective haplotype" (effective gene product) [see Figure to the right].
Covariance and correlation could therefore be "phenotypic" or "molecular", or any other designation which an analysis model permits. The phenotypic covariance is the "outermost" layer, and corresponds to the "usual" covariance in Biometrics/Statistics. However, it can be partitioned by any appropriate research model in the same way as was the phenotypic variance. For every partition of the covariance, there is a corresponding partition of the correlation. Some of these partitions are given below. The first subscript (G, A, etc.) indicates the partition. The second-level subscripts (X, Y) are "place-keepers" for any two attributes. The first example is the un-partitioned phenotype: rP(XY) = covP(XY) / [ σP(X) σP(Y) ]. The genetical partitions (a) "genotypic" (overall genotype), (b) "genic" (substitution expectations) and (c) "allelic" (homozygote) follow: (a) rG(XY) = covG(XY) / [ σG(X) σG(Y) ], (b) rA(XY) = covA(XY) / [ σA(X) σA(Y) ], (c) ra(XY) = cova(XY) / [ σa(X) σa(Y) ]. With an appropriately designed experiment, a non-genetical (environment) partition could be obtained also. Underlying causes of correlation There are several different ways that phenotypic correlation can arise. Study design, sample size, sample statistics, and other factors can influence the ability to distinguish between them with more or less statistical confidence. Each of these has a different scientific significance, and each is relevant to different fields of work. Direct causation One phenotype may directly affect another phenotype, by influencing development, metabolism, or behavior. Genetic pathways A common gene or transcription factor in the biological pathways for the two phenotypes can result in correlation. Metabolic pathways The metabolic pathways from gene to phenotype are complex and varied, but the causes of correlation amongst attributes lie within them. Developmental and environmental factors Multiple phenotypes may be affected by the same factors. For example, there are many phenotypic attributes correlated with age, and so height, weight, caloric intake, endocrine function, and more all have a correlation. A study looking for other common factors must rule these out first. Correlated genotypes and selective pressures Differences between subgroups in a population, between populations, or selective biases can mean that some combinations of genes are overrepresented compared with what would be expected. While the genes may not have a significant influence on each other, there may still be a correlation between them, especially when certain genotypes are not allowed to mix. Populations that are in the process of genetic divergence, or that have already undergone it, can have different characteristic phenotypes, which means that, when considered together, a correlation appears. Phenotypic qualities in humans that predominantly depend on ancestry also produce correlations of this type. This can also be observed in dog breeds, where several physical features make up the distinctness of a given breed and are therefore correlated. Assortative mating, which is the sexually selective pressure to mate with a similar phenotype, can result in genotypes remaining correlated more than would be expected.
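As a simple illustration of the covariance and correlation definitions used in this section, the sketch below (the paired data are invented) computes a "phenotypic" correlation as the covariance divided by the geometric mean of the two variances; any of the partitioned layers would be handled the same way, given the corresponding covariance and variances.

```python
import numpy as np

# Toy paired observations of two attributes, X and Y, on the same individuals.
X = np.array([12.1, 14.3, 13.8, 15.2, 16.0, 13.1, 14.8, 15.9])
Y = np.array([30.5, 33.2, 32.0, 35.1, 36.4, 31.2, 34.0, 36.0])

cov_XY = np.cov(X, Y, ddof=1)[0, 1]
r_XY = cov_XY / np.sqrt(np.var(X, ddof=1) * np.var(Y, ddof=1))

print(f"cov(XY) = {cov_XY:.3f}, r(XY) = {r_XY:.3f}")
print(np.corrcoef(X, Y)[0, 1])   # the same value via numpy's built-in correlation
```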
Biology and health sciences
Genetics
Biology
297438
https://en.wikipedia.org/wiki/Easel
Easel
An easel is an upright support used for displaying and/or fixing something resting upon it, at an angle of about 20° to the vertical. In particular, painters traditionally use an easel to support a painting while they work on it, normally standing up; easels are also sometimes used to display finished paintings. Artists' easels are still typically made of wood, in functional designs that have changed little for centuries, or even millennia, though new materials and designs exist; easels are now also made from aluminum or steel. Easel painting is a term in art history for the type of midsize painting that would have been painted on an easel, as opposed to a fresco wall painting, a large altarpiece or other piece that would have been painted resting on a floor, a small cabinet painting, or a miniature created while sitting at a desk, though perhaps also on an angled support. It does not refer to the way the painting is meant to be displayed; most easel paintings are intended for display framed and hanging on a wall. In a photographic darkroom, an easel is used to keep the photographic paper in a flat or upright (horizontal, big-size enlarging) position relative to the enlarger. Etymology The word easel comes from an old Germanic word for donkey (compare similar semantics). In various other languages, its equivalent is the only word for both the animal and the apparatus, such as Dutch ezel (the easel generally in full schildersezel, "painter's donkey"), themselves cognates of the Latin asinus (ass). History Easels have been in use since the time of the ancient Egyptians. In the 1st century, Pliny the Elder made reference to a "large panel" placed upon an easel. Design There are three common designs for easels: A-frame designs are based on three legs. Variations include crossbars to make the easel more stable, and an independent mechanism to allow for the vertical adjustment of the working plane without sacrificing the stability of the legs of the easel. H-frame designs are based on right angles. All posts are generally parallel to each other, with the base of the easel being rectangular. The main, front portion of the easel consists of two vertical posts with a horizontal crossbar support, giving the design the general shape of an 'H'. A variation uses additions that allow the easel's angle with respect to the ground to be adjusted. Multiple purpose designs incorporate improved tripod and H-frame features with extra multiple adjustment capabilities that include finite rotational, horizontal and vertical adjustment of the working plane. Differences An easel can be full height, designed for standing by itself on the floor. Shorter easels can be designed for use on a table. Artists' easels typically are fully adjustable to accommodate different angles. Most have built-in anti-skid plates on the feet to prevent sliding. They are collapsible and overall very slim in stature to fit in small spaces around the studio. The simplest form of an artist's easel, a tripod, consists of three vertical posts joined at one end. A pivoting mechanism allows the centre-most post to pivot away from the other two, while the two non-pivoting posts have a horizontal cross member where the canvas is placed. A similar model can hold a blackboard, projection surface, placard, etc. Pochade boxes are a type of artists' easel that is mounted on top of a camera tripod. They include both a support for the painting and a palette. They may or may not include a box for supplies. Paint stations are meant as more stationary consoles.
These are usually equipped with various holsters, slots and supporting platforms to accommodate buckets, brushes and canvas styles. Most of the components can be broken down for easy cleaning and storage. Children's easels are intended to be more durable. They are typically shorter than standard easels and usually come equipped with dry erase boards and/or chalkboards attached. Display easels are for display purposes and are meant to enhance the presentation of a painting. Facilitation easels are for capturing audience or participant input and are meant to involve the participants with the content. Darkroom easels keep photographic paper in a flat or upright (horizontal, big-size enlarging) position relative to the enlarger. Use An easel is most often used to hold up a painter's canvas or large sketchbook while the artist is working, or to hold a completed painting for exhibition. Here are some common uses for easels: Studio easels are meant for use in the artist's studio with limited need for the easel to be portable. Studio easels may be simple in design or very complex, including winches, multiple masts and casters. The largest easels are studio easels, with some being able to support panels weighing over 200 pounds and measuring over 7 feet in height. Field easels or plein air easels are meant to be portable for the creation of en plein air work. These easels are usually midsize or small, have telescopic or collapsible legs and are based on the tripod design. French box easels include a compartment in which to store art supplies conveniently along with a handle or straps so that the French box may be carried like a briefcase or a backpack. Display easels are meant for the display of finished artworks. These easels tend to be very simple in design with less concern for the stability needed by a working artist. Display easels vary in size and sturdiness depending upon the weight and size of the object to be placed on them. Facilitation easels hold large pads of paper and have trays for holding markers of varying colors. Mini easels are similar in design to display easels but scaled down to accommodate photos or flyers. Darkroom easels hold photographic paper perfectly flat during exposure. Some of these easels are designed with adjustable, overlapping, flat steel "blades" to crop the image on the paper to the desired size while keeping an unexposed white border around the image.
Technology
Artist's and drafting tools
null
297466
https://en.wikipedia.org/wiki/Spontaneous%20symmetry%20breaking
Spontaneous symmetry breaking
Spontaneous symmetry breaking is a process by which a physical system in a symmetric state spontaneously ends up in an asymmetric state. In particular, it can describe systems where the equations of motion or the Lagrangian obey symmetries, but the lowest-energy vacuum solutions do not exhibit that same symmetry. When the system goes to one of those vacuum solutions, the symmetry is broken for perturbations around that vacuum even though the entire Lagrangian retains that symmetry. Overview Spontaneous symmetry breaking cannot happen in the quantum mechanics of finite-dimensional systems, owing to the Stone–von Neumann theorem (which states the uniqueness of the Heisenberg commutation relations in finite dimensions). It can therefore be observed only in infinite-dimensional theories, such as quantum field theories. By definition, spontaneous symmetry breaking requires the existence of physical laws which are invariant under a symmetry transformation (such as translation or rotation), so that any pair of outcomes differing only by that transformation have the same probability distribution. For example, if measurements of an observable at any two different positions have the same probability distribution, the observable has translational symmetry. Spontaneous symmetry breaking occurs when this relation breaks down, while the underlying physical laws remain symmetrical. Conversely, in explicit symmetry breaking, the probability distributions of a pair of outcomes can be different. For example, in an electric field, the forces on a charged particle are different in different directions, so the rotational symmetry is explicitly broken by the electric field, which does not have this symmetry. Phases of matter, such as crystals, magnets, and conventional superconductors, as well as simple phase transitions, can be described by spontaneous symmetry breaking. Notable exceptions include topological phases of matter like the fractional quantum Hall effect. Typically, when spontaneous symmetry breaking occurs, the observable properties of the system change in multiple ways. For example, the density, compressibility, coefficient of thermal expansion, and specific heat will be expected to change when a liquid becomes a solid. Examples Sombrero potential Consider a symmetric upward dome with a trough circling the bottom. If a ball is put at the very peak of the dome, the system is symmetric with respect to a rotation around the center axis. But the ball may spontaneously break this symmetry by rolling down the dome into the trough, a point of lowest energy. Afterward, the ball comes to rest at some fixed point on the perimeter. The dome and the ball retain their individual symmetry, but the system does not. In the simplest idealized relativistic model, the spontaneously broken symmetry is summarized through an illustrative scalar field theory. The relevant Lagrangian of a scalar field φ, which essentially dictates how the system behaves, can be split up into kinetic and potential terms. It is in this potential term that the symmetry breaking is triggered. An example of such a potential, due to Jeffrey Goldstone, is illustrated in the graph at the left. This potential has an infinite number of possible minima (vacuum states), given by φ = v e^(iθ) for any real θ between 0 and 2π, where v is the fixed radius of the circle of minima. The system also has an unstable vacuum state corresponding to φ = 0. This state has a U(1) symmetry.
However, once the system falls into a specific stable vacuum state (amounting to a choice of θ), this symmetry will appear to be lost, or "spontaneously broken". In fact, any other choice of θ would have exactly the same energy, and the defining equations respect the symmetry but the ground state (vacuum) of the theory breaks the symmetry, implying the existence of a massless Nambu–Goldstone boson, the mode running around the circle at the minimum of this potential, and indicating there is some memory of the original symmetry in the Lagrangian. Other examples For ferromagnetic materials, the underlying laws are invariant under spatial rotations. Here, the order parameter is the magnetization, which measures the magnetic dipole density. Above the Curie temperature, the order parameter is zero, which is spatially invariant, and there is no symmetry breaking. Below the Curie temperature, however, the magnetization acquires a constant nonvanishing value, which points in a certain direction (in the idealized situation where we have full equilibrium; otherwise, translational symmetry gets broken as well). The residual rotational symmetries which leave the orientation of this vector invariant remain unbroken, unlike the other rotations which do not and are thus spontaneously broken. The laws describing a solid are invariant under the full Euclidean group, but the solid itself spontaneously breaks this group down to a space group. The displacement and the orientation are the order parameters. General relativity has a Lorentz symmetry, but in FRW cosmological models, the mean 4-velocity field defined by averaging over the velocities of the galaxies (the galaxies act like gas particles at cosmological scales) acts as an order parameter breaking this symmetry. Similar comments can be made about the cosmic microwave background. For the electroweak model, as explained earlier, a component of the Higgs field provides the order parameter breaking the electroweak gauge symmetry to the electromagnetic gauge symmetry. Like the ferromagnetic example, there is a phase transition at the electroweak temperature. The same comment about us not tending to notice broken symmetries suggests why it took so long for us to discover electroweak unification. In superconductors, there is a condensed-matter collective field ψ, which acts as the order parameter breaking the electromagnetic gauge symmetry. Take a thin cylindrical plastic rod and push both ends together. Before buckling, the system is symmetric under rotation, and so visibly cylindrically symmetric. But after buckling, it looks different, and asymmetric. Nevertheless, features of the cylindrical symmetry are still there: ignoring friction, it would take no force to freely spin the rod around, displacing the ground state in time, and amounting to an oscillation of vanishing frequency, unlike the radial oscillations in the direction of the buckle. This spinning mode is effectively the requisite Nambu–Goldstone boson. Consider a uniform layer of fluid over an infinite horizontal plane. This system has all the symmetries of the Euclidean plane. But now heat the bottom surface uniformly so that it becomes much hotter than the upper surface. When the temperature gradient becomes large enough, convection cells will form, breaking the Euclidean symmetry. Consider a bead on a circular hoop that is rotated about a vertical diameter. 
As the rotational velocity is increased gradually from rest, the bead will initially stay at its initial equilibrium point at the bottom of the hoop (intuitively stable, lowest gravitational potential). At a certain critical rotational velocity, this point will become unstable and the bead will jump to one of two other newly created equilibria, equidistant from the center. Initially, the system is symmetric with respect to the diameter, yet after passing the critical velocity, the bead ends up in one of the two new equilibrium points, thus breaking the symmetry. The two-balloon experiment is an example of spontaneous symmetry breaking when both balloons are initially inflated to the local maximum pressure. When some air flows from one balloon into the other, the pressure in both balloons will drop, making the system more stable in the asymmetric state. In particle physics In particle physics, the force carrier particles are normally specified by field equations with gauge symmetry; their equations predict that certain measurements will be the same at any point in the field. For instance, field equations might predict that the masses of two quarks are constant. Solving the equations to find the mass of each quark might give two solutions. In one solution, quark A is heavier than quark B. In the second solution, quark B is heavier than quark A by the same amount. The symmetry of the equations is not reflected by the individual solutions, but it is reflected by the range of solutions. An actual measurement reflects only one solution, representing a breakdown in the symmetry of the underlying theory. "Hidden" is a better term than "broken", because the symmetry is always there in these equations. This phenomenon is called spontaneous symmetry breaking (SSB) because nothing (that we know of) breaks the symmetry in the equations. By the nature of spontaneous symmetry breaking, different portions of the early Universe would break symmetry in different directions, leading to topological defects, such as two-dimensional domain walls, one-dimensional cosmic strings, zero-dimensional monopoles, and/or textures, depending on the relevant homotopy group and the dynamics of the theory. For example, Higgs symmetry breaking may have created primordial cosmic strings as a byproduct. Hypothetical GUT symmetry-breaking generically produces monopoles, creating difficulties for GUT unless monopoles (along with any GUT domain walls) are expelled from our observable Universe through cosmic inflation. Chiral symmetry Chiral symmetry breaking is an example of spontaneous symmetry breaking affecting the chiral symmetry of the strong interactions in particle physics. It is a property of quantum chromodynamics, the quantum field theory describing these interactions, and is responsible for the bulk of the mass (over 99%) of the nucleons, and thus of all common matter, as it converts very light bound quarks into 100 times heavier constituents of baryons. The approximate Nambu–Goldstone bosons in this spontaneous symmetry breaking process are the pions, whose mass is an order of magnitude lighter than the mass of the nucleons. It served as the prototype and significant ingredient of the Higgs mechanism underlying the electroweak symmetry breaking. Higgs mechanism The strong, weak, and electromagnetic forces can all be understood as arising from gauge symmetries, which reflect a redundancy in the description of the symmetry.
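The bead-on-a-rotating-hoop example described above can be made quantitative with a short sketch (the values of g, the hoop radius and the angular velocities are arbitrary choices of mine). For a rotation rate ω below the critical value sqrt(g/R), the only equilibrium is the symmetric one at the bottom of the hoop; above it, two symmetric off-axis equilibria appear at cos θ = g/(ω²R), and the bead must settle into one of them.

```python
import numpy as np

g, R = 9.81, 0.5                      # gravitational acceleration and hoop radius (assumed)
omega_c = np.sqrt(g / R)              # critical angular velocity

for omega in (0.5 * omega_c, omega_c, 1.5 * omega_c, 2.0 * omega_c):
    if omega <= omega_c:
        thetas = [0.0]                                # only the symmetric equilibrium exists
    else:
        theta = np.arccos(g / (omega ** 2 * R))       # broken-symmetry equilibria
        thetas = [-theta, theta]                      # the bead ends up in one of the two
    angles = ", ".join(f"{np.degrees(t):+.1f} deg" for t in thetas)
    print(f"omega = {omega:5.2f} rad/s (omega_c = {omega_c:.2f}): equilibria at {angles}")
```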
The Higgs mechanism, the spontaneous symmetry breaking of gauge symmetries, is an important component in understanding the superconductivity of metals and the origin of particle masses in the standard model of particle physics. The term "spontaneous symmetry breaking" is a misnomer here, as Elitzur's theorem states that local gauge symmetries can never be spontaneously broken. Rather, after gauge fixing, the global symmetry (or redundancy) can be broken in a manner formally resembling spontaneous symmetry breaking. One important consequence of the distinction between true symmetries and gauge symmetries is that the massless Nambu–Goldstone bosons resulting from spontaneous breaking of a gauge symmetry are absorbed in the description of the gauge vector field, providing massive vector field modes, like the plasma mode in a superconductor, or the Higgs mode observed in particle physics. In the standard model of particle physics, spontaneous symmetry breaking of the gauge symmetry associated with the electroweak force generates masses for several particles, and separates the electromagnetic and weak forces. The W and Z bosons are the elementary particles that mediate the weak interaction, while the photon mediates the electromagnetic interaction. At energies much greater than 100 GeV, all these particles behave in a similar manner. The Weinberg–Salam theory predicts that, at lower energies, this symmetry is broken so that the photon and the massive W and Z bosons emerge. In addition, fermions develop mass consistently. Without spontaneous symmetry breaking, the Standard Model of elementary particle interactions requires the existence of a number of particles. However, some particles (the W and Z bosons) would then be predicted to be massless, when, in reality, they are observed to have mass. To overcome this, spontaneous symmetry breaking is augmented by the Higgs mechanism to give these particles mass. It also suggests the presence of a new particle, the Higgs boson, detected in 2012. Superconductivity of metals is a condensed-matter analog of the Higgs phenomenon, in which a condensate of Cooper pairs of electrons spontaneously breaks the U(1) gauge symmetry associated with light and electromagnetism. Dynamical symmetry breaking Dynamical symmetry breaking (DSB) is a special form of spontaneous symmetry breaking in which the ground state of the system has reduced symmetry properties compared to its theoretical description (i.e., Lagrangian). Dynamical breaking of a global symmetry is a spontaneous symmetry breaking, which happens not at the (classical) tree level (i.e., at the level of the bare action), but due to quantum corrections (i.e., at the level of the effective action). Dynamical breaking of a gauge symmetry is subtler. In conventional spontaneous gauge symmetry breaking, there exists an unstable Higgs particle in the theory, which drives the vacuum to a symmetry-broken phase (i.e., electroweak interactions). In dynamical gauge symmetry breaking, however, no unstable Higgs particle operates in the theory, but the bound states of the system itself provide the unstable fields that drive the phase transition. For example, Bardeen, Hill, and Lindner published a paper that attempts to replace the conventional Higgs mechanism in the standard model by a DSB that is driven by a bound state of top-antitop quarks. (Such models, in which a composite particle plays the role of the Higgs boson, are often referred to as "Composite Higgs models".)
Dynamical breaking of gauge symmetries is often due to creation of a fermionic condensate — e.g., the quark condensate, which is connected to the dynamical breaking of chiral symmetry in quantum chromodynamics. Conventional superconductivity is the paradigmatic example from the condensed matter side, where phonon-mediated attractions lead electrons to become bound in pairs and then condense, thereby breaking the electromagnetic gauge symmetry. In condensed matter physics Most phases of matter can be understood through the lens of spontaneous symmetry breaking. For example, crystals are periodic arrays of atoms that are not invariant under all translations (only under a small subset of translations by a lattice vector). Magnets have north and south poles that are oriented in a specific direction, breaking rotational symmetry. In addition to these examples, there are a whole host of other symmetry-breaking phases of matter — including nematic phases of liquid crystals, charge- and spin-density waves, superfluids, and many others. There are several known examples of matter that cannot be described by spontaneous symmetry breaking, including topologically ordered phases of matter, such as fractional quantum Hall liquids, and spin liquids. These states do not break any symmetry, but are distinct phases of matter. Unlike the case of spontaneous symmetry breaking, there is no general framework for describing such states. Continuous symmetry The ferromagnet is the canonical system that spontaneously breaks the continuous symmetry of the spins below the Curie temperature and at h = 0, where h is the external magnetic field. Below the Curie temperature, the energy of the system is invariant under inversion of the magnetization m(x), such that m(x) → −m(x). The symmetry is spontaneously broken as h → 0, when the Hamiltonian becomes invariant under the inversion transformation, but the expectation value of the magnetization is not invariant. Spontaneously-symmetry-broken phases of matter are characterized by an order parameter that describes the quantity which breaks the symmetry under consideration. For example, in a magnet, the order parameter is the local magnetization. Spontaneous breaking of a continuous symmetry is inevitably accompanied by gapless (meaning that these modes do not cost any energy to excite) Nambu–Goldstone modes associated with slow, long-wavelength fluctuations of the order parameter. For example, vibrational modes in a crystal, known as phonons, are associated with slow density fluctuations of the crystal's atoms. The associated Goldstone modes for magnets are oscillating waves of spin known as spin waves. For symmetry-breaking states, whose order parameter is not a conserved quantity, Nambu–Goldstone modes are typically massless and propagate at a constant velocity. An important theorem, due to Mermin and Wagner, states that, at finite temperature, thermally activated fluctuations of Nambu–Goldstone modes destroy the long-range order, and prevent spontaneous symmetry breaking in one- and two-dimensional systems. Similarly, quantum fluctuations of the order parameter prevent most types of continuous symmetry breaking in one-dimensional systems even at zero temperature. (An important exception is ferromagnets, whose order parameter, magnetization, is an exactly conserved quantity and does not have any quantum fluctuations.) Other long-range interacting systems, such as cylindrical curved surfaces interacting via the Coulomb potential or Yukawa potential, have been shown to break translational and rotational symmetries.
It was shown, in the presence of a symmetric Hamiltonian, and in the limit of infinite volume, the system spontaneously adopts a chiral configuration — i.e., breaks mirror plane symmetry. Generalisation and technical usage For spontaneous symmetry breaking to occur, there must be a system in which there are several equally likely outcomes. The system as a whole is therefore symmetric with respect to these outcomes. However, if the system is sampled (i.e. if the system is actually used or interacted with in any way), a specific outcome must occur. Though the system as a whole is symmetric, it is never encountered with this symmetry, but only in one specific asymmetric state. Hence, the symmetry is said to be spontaneously broken in that theory. Nevertheless, the fact that each outcome is equally likely is a reflection of the underlying symmetry, which is thus often dubbed "hidden symmetry", and has crucial formal consequences. (See the article on the Goldstone boson.) When a theory is symmetric with respect to a symmetry group, but requires that one element of the group be distinct, then spontaneous symmetry breaking has occurred. The theory must not dictate which member is distinct, only that one is. From this point on, the theory can be treated as if this element actually is distinct, with the proviso that any results found in this way must be resymmetrized, by taking the average of each of the elements of the group being the distinct one. The crucial concept in physics theories is the order parameter. If there is a field (often a background field) which acquires an expectation value (not necessarily a vacuum expectation value) which is not invariant under the symmetry in question, we say that the system is in the ordered phase, and the symmetry is spontaneously broken. This is because other subsystems interact with the order parameter, which specifies a "frame of reference" to be measured against. In that case, the vacuum state does not obey the initial symmetry (which would keep it invariant, in the linearly realized Wigner mode in which it would be a singlet), and, instead changes under the (hidden) symmetry, now implemented in the (nonlinear) Nambu–Goldstone mode. Normally, in the absence of the Higgs mechanism, massless Goldstone bosons arise. The symmetry group can be discrete, such as the space group of a crystal, or continuous (e.g., a Lie group), such as the rotational symmetry of space. However, if the system contains only a single spatial dimension, then only discrete symmetries may be broken in a vacuum state of the full quantum theory, although a classical solution may break a continuous symmetry. Nobel Prize On October 7, 2008, the Royal Swedish Academy of Sciences awarded the 2008 Nobel Prize in Physics to three scientists for their work in subatomic physics symmetry breaking. Yoichiro Nambu, of the University of Chicago, won half of the prize for the discovery of the mechanism of spontaneous broken symmetry in the context of the strong interactions, specifically chiral symmetry breaking. Physicists Makoto Kobayashi and Toshihide Maskawa, of Kyoto University, shared the other half of the prize for discovering the origin of the explicit breaking of CP symmetry in the weak interactions. This origin is ultimately reliant on the Higgs mechanism, but, so far understood as a "just so" feature of Higgs couplings, not a spontaneously broken symmetry phenomenon.
Physical sciences
Particle physics: General
Physics
297811
https://en.wikipedia.org/wiki/Jensen%27s%20inequality
Jensen's inequality
In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation (or equivalently, the opposite inequality for concave transformations). Jensen's inequality generalizes the statement that the secant line of a convex function lies above the graph of the function, which is Jensen's inequality for two points: the secant line consists of weighted means of the convex function, t φ(x1) + (1 − t) φ(x2) (for t ∈ [0,1]), while the graph of the function is the convex function of the weighted means, φ(t x1 + (1 − t) x2). Thus, Jensen's inequality in this case is φ(t x1 + (1 − t) x2) ≤ t φ(x1) + (1 − t) φ(x2). In the context of probability theory, it is generally stated in the following form: if X is a random variable and φ is a convex function, then φ(E[X]) ≤ E[φ(X)]. The difference between the two sides of the inequality, E[φ(X)] − φ(E[X]), is called the Jensen gap. Statements The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using either the language of measure theory or (equivalently) probability. In the probabilistic setting, the inequality can be further generalized to its full strength. Finite form For a real convex function φ, numbers x1, x2, ..., xn in its domain, and positive weights a1, a2, ..., an, Jensen's inequality can be stated as: φ( (Σ ai xi) / (Σ ai) ) ≤ (Σ ai φ(xi)) / (Σ ai), and the inequality is reversed if φ is concave, which is φ( (Σ ai xi) / (Σ ai) ) ≥ (Σ ai φ(xi)) / (Σ ai). Equality holds if and only if x1 = x2 = ... = xn or φ is linear on a domain containing x1, x2, ..., xn. As a particular case, if the weights ai are all equal, then these become φ( (Σ xi) / n ) ≤ (Σ φ(xi)) / n and φ( (Σ xi) / n ) ≥ (Σ φ(xi)) / n respectively. For instance, the function log(x) is concave, so substituting φ(x) = log(x) in the previous formula establishes the (logarithm of the) familiar arithmetic-mean/geometric-mean inequality: log( (x1 + ... + xn) / n ) ≥ (log x1 + ... + log xn) / n = log( (x1 x2 ··· xn)^(1/n) ). A common application has x as a function of another variable (or set of variables) t, that is, x = g(t). All of this carries directly over to the general continuous case: the weights ai are replaced by a non-negative integrable function f(x), such as a probability distribution, and the summations are replaced by integrals. Measure-theoretic form Let (Ω, A, μ) be a probability space. Let f : Ω → R be a μ-measurable function and φ : R → R be convex. Then: φ( ∫_Ω f dμ ) ≤ ∫_Ω (φ ∘ f) dμ. In real analysis, we may require an estimate on φ( ∫_a^b f(x) dx ), where a, b ∈ R and f : [a, b] → R is a non-negative Lebesgue-integrable function. In this case, the Lebesgue measure of [a, b] need not be unity. However, by integration by substitution, the interval can be rescaled so that it has measure unity. Then Jensen's inequality can be applied to get φ( (1/(b − a)) ∫_a^b f(x) dx ) ≤ (1/(b − a)) ∫_a^b φ(f(x)) dx. Probabilistic form The same result can be equivalently stated in a probability theory setting, by a simple change of notation. Let (Ω, F, P) be a probability space, X an integrable real-valued random variable and φ a convex function. Then: φ(E[X]) ≤ E[φ(X)]. In this probability setting, the measure μ is intended as a probability P, the integral with respect to μ as an expected value E, and the function f as a random variable X. Note that the equality holds if and only if φ is a linear function on some convex set A such that P(X ∈ A) = 1 (which follows by inspecting the measure-theoretical proof below). General inequality in a probabilistic setting More generally, let T be a real topological vector space, and X a T-valued integrable random variable.
In this general setting, integrable means that there exists an element in T, such that for any element z in the dual space of T: , and . Then, for any measurable convex function and any sub-σ-algebra of : Here stands for the expectation conditioned to the σ-algebra . This general statement reduces to the previous ones when the topological vector space is the real axis, and is the trivial -algebra (where is the empty set, and is the sample space). A sharpened and generalized form Let X be a one-dimensional random variable with mean and variance . Let be a twice differentiable function, and define the function Then In particular, when is convex, then , and the standard form of Jensen's inequality immediately follows for the case where is additionally assumed to be twice differentiable. Proofs Intuitive graphical proof Jensen's inequality can be proved in several ways, and three different proofs corresponding to the different statements above will be offered. Before embarking on these mathematical derivations, however, it is worth analyzing an intuitive graphical argument based on the probabilistic case where is a real number (see figure). Assuming a hypothetical distribution of values, one can immediately identify the position of and its image in the graph. Noticing that for convex mappings of some values the corresponding distribution of values is increasingly "stretched up" for increasing values of , it is easy to see that the distribution of is broader in the interval corresponding to and narrower in for any ; in particular, this is also true for . Consequently, in this picture the expectation of will always shift upwards with respect to the position of . A similar reasoning holds if the distribution of covers a decreasing portion of the convex function, or both a decreasing and an increasing portion of it. This "proves" the inequality, i.e. with equality when is not strictly convex, e.g. when it is a straight line, or when follows a degenerate distribution (i.e. is a constant). The proofs below formalize this intuitive notion. Proof 1 (finite form) If and are two arbitrary nonnegative real numbers such that then convexity of implies This can be generalized: if are nonnegative real numbers such that , then for any . The finite form of the Jensen's inequality can be proved by induction: by convexity hypotheses, the statement is true for n = 2. Suppose the statement is true for some n, so for any such that . One needs to prove it for . At least one of the is strictly smaller than , say ; therefore by convexity inequality: Since , , applying the inductive hypothesis gives therefore We deduce the inequality is true for , by induction it follows that the result is also true for all integer greater than 2. In order to obtain the general inequality from this finite form, one needs to use a density argument. The finite form can be rewritten as: where μn is a measure given by an arbitrary convex combination of Dirac deltas: Since convex functions are continuous, and since convex combinations of Dirac deltas are weakly dense in the set of probability measures (as could be easily verified), the general statement is obtained simply by a limiting procedure. Proof 2 (measure-theoretic form) Let be a real-valued -integrable function on a probability space , and let be a convex function on the real numbers. 
Since is convex, at each real number we have a nonempty set of subderivatives, which may be thought of as lines touching the graph of at , but which are below the graph of at all points (support lines of the graph). Now, if we define because of the existence of subderivatives for convex functions, we may choose and such that for all real and But then we have that for almost all . Since we have a probability measure, the integral is monotone with so that as desired. Proof 3 (general inequality in a probabilistic setting) Let X be an integrable random variable that takes values in a real topological vector space T. Since is convex, for any , the quantity is decreasing as approaches 0+. In particular, the subdifferential of evaluated at in the direction is well-defined by It is easily seen that the subdifferential is linear in (that is false and the assertion requires Hahn-Banach theorem to be proved) and, since the infimum taken in the right-hand side of the previous formula is smaller than the value of the same term for , one gets In particular, for an arbitrary sub--algebra we can evaluate the last inequality when to obtain Now, if we take the expectation conditioned to on both sides of the previous expression, we get the result since: by the linearity of the subdifferential in the y variable, and the following well-known property of the conditional expectation: Applications and special cases Form involving a probability density function Suppose is a measurable subset of the real line and f(x) is a non-negative function such that In probabilistic language, f is a probability density function. Then Jensen's inequality becomes the following statement about convex integrals: If g is any real-valued measurable function and is convex over the range of g, then If g(x) = x, then this form of the inequality reduces to a commonly used special case: This is applied in Variational Bayesian methods. Example: even moments of a random variable If g(x) = x2n, and X is a random variable, then g is convex as and so In particular, if some even moment 2n of X is finite, X has a finite mean. An extension of this argument shows X has finite moments of every order dividing n. Alternative finite form Let and take to be the counting measure on , then the general form reduces to a statement about sums: provided that and There is also an infinite discrete form. Statistical physics Jensen's inequality is of particular importance in statistical physics when the convex function is an exponential, giving: where the expected values are with respect to some probability distribution in the random variable . Proof: Let in Information theory If is the true probability density for , and is another density, then applying Jensen's inequality for the random variable and the convex function gives Therefore: a result called Gibbs' inequality. It shows that the average message length is minimised when codes are assigned on the basis of the true probabilities p rather than any other distribution q. The quantity that is non-negative is called the Kullback–Leibler divergence of q from p, where . Since is a strictly convex function for , it follows that equality holds when equals almost everywhere. 
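To make the information-theoretic application concrete, the sketch below (Python, with two made-up discrete distributions p and q) computes the Kullback–Leibler divergence and checks the non-negativity asserted by Gibbs' inequality, together with the equality case q = p.

```python
import numpy as np

# Hypothetical discrete distributions p (true) and q (approximation) on 4 outcomes.
p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.25, 0.25, 0.25, 0.25])

# Kullback-Leibler divergence D(p||q) = sum p(x) * log(p(x)/q(x)).
# Jensen's inequality applied to the convex function -log shows this is >= 0.
kl = np.sum(p * np.log(p / q))
print(f"D(p||q) = {kl:.4f}")   # strictly positive here since p != q
assert kl >= 0

# Equality holds only when q equals p (almost) everywhere:
assert np.isclose(np.sum(p * np.log(p / p)), 0.0)
```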
Rao–Blackwell theorem If L is a convex function and a sub-sigma-algebra, then, from the conditional version of Jensen's inequality, we get So if δ(X) is some estimator of an unobserved parameter θ given a vector of observables X; and if T(X) is a sufficient statistic for θ; then an improved estimator, in the sense of having a smaller expected loss L, can be obtained by calculating the expected value of δ with respect to θ, taken over all possible vectors of observations X compatible with the same value of T(X) as that observed. Further, because T is a sufficient statistic, does not depend on θ, hence, becomes a statistic. This result is known as the Rao–Blackwell theorem. Risk aversion The relation between risk aversion and declining marginal utility for scalar outcomes can be stated formally with Jensen's inequality: risk aversion can be stated as preferring a certain outcome to a fair gamble with potentially larger but uncertain outcome of : . But this is simply Jensen's inequality for a concave : a utility function that exhibits declining marginal utility.
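A minimal numerical sketch of the risk-aversion statement (Python, assuming a hypothetical fair gamble around a wealth of 100 and a logarithmic utility as one common concave choice) shows the expected utility of the gamble falling below the utility of the certain outcome.

```python
import numpy as np

# Hypothetical fair gamble: wealth x0 = 100 plus or minus 50 with equal probability.
x0 = 100.0
outcomes = np.array([x0 - 50.0, x0 + 50.0])
probs = np.array([0.5, 0.5])

u = np.log   # a concave utility function (declining marginal utility)

expected_outcome = probs @ outcomes        # equals x0, so the gamble is "fair"
expected_utility = probs @ u(outcomes)     # E[u(x0 + X)]
utility_of_certain = u(expected_outcome)   # u(E[x0 + X]) = u(x0)

# Jensen's inequality for concave u: E[u(...)] <= u(E[...]),
# i.e. the certain outcome is preferred to the fair gamble.
print(f"E[u] = {expected_utility:.4f} <= u(E) = {utility_of_certain:.4f}")
assert expected_utility <= utility_of_certain
```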
Mathematics
Other algebra topics
null
297839
https://en.wikipedia.org/wiki/Time%20dilation
Time dilation
Time dilation is the difference in elapsed time as measured by two clocks, either because of a relative velocity between them (special relativity), or a difference in gravitational potential between their locations (general relativity). When unspecified, "time dilation" usually refers to the effect due to velocity. After compensating for varying signal delays resulting from the changing distance between an observer and a moving clock (i.e. Doppler effect), the observer will measure the moving clock as ticking more slowly than a clock at rest in the observer's own reference frame. There is a difference between observed and measured relativistic time dilation - the observer does not visually perceive time dilation in the same way that they measure it. In addition, a clock that is close to a massive body (and which therefore is at lower gravitational potential) will record less elapsed time than a clock situated farther from the same massive body (and which is at a higher gravitational potential). These predictions of the theory of relativity have been repeatedly confirmed by experiment, and they are of practical concern, for instance in the operation of satellite navigation systems such as GPS and Galileo. History Time dilation by the Lorentz factor was predicted by several authors at the turn of the 20th century. Joseph Larmor (1897) wrote that, at least for those orbiting a nucleus, individual electrons describe corresponding parts of their orbits in times shorter for the [rest] system in the ratio: . Emil Cohn (1904) specifically related this formula to the rate of clocks. In the context of special relativity it was shown by Albert Einstein (1905) that this effect concerns the nature of time itself, and he was also the first to point out its reciprocity or symmetry. Subsequently, Hermann Minkowski (1907) introduced the concept of proper time which further clarified the meaning of time dilation. Time dilation caused by a relative velocity Special relativity indicates that, for an observer in an inertial frame of reference, a clock that is moving relative to the observer will be measured to tick more slowly than a clock at rest in the observer's frame of reference. This is sometimes called special relativistic time dilation. The faster the relative velocity, the greater the time dilation between them, with time slowing to a stop as one clock approaches the speed of light (299,792,458 m/s). In theory, time dilation would make it possible for passengers in a fast-moving vehicle to advance into the future in a short period of their own time. With sufficiently high speeds, the effect would be dramatic. For example, one year of travel might correspond to ten years on Earth. Indeed, a constant 1 g acceleration would permit humans to travel through the entire known Universe in one human lifetime. With current technology severely limiting the velocity of space travel, the differences experienced in practice are minuscule. After 6 months on the International Space Station (ISS), orbiting Earth at a speed of about 7,700 m/s, an astronaut would have aged about 0.005 seconds less than he would have on Earth. The cosmonauts Sergei Krikalev and Sergey Avdeev both experienced time dilation of about 20 milliseconds compared to time that passed on Earth. Simple inference Time dilation can be inferred from the observed constancy of the speed of light in all reference frames dictated by the second postulate of special relativity. 
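The ISS figure quoted above can be reproduced with a few lines of arithmetic. The sketch below (Python) computes the Lorentz factor at 7,700 m/s and the accumulated proper-time deficit over six months; the mission length is idealised and the gravitational (general-relativistic) contribution, which partly offsets the velocity effect for low orbits, is ignored here.

```python
import math

c = 299_792_458.0   # speed of light, m/s
v = 7_700.0         # approximate ISS orbital speed, m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Over six months of coordinate time, the orbiting clock accumulates less proper time.
six_months = 0.5 * 365.25 * 24 * 3600          # seconds
deficit = six_months * (1.0 - 1.0 / gamma)     # ~ six_months * v**2 / (2 * c**2)

print(f"gamma - 1 ~ {gamma - 1:.3e}")
print(f"time 'lost' in six months ~ {deficit * 1000:.2f} ms")   # about 5 ms, i.e. ~0.005 s
```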
This constancy of the speed of light means that, counter to intuition, the speeds of material objects and light are not additive. It is not possible to make the speed of light appear greater by moving towards or away from the light source. Consider then, a simple vertical clock consisting of two mirrors A and B, between which a light pulse is bouncing. The separation of the mirrors is L and the clock ticks once each time the light pulse hits mirror A. In the frame in which the clock is at rest (see left part of the diagram), the light pulse traces out a path of length 2L and the time period between the ticks of the clock is equal to this length divided by the speed of light c: Δt = 2L/c. From the frame of reference of a moving observer traveling at the speed v relative to the resting frame of the clock (right part of diagram), the light pulse is seen as tracing out a longer, angled path 2D. Keeping the speed of light constant for all inertial observers requires a lengthening (that is, dilation) of the time period between the ticks of this clock from the moving observer's perspective. That is to say, as measured in a frame moving relative to the local clock, this clock will be running (that is, ticking) more slowly, since the tick rate equals one over the time period between ticks, 1/Δt′. Straightforward application of the Pythagorean theorem leads to the well-known prediction of special relativity: the total time for the light pulse to trace its path is Δt′ = 2D/c, the length of the half path can be calculated as a function of known quantities as D = √((vΔt′/2)² + L²), and elimination of the variables D and L from these equations results in Δt′ = Δt/√(1 − v²/c²), which expresses the fact that the moving observer's period of the clock Δt′ is longer than the period Δt in the frame of the clock itself. The Lorentz factor gamma (γ) is defined as γ = 1/√(1 − v²/c²). Because all clocks that have a common period in the resting frame should have a common period when observed from the moving frame, all other clocks (mechanical, electronic, optical, such as an identical horizontal version of the clock in the example) should exhibit the same velocity-dependent time dilation. Reciprocity Given a certain frame of reference, and the "stationary" observer described earlier, if a second observer accompanied the "moving" clock, each of the observers would measure the other's clock as ticking at a slower rate than their own local clock, since each measures the other to be the one in motion relative to their own stationary frame of reference. Common sense would dictate that, if the passage of time has slowed for a moving object, said object would observe the external world's time to be correspondingly sped up. Counterintuitively, special relativity predicts the opposite. When two observers are in motion relative to each other, each will measure the other's clock slowing down, consistent with each being in motion relative to the other's frame of reference. While this seems self-contradictory, a similar oddity occurs in everyday life. If two persons A and B observe each other from a distance, B will appear small to A, but at the same time, A will appear small to B. Being familiar with the effects of perspective, there is no contradiction or paradox in this situation. The reciprocity of the phenomenon also leads to the so-called twin paradox where the aging of twins, one staying on Earth and the other embarking on space travel, is compared, and where the reciprocity suggests that both persons should have the same age when they reunite.
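The light-clock argument can be checked numerically as well. The sketch below (Python, with an arbitrary mirror separation and a relative speed of 0.6c chosen for illustration) compares the period obtained from the Pythagorean path-length relation with the period predicted by the Lorentz factor.

```python
import math

c = 299_792_458.0   # speed of light, m/s
L = 1.0             # mirror separation in the clock's rest frame, m (assumed)
v = 0.6 * c         # relative speed of the observer (assumed)

# Rest-frame period: the light goes up and back, so dt = 2L / c.
dt_rest = 2 * L / c

# Predicted period in the moving frame: dt' = gamma * dt.
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
dt_moving = gamma * dt_rest

# Consistency check of the Pythagorean argument: during half a period the clock
# moves v * dt_moving / 2 sideways, so the slanted half-path has length
# sqrt(L**2 + (v * dt_moving / 2)**2), and that path must be covered at speed c.
half_path = math.sqrt(L ** 2 + (v * dt_moving / 2) ** 2)
assert math.isclose(half_path, c * dt_moving / 2, rel_tol=1e-12)

print(f"gamma = {gamma:.4f}, rest period = {dt_rest:.3e} s, moving-frame period = {dt_moving:.3e} s")
```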
On the contrary, at the end of the round trip, the traveling twin will be younger than the sibling on Earth. The dilemma posed by the paradox is resolved by noting that the situation is not symmetric: the twin staying on Earth remains in a single inertial frame throughout, while the traveling twin occupies two different inertial frames, one on the way out and another on the way back.
Physical sciences
Theory of relativity
Physics
297924
https://en.wikipedia.org/wiki/Mantis%20shrimp
Mantis shrimp
Mantis shrimp are carnivorous marine crustaceans of the order Stomatopoda (). Stomatopods branched off from other members of the class Malacostraca around 400 million years ago, with more than 520 extant species of mantis shrimp known. All living species are in the suborder Unipeltata, which arose around 250 million years ago. They are among the most important predators in many shallow, tropical and subtropical marine habitats. However, despite being common in their habitats, they are poorly understood, as many species spend most of their lives sheltering in burrows and holes. Dubbed "sea locusts" by ancient Assyrians, "prawn killers" in Australia, and now sometimes referred to as "thumb splitters" due to their ability to inflict painful wounds if handled incautiously, mantis shrimp possess powerful raptorial appendages that are used to attack and kill prey either by spearing, stunning, or dismembering; the shape of these appendages are often used to classify them into groups: extant mantis shrimp either have appendages which form heavily mineralized "clubs" that can strike with great power, or they have sharp, grasping forelimbs used to swiftly seize prey (similar to those of praying mantis, hence their common name). Description Mantis shrimp typically grow to around in length, while a few can reach up to . A mantis shrimp's carapace covers only the rear part of the head and the first four segments of the thorax. Mantis shrimp widely range in colour, with species mostly being shades of brown to having multiple contrasting, vivid colours. Claws The mantis shrimp's second pair of thoracic appendages is adapted for powerful close-range combat. These claws can accelerate at a rate comparable to that of a .22 caliber bullet when fired, having around 1500 newtons of force with each swing/attack. The appendage differences divide mantis shrimp into two main types: those that hunt by impaling their prey with spear-like structures and those that smash prey with a powerful blow from a heavily mineralised club-like appendage. A considerable amount of damage can be inflicted after impact with these robust, hammer-like claws. This club is further divided into three subregions: the impact region, the periodic region, and the striated region. Mantis shrimp are commonly separated into distinct groups (most are categorized as either spearers or smashers but there are some outliers) as determined by the type of claws they possess: Spearers are armed with spiny appendages - the spines having barbed tips - used to stab and snag prey. These raptorial appendages resemble those of praying mantids, hence the common name of these crustaceans. This is the type found in most mantis shrimp. Smashers possess a much more developed club and a more rudimentary spear (which is nevertheless quite sharp and still used in fights between their own kind); the club is used to bludgeon and smash their meals apart. The inner aspect of the terminal portion of the appendage can also possess a sharp edge, used to cut prey while the mantis shrimp swims. This is found in the families Gonodactylidae, Odontodactylidae, Protosquillidae, and Takuidae. Spike Smashers (hammers or primitive smashers): An unspecialized form, found only in the basal family Hemisquillidae. The last segment lacks spines except at the tip, so it is not as effective at spearing but can also be used for smashing. Hatchet: An unusual, highly derived appendage that only a few species have. This body plan is largely unresearched. 
Both types strike by rapidly unfolding and swinging their raptorial claws at the prey, and can inflict serious damage on victims significantly greater in size than themselves. In smashers, these two weapons are employed with blinding quickness, with an acceleration of 10,400 g (102,000 m/s2 or 335,000 ft/s2) and speeds of from a standing start. Because they strike so rapidly, they generate vapor-filled bubbles in the water between the appendage and the striking surface—known as cavitation bubbles. The collapse of these cavitation bubbles produces measurable forces on their prey in addition to the instantaneous forces of 1,500 newtons that are caused by the impact of the appendage against the striking surface, which means that the prey is hit twice by a single strike; first by the claw and then by the collapsing cavitation bubbles that immediately follow. Even if the initial strike misses the prey, the resulting shock wave can be enough to stun or kill. Smashers use this ability to attack crabs, snails, rock oysters, and other molluscs, their blunt clubs enabling them to crack the shells of their prey into pieces. Spearers, however, prefer the meat of softer animals, such as fish and cephalopods, which their barbed claws can more easily slice and snag. The appendages are being studied as a microscale analogue for new macroscale material structures. Eyes The eyes of the mantis shrimp are mounted on mobile stalks and can move independently of each other. The extreme mobility allows them to be rotated in all three dimensions, yet the position of their eyes has shown to have no effect on the perception of their surroundings. They are thought to have the most complex eyes in the animal kingdom and have the most complex front-end for any visual system ever discovered. Each compound eye is made up of tens of thousands of ommatidia, clusters of photoreceptor cells. Each eye consists of two flattened hemispheres separated by parallel rows of specialised ommatidia, collectively called the midband. The number of omatidial rows in the midband ranges from two to six. This divides the eye into three regions. This configuration enables mantis shrimp to see objects that are near the mid-plane of an eye with three parts of the same eye (as can be seen in some photos showing three pseudopupils in one eye). In other words, each eye possesses trinocular vision, and therefore depth perception, for objects near its mid-plane. The upper and lower hemispheres are used primarily for recognition of form and motion, like the eyes of many other crustaceans. Compared with the three types of photoreceptor cell that humans possess in their eyes, the eyes of a mantis shrimp have between 12 and 16 types of photoreceptor cells. Furthermore, some of these stomatopods can tune the sensitivity of their long wavelength colour vision to adapt to their environment. This phenomenon, called "spectral tuning", is species-specific. Cheroske et al. did not observe spectral tuning in Neogonodactylus oerstedii, the species with the most monotonous natural photic environment. In N. bredini, a species with a variety of habitats ranging from a depth of 5 to 10 m (although it can be found down to 20 m below the surface), spectral tuning was observed, but the ability to alter wavelengths of maximum absorbance was not as pronounced as in N. wennerae, a species with much higher ecological/photic habitat diversity. 
The diversity of spectral tuning in Stomatopoda is also hypothesised to be directly linked to mutations in the retinal binding pocket of the opsin. The huge diversity seen in mantis shrimp photoreceptors likely comes from ancient gene duplication events. One consequence of this duplication is the lack of correlation between opsin transcript number and physiologically expressed photoreceptors. One species may have six different opsin genes, but only express one spectrally distinct photoreceptor. Over the years, some mantis shrimp species have lost the ancestral phenotype, although some still maintain 16 distinct photoreceptors and four light filters. Species that live in a variety of photic environments have high selective pressure for photoreceptor diversity, and maintain ancestral phenotypes better than species that live in murky waters or are primarily nocturnal. Mantis shrimp can perceive wavelengths of light ranging from deep ultraviolet (300 nm) to far-red (720 nm) and polarised light. In mantis shrimp in the superfamilies Gonodactyloidea, Lysiosquilloidea, and Hemisquilloidea, the midband is made up of six ommatidial rows. Rows 1 to 4 process colours, while rows 5 and 6 detect circularly or linearly polarised light. Twelve types of photoreceptor cells are in rows 1 to 4, four of which detect ultraviolet light. Despite the impressive range of wavelengths that mantis shrimp have the ability to see, they do not have the ability to discriminate wavelengths less than 25 nm apart. It is suggested that not discriminating between closely positioned wavelengths allows these organisms to make determinations of its surroundings with little processing delay. Having little delay in evaluating surroundings is important for mantis shrimp, since they are territorial and frequently in combat. However, some mantis shrimp have been found capable of distinguishing between high-saturation and low-saturation colors. Rows 1 to 4 of the midband are specialised for colour vision, from deep ultraviolet to far red. Their UV vision can detect five different frequency bands in the deep ultraviolet. To do this, they use two photoreceptors in combination with four different colour filters. They are currently believed insensitive to infrared light. The optical elements in these rows have eight different classes of visual pigments and the rhabdom (area of eye that absorbs light from a single direction) is divided into three different pigmented layers (tiers), each for different wavelengths. The three tiers in rows 2 and 3 are separated by colour filters (intrarhabdomal filters) that can be divided into four distinct classes, two classes in each row. Each consists of a tier, a colour filter of one class, a tier again, a colour filter of another class, and then a last tier. These colour filters allow the mantis shrimp to see with diverse colour vision. Without the filters, the pigments themselves range only a small segment of the visual spectrum, about 490 to 550 nm. Rows 5 and 6 are also segregated into different tiers, but have only one class of visual pigment, the ninth class, and are specialised for polarisation vision. Depending upon the species, they can detect circularly polarised light, linearly polarised light, or both. A tenth class of visual pigment is found in the upper and lower hemispheres of the eye. 
Some species have at least 16 photoreceptor types, which are divided into four classes (their spectral sensitivity is further tuned by colour filters in the retinas), 12 for colour analysis in the different wavelengths (including six which are sensitive to ultraviolet light) and four for analysing polarised light. By comparison, most humans have only four visual pigments, of which three are dedicated to see colour, and human lenses block ultraviolet light. The visual information leaving the retina seems to be processed into numerous parallel data streams leading into the brain, greatly reducing the analytical requirements at higher levels. The midband covers only about 5 to 10° of the visual field at any given instant, but like most crustaceans, mantis shrimps' eyes are mounted on stalks. In mantis shrimps, the movement of the stalked eye is unusually free, and can be driven up to 70° in all possible axes of movement by eight eyecup muscles divided into six functional groups. By using these muscles to scan the surroundings with the midband, they can add information about forms, shapes, and landscape, which cannot be detected by the upper and lower hemispheres of the eyes. They can also track moving objects using large, rapid eye movements where the two eyes move independently. By combining different techniques, including movements in the same direction, the midband can cover a very wide range of the visual field. Polarized light Six species of mantis shrimp have been reported to be able to detect circularly polarised light, which has not been documented in any other animal, and whether it is present across all species is unknown. They perform this feat by converting circularly polarized light into linearly polarized light via quarter-waveplates formed from stacks of microvilli. Some of their biological quarter-waveplates perform more uniformly over the visual spectrum than any current man-made polarising optics, and this could inspire new types of optical media that would outperform early 21st century Blu-ray Disc technology. The species Gonodactylus smithii is the only organism known to simultaneously detect the four linear and two circular polarisation components required to measure all four Stokes parameters, which yield a full description of polarisation. It is thus believed to have optimal polarisation vision. It is the only animal known to have dynamic polarisation vision. This is achieved by rotational eye movements to maximise the polarisation contrast between the object in focus and its background. Since each eye moves independently from the other, it creates two separate streams of visual information. Suggested advantages of visual system What advantage sensitivity to polarisation confers is unclear; however, polarisation vision is used by other animals for sexual signaling and secret communication that avoids the attention of predators. This mechanism could provide an evolutionary advantage; it only requires small changes to the cell in the eye and could easily lead to natural selection. The eyes of mantis shrimps may enable them to recognise different types of coral, prey species (which are often transparent or semitransparent), or predators, such as barracuda, which have shimmering scales. Alternatively, the manner in which they hunt (very rapid movements of the claws) may require very accurate ranging information, which would require accurate depth perception. The capacity to see UV light may enable observation of otherwise hard-to-detect prey on coral reefs. 
During mating rituals, mantis shrimps actively fluoresce, and the wavelength of this fluorescence matches the wavelengths detected by their eye pigments. Females are only fertile during certain phases of the tidal cycle; the ability to perceive the phase of the moon may, therefore, help prevent wasted mating efforts. It may also give these shrimps information about the size of the tide, which is important to species living in shallow water near the shore. Researchers suspect that the broader variety of photoreceptors in the eyes of mantis shrimps allows visual information to be preprocessed by the eyes instead of the brain, which would otherwise have to be larger to deal with the complex task of opponent process colour perception used by other species, thus requiring more time and energy. While the eyes themselves are complex and not yet fully understood, the principle of the system appears to be simple. It has a similar set of sensitivities to the human visual system, but works in the opposite manner. In the human brain, the inferior temporal cortex has a huge number of colour-specific neurons, which process visual impulses from the eyes to extract colour information. The mantis shrimp instead uses the different types of photoreceptors in its eyes to perform the same function as the human brain neurons, resulting in a hardwired and more efficient system for an animal that requires rapid colour identification. Humans have fewer types of photoreceptors, but more colour-tuned neurons, while mantis shrimp appear to have fewer colour neurons and more classes of photoreceptors. However, a study from 2022 failed to find unequivocal evidence for a solely "barcode"-like visual system as described above. Stomatopods of the species Haptosquilla trispinosa were able to distinguish high and low-saturation colors from grey, contravening Thoen and colleagues. It may be that some combination of color opponency and photoreceptor activation comparison/barcode analysis is present. The shrimps use a form of reflector of polarised light not seen in nature or human technology before. It allows the manipulation of light across the structure rather than through its depth, the typical way polarisers work. This allows the structure to be both small and microscopically thin, and still be able to produce big, bright, colourful polarised signals. Ecology and life history Mantis shrimp are long-lived and exhibit complex behaviour, such as ritualised fighting, or by the use of fluorescent patterns on their bodies for signalling with their own and perhaps even other species. Many have developed complex social behaviours to defend their space from rivals; mantis shrimp are typically solitary sea creatures that may aggressively defend their burrows, either rock formations or self-dug intricate burrows in the seabed. They are rarely seen outside their homes except to feed and relocate. They can learn and remember well, and are able to recognise neighbouring mantis shrimp with which they frequently interact. They can recognise them by visual signs and even by individual smell. Mantis shrimp can be diurnal, nocturnal, or crepuscular (active at twilight), depending on the species. Unlike most crustaceans, they sometimes hunt, chase, and kill prey. Although some live in temperate seas, most species live in tropical and subtropical waters in the Indian and Pacific Oceans, encompassing the seas between eastern Africa and Hawaii. Mantis shrimp live in burrows where they spend the majority of their time. 
The spearing species build their habitat in soft sediments and the smashing species make burrows in hard substrata, such as cavities in coral. These two habitats are crucial for their ecology since they use burrows as sites for retreat and as locations for consuming their prey. Burrows and coral cavities are also used as sites for mating and for keeping their eggs safe. Stomatopod body size undergoes periodic growth which necessitates finding a new cavity or burrow that will fit the animal's new diameter. Some spearing species can modify their pre-established habitat if the burrow is made of silt or mud, which can be expanded. Stomatopods can have as many as 20 or 30 breeding episodes over their lifespan. Depending on the species, the eggs are either laid and kept in a burrow, or are carried around under the female's tail until they hatch, as in a number of other crustaceans. Also depending on the species, males and females may come together only to mate, or they may bond in monogamous, long-term relationships. In the monogamous species, the mantis shrimps remain with the same partner up to 20 years. They share the same burrow and may be able to coordinate their activities. Both sexes often take care of the eggs (bi-parental care). In Pullosquilla and some species in Nannosquilla, the female lays two clutches of eggs – one that the male tends and one that the female tends. In other species, the female looks after the eggs while the male hunts for both of them. After the eggs hatch, the offspring may spend up to three months as plankton. Although stomatopods typically display the standard types of movement seen in true shrimp and lobsters, one species, Nannosquilla decemspinosa, has been observed rolling itself into a crude wheel (somewhat resembling volvation). The species lives in shallow, sandy areas. At low tides, N. decemspinosa is often stranded by its short rear legs, which are sufficient for movement when the body is supported by water, but not on dry land. The mantis shrimp thus performs a forward flip in an attempt to roll towards the nearest tide pool. N. has been observed to roll repeatedly for , but specimens typically travel less than . Systematics Evolutionary history Although the Devonian Eopteridae have been suggested to be early stomatopods, their fragmentary known remains make the referral uncertain. The oldest unambiguous stem-group mantis shrimp date to the Carboniferous (359–300 million years ago). Stem-group mantis shrimp are assigned to two major groups the Palaeostomatopodea and the Archaeostomatopodea, the latter of which are more closely related to modern mantis shrimp, which are assigned to the clade Unipeltata. The oldest members of Unipeltata date to the Triassic. Selected extant species Family Gonodactylidae Gonodactylus smithii Family Hemisquillidae Hemisquilla ensigera Hemisquilla australiensis Hemisquilla braziliensis Hemisquilla californiensis Family Lysiosquillidae Lysiosquillina maculata, zebra mantis shrimp or striped mantis shrimp Family Nannosquillidae Nannosquilla decemspinosa Platysquilla eusebia Family Odontodactylidae Odontodactylus scyllarus, peacock mantis shrimp Odontodactylus latirostris, pink-eared mantis shrimp Family Pseudosquillidae Pseudosquilla ciliata, common mantis shrimp Family Squillidae Rissoides desmaresti Squilla empusa Squilla mantis Family Tetrasquillidae Heterosquilla tricarinata, New Zealand A large number of mantis shrimp species were first scientifically described by one carcinologist, Raymond B. 
Manning; the collection of stomatopods he amassed is the largest in the world, covering 90% of the known species whilst 10% are still unknown. Culinary uses The mantis shrimp is eaten by a variety of cultures. In Japanese cuisine, the mantis shrimp species Oratosquilla oratoria, called , is eaten boiled as a sushi topping, and occasionally raw as sashimi. Mantis shrimps are also abundant along Vietnam's coast, known in Vietnamese as bề bề, tôm tích or tôm tít. In regions such as Nha Trang, they are called bàn chải, named for its resemblance to a scrub brush. The shrimp can be steamed, boiled, grilled, or dried, used with pepper, salt and lime, fish sauce and tamarind, or fennel. In Cantonese cuisine, the mantis shrimp is known as "urinating shrimp" () because of their tendency to shoot a jet of water when picked up. After cooking, their flesh is closer to that of lobsters than that of shrimp, and like lobsters, their shells are quite hard and require some pressure to crack. One common preparation is first deep-frying, then stir-frying with garlic and chili peppers. They may also be boiled or steamed. In the Mediterranean countries, the mantis shrimp Squilla mantis is a common seafood, especially on the Adriatic coasts (canocchia) and the Gulf of Cádiz (galera). In the Philippines, the mantis shrimp is known as tatampal, hipong-dapa, pitik-pitik, or alupihang-dagat, and is cooked and eaten like any other shrimp. In Kiribati, mantis shrimp called te waro in Gilbertese are abundant and are eaten boiled. In Hawaii, some mantis shrimp have grown unusually large in the contaminated water of the Grand Ala Wai Canal in Waikiki. The dangers normally associated with consuming seafood caught in contaminated waters are present in these mantis shrimp. Aquaria Some saltwater aquarists keep stomatopods in captivity. The peacock mantis is especially colourful and desired in the trade. While some aquarists value mantis shrimps, others consider them harmful pests, because they are voracious predators, eating other desirable inhabitants of the tank. Additionally, some rock-burrowing species can do more damage to live rock than the fishkeeper would prefer. The live rock with mantis shrimp burrows is considered useful by some in the marine aquarium trade and is often collected. A piece of live rock not uncommonly conveys a live mantis shrimp into an aquarium. Once inside the tank, it may feed on fish and other inhabitants, and is notoriously difficult to catch when established in a well-stocked tank. While there are accounts of this shrimp breaking glass tanks, they are rare and are usually the result of the shrimp being kept in too small a tank. While stomatopods do not eat coral, smashers can damage it if they try to make a home within it.
Biology and health sciences
Malacostraca
Animals
298223
https://en.wikipedia.org/wiki/Glassblowing
Glassblowing
Glassblowing is a glassforming technique that involves inflating molten glass into a bubble (or parison) with the aid of a blowpipe (or blow tube). A person who blows glass is called a glassblower, glassmith, or gaffer. A lampworker (often also called a glassblower or glassworker) manipulates glass with the use of a torch on a smaller scale, such as in producing precision laboratory glassware out of borosilicate glass. Technology Principles As a novel glass forming technique created in the middle of the 1st century BC, glassblowing exploited a working property of glass that was previously unknown to glassworkers; inflation, which is the expansion of a molten blob of glass by introducing a small amount of air into it. That is based on the liquid structure of glass where the atoms are held together by strong chemical bonds in a disordered and random network, therefore molten glass is viscous enough to be blown and gradually hardens as it loses heat. To increase the stiffness of the molten glass, which in turn makes the process of blowing easier, there was a subtle change in the composition of glass. With reference to their studies of the ancient glass assemblages from Sepphoris. postulated that the concentration of natron, which acts as flux in glass, is slightly lower in blown vessels than those manufactured by casting. Lower concentration of natron would have allowed the glass to be stiffer for blowing. During blowing, thinner layers of glass cool faster than thicker ones and become more viscous than the thicker layers. That allows production of blown glass with uniform thickness instead of causing blow-through of the thinned layers. A full range of glassblowing techniques was developed within decades of its invention. The two major methods of glassblowing are free-blowing and mold-blowing. Free-blowing This method held a pre-eminent position in glassforming ever since its introduction in the middle of the 1st century BC until the late 19th century, and is still widely used as a glassforming technique, especially for artistic purposes. The process of free-blowing involves the blowing of short puffs of air into a molten portion of glass called a "gather" which has been spooled at one end of the blowpipe. This has the effect of forming an elastic skin on the interior of the glass blob that matches the exterior skin caused by the removal of heat from the furnace. The glassworker can then quickly inflate the molten glass to a coherent blob and work it into a desired shape. Researchers at the Toledo Museum of Art attempted to reconstruct the ancient free-blowing technique by using clay blowpipes. The result proved that short clay blowpipes of about facilitate free-blowing because they are simple to handle and to manipulate and can be re-used several times. Skilled workers are capable of shaping almost any vessel forms by rotating the pipe, swinging it and controlling the temperature of the piece while they blow. They can produce a great variety of glass objects, ranging from drinking cups to window glass. An outstanding example of the free-blowing technique is the Portland Vase, which is a cameo manufactured during the Roman period. An experiment was carried out by Gudenrath and Whitehouse with the aim of re-creating the Portland Vase. A full amount of blue glass required for the body of the vase was gathered on the end of the blowpipe and was subsequently dipped into a pot of hot white glass. 
Inflation occurred when the glassworker blew the molten glass into a sphere which was then stretched or elongated into a vase with a layer of white glass overlying the blue body. Mold-blowing Mold-blowing was an alternative glassblowing method that came after the invention of free-blowing, during the first part of the second quarter of the 1st century AD. A glob of molten glass is placed on the end of the blowpipe, and is then inflated into a wooden or metal carved mold. In that way, the shape and the texture of the bubble of glass is determined by the design on the interior of the mold rather than the skill of the glassworker. Two types of mold, namely single-piece molds and multi-piece molds, are frequently used to produce mold-blown vessels. The former allows the finished glass object to be removed in one movement by pulling it upwards from the single-piece mold and is largely employed to produce tableware and utilitarian vessels for storage and transportation. Whereas the latter is made in multi-paneled mold segments that join together, thus permitting the development of more sophisticated surface modeling, texture and design. The Roman leaf beaker which is now on display in the J. Paul Getty Museum was blown in a three-part mold decorated with the foliage relief frieze of four vertical plants. Meanwhile, Taylor and Hill tried to reproduce mold-blown vessels by using three-part molds made of different materials. The result suggested that metal molds, in particular bronze, are more effective in producing high-relief design on glass than plaster or wooden molds. The development of the mold-blowing technique has enabled the speedy production of glass objects in large quantity, thus encouraging the mass production and widespread distribution of glass objects. Modern glassblowing The transformation of raw materials into glass takes place at around ; the glass emits enough heat to appear almost white hot. The glass is then left to "fine out" (allowing the bubbles to rise out of the mass), and then the working temperature is reduced in the furnace to around . At this stage, the glass appears to be a bright orange color. Though most glassblowing is done between , "soda-lime" glass remains somewhat plastic and workable at as low as . Annealing is usually done between . Glassblowing involves three furnaces. The first, which contains a crucible of molten glass, is simply referred to as "the furnace". The second is called the "glory hole", and is used to reheat a piece in between steps of working with it. The final furnace is called the "lehr" or "annealer", and is used to slowly cool the glass, over a period of a few hours to a few days, depending on the size of the pieces. This keeps the glass from cracking or shattering due to thermal stress. Historically, all three furnaces were contained in one structure, with a set of progressively cooler chambers for each of the three purposes. Tools The major tools used by a glassblower are the blowpipe (or blow tube), punty (or punty rod, pontil, or mandrel), bench, marver, blocks, jacks, paddles, tweezers, newspaper pads, and a variety of shears. Blowpipe The tip of the blowpipe is first preheated; then dipped in the molten glass in the furnace. The molten glass is "gathered" onto the end of the blowpipe in much the same way that viscous honey is picked up on a honey dipper. This glass is then rolled on the marver, which was traditionally a flat slab of marble, but today is more commonly a fairly thick flat sheet of steel. 
This process, called "marvering", forms a cool skin on the exterior of the molten glass blob, and shapes it. Then air is blown into the pipe, creating a bubble. Next, the glassworker can gather more glass over that bubble to create a larger piece. Once a piece has been blown to its approximate final size, the bottom is finalized. Then, the molten glass is attached to a stainless steel or iron rod called a "punty" for shaping and transferring the hollow piece from the blowpipe to provide an opening and to finalize the top. Bench The bench is a glassblower's workstation; it includes places for the glassblower to sit, for the handheld tools, and two rails that the pipe or punty rides on while the blower works with the piece. Blocks Blocks are ladle-like tools made from water-soaked fruitwood, and are used similarly to the marver to shape and cool a piece in the early steps of creation. In similar fashion, pads of water-soaked newspaper (roughly square, thick), held in the bare hand, can be used to shape the piece. Jacks Jacks are tools shaped somewhat like large tweezers with two blades, which are used for forming shape later in the creation of a piece. Paddles are flat pieces of wood or graphite used for creating flat spots such as a bottom. Tweezers are used to pick out details or to pull on the glass. There are two important types of shears, straight shears and diamond shears. Straight shears are essentially bulky scissors, used for making linear cuts. Diamond shears have blades that form a diamond shape when partially open. These are used for cutting off masses of glass. Patterning There are many ways to apply patterns and color to blown glass, including rolling molten glass in powdered color or larger pieces of colored glass called "frit". Complex patterns with great detail can be created through the use of cane (rods of colored glass) and murrine (rods cut in cross-sections to reveal patterns). These pieces of color can be arranged in a pattern on a flat surface, and then "picked up" by rolling a bubble of molten glass over them. One of the most exacting and complicated caneworking techniques is "reticello", which involves creating two bubbles from cane, each twisted in a different direction and then combining them and blowing out the final form. Lampworkers, usually but not necessarily work on a much smaller scale, historically using alcohol lamps and breath- or bellows-driven air to create a hot flame at a workbench to manipulate preformed glass rods and tubes. These stock materials took form as laboratory glassware, beads, and durable scientific "specimens"—miniature glass sculpture. The craft, which was raised to an art form in the late 1960s by Hans Godo Frabel (later followed by lampwork artists such as Milon Townsend and Robert Mickelson), is still practiced today. The modern lampworker uses a flame of oxygen and propane or natural gas. The modern torch permits working both the soft glass from the furnace worker and the borosilicate glass (low-expansion) of the scientific glassblower. This latter worker may also have multiple headed torches and special lathes to help form the glass or fused quartz used for special projects. History Earliest evidence Glassblowing was invented by Syrian craftsmen from Hama and Aleppo between 27 BC and 14 AD. The ancient Romans copied the technique consisting of blowing air into molten glass with a blowpipe making it into a bubble. 
Hence, tube blowing not only represents the initial attempts of experimentation by glassworkers at blowing glass, it is also a revolutionary step that induced a change in conception and a deep understanding of glass. Such inventions swiftly eclipsed all other traditional methods, such as casting and core-forming, in working glass. Evidence of glass blowing comes even earlier from the Indian subcontinent in the form of Indo-Pacific beads which uses glass blowing to make cavity before being subjected to tube drawn technique for bead making dated more than 2500 BP. Beads are made by attaching molten glass gather to the end of a blowpipe, a bubble is then blown into the gather. Roman Empire The invention of glassblowing coincided with the establishment of the Roman Empire in the 1st century BC, which enhanced the spread and dominance of this new technology. Glassblowing was greatly supported by the Roman government (although Roman citizens could not be "in trade", in particular under the reign of Augustus), and glass was being blown in many areas of the Roman world. On the eastern borders of the Empire, the first large glass workshops were set up by the Phoenicians in the birthplace of glassblowing in contemporary Lebanon and Israel as well as in the neighbouring province of Cyprus. Ennion for example, was among the most prominent glassworkers from Lebanon of the time. He was renowned for producing the multi-paneled mold-blown glass vessels that were complex in their shapes, arrangement and decorative motifs. The complexity of designs of these mold-blown glass vessels illustrated the sophistication of the glassworkers in the eastern regions of the Roman Empire. Mold-blown glass vessels manufactured by the workshops of Ennion and other contemporary glassworkers such as Jason, Nikon, Aristeas, and Meges, constitutes some of the earliest evidence of glassblowing found in the eastern territories. Eventually, the glassblowing technique reached Egypt and was described in a fragmentary poem printed on papyrus which was dated to the 3rd century AD. The Roman hegemony over the Mediterranean areas resulted in the substitution of glassblowing for earlier Hellenistic casting, core-forming and mosaic fusion techniques. The earliest evidence of blowing in Hellenistic work consists of small blown bottles for perfume and oil retrieved from the glass workshops on the Greek island of Samothrace and at Corinth in mainland Greece which were dated to the 1st century AD. Later, the Phoenician glassworkers exploited their glassblowing techniques and set up their workshops in the western territories of the Roman Empire, first in Italy by the middle of the 1st century AD. Rome, the heartland of the empire, soon became a major glassblowing center, and more glassblowing workshops were subsequently established in other provinces of Italy, for example Campania, Morgantina and Aquileia. A great variety of blown glass objects, ranging from unguentaria (toiletry containers for perfume) to cameo, from tableware to window glass, were produced. From there, escaping craftsmen (who had been forbidden to travel) otherwise advanced to the rest of Europe by building their glassblowing workshops in the north of the Alps (which is now Switzerland), and then at sites in northern Europe in present-day France and Belgium. One of the most prolific glassblowing centers of the Roman period was established in Cologne on the river Rhine in Germany by the late 1st century BC. 
Stone base molds and terracotta base molds were discovered from these Rhineland workshops, suggesting the adoption and the application of mold-blowing technique by the glassworkers. Besides, blown flagons and blown jars decorated with ribbing, as well as blown perfume bottles with letters CCAA or CCA which stand for Colonia Claudia Agrippiniensis, were produced from the Rhineland workshops. Remains of blown blue-green glass vessels, for example bottles with handles, collared bowls and indented beakers, were found in abundance from the local glass workshops at Poetovio and Celeia in Slovenia. Surviving physical evidence, such as blowpipes and molds which are indicative of the presence of blowing, is fragmentary and limited. Pieces of clay blowpipes were retrieved from the late 1st century AD glass workshop at Avenches in Switzerland. Clay blowpipes, also known as mouthblowers, were made by the ancient glassworkers due to the accessibility and availability of the resources before the introduction of the metal blowpipes. Hollow iron rods, together with blown vessel fragments and glass waste dating to approximately 4th century AD, were recovered from the glass workshop in Mérida of Spain, as well as in Salona in Croatia. Middle Ages The glass blowing tradition was carried on in Europe from the medieval period through the Middle Ages to the Renaissance in the demise of the Roman Empire in the 5th century AD. During the early medieval period, the Franks manipulated the technique of glassblowing by creating the simple corrugated molds and developing the claws decoration techniques. Blown glass objects, such as the drinking vessels that imitated the shape of the animal horn were produced in the Rhine and Meuse valleys, as well as in Belgium. The Byzantine glassworkers made mold-blown glass decorated with Christian and Jewish symbols in Jerusalem between the late 6th century and the middle of the 7th century AD. Mold-blown vessels with facets, relief and linear-cut decoration were discovered at Samarra in the Islamic lands. Renaissance Europe witnessed the revitalization of glass industry in Italy. Glassblowing, in particular the mold-blowing technique, was employed by the Venetian glassworkers from Murano to produce the fine glassware which is also known as "cristallo". The technique of glassblowing, coupled with the cylinder and crown methods, was used to manufacture sheet or flat glass for window panes in the late 17th century. The applicability of glassblowing was so widespread that glass was being blown in many parts of the world, for example, in China, Japan and the Islamic Lands. The Nøstetangen Museum at Hokksund, Norway, shows how glass was made according to ancient tradition. The Nøstetangen glassworks had operated there from 1741 to 1777, producing table-glass and chandeliers in the German and English styles. Industrial Revolution Recent developments The "studio glass movement" began in 1962 when Harvey Littleton, a ceramics professor, and Dominick Labino, a chemist and engineer, held two workshops at the Toledo Museum of Art, during which they started experimenting with melting glass in a small furnace and creating blown glass art. Littleton promoted the use of small furnaces in individual artists' studios. This approach to glassblowing blossomed into a worldwide movement, producing such flamboyant and prolific artists as Dale Chihuly, Dante Marioni, Fritz Driesbach and Marvin Lipofsky as well as scores of other modern glass artists. 
Today there are many different institutions around the world that offer glassmaking resources for training and sharing equipment. Working with large or complex pieces requires a team of several glassworkers, in a complex choreography of precisely timed movements. This practical requirement has encouraged collaboration among glass artists, in both semi-permanent and temporary working groups. In addition, recent developments in technology allow for the use of glass components in high-tech applications. Using machinery to shape and form glass makes it possible to manufacture glass products of the highest quality and accuracy. As a result, glass is often used in semiconductor, analytical, life science, industrial, and medical applications. In literature The writer Daphne du Maurier was descended from a family of glass-blowers in 18th-century France, and she wrote about her forebears in the 1963 historical novel The Glass-Blowers. The subject of mystery novelist Donna Leon's Through a Glass, Darkly is the investigation of a crime in a Venetian glassworks on the island of Murano.
Technology
Materials
null
298420
https://en.wikipedia.org/wiki/Maximum%20and%20minimum
Maximum and minimum
In mathematical analysis, the maximum and minimum of a function are, respectively, the greatest and least value taken by the function. Known generically as extremum, they may be defined either within a given range (the local or relative extrema) or on the entire domain (the global or absolute extrema) of a function. Pierre de Fermat was one of the first mathematicians to propose a general technique, adequality, for finding the maxima and minima of functions. As defined in set theory, the maximum and minimum of a set are the greatest and least elements in the set, respectively. Unbounded infinite sets, such as the set of real numbers, have no minimum or maximum. In statistics, the corresponding concept is the sample maximum and minimum. Definition A real-valued function f defined on a domain X has a global (or absolute) maximum point at x∗, if for all x in X. Similarly, the function has a global (or absolute) minimum point at x∗, if for all x in X. The value of the function at a maximum point is called the of the function, denoted , and the value of the function at a minimum point is called the of the function, (denoted for clarity). Symbolically, this can be written as follows: is a global maximum point of function if The definition of global minimum point also proceeds similarly. If the domain X is a metric space, then f is said to have a local (or relative) maximum point at the point x∗, if there exists some ε > 0 such that for all x in X within distance ε of x∗. Similarly, the function has a local minimum point at x∗, if f(x∗) ≤ f(x) for all x in X within distance ε of x∗. A similar definition can be used when X is a topological space, since the definition just given can be rephrased in terms of neighbourhoods. Mathematically, the given definition is written as follows: Let be a metric space and function . Then is a local maximum point of function if such that The definition of local minimum point can also proceed similarly. In both the global and local cases, the concept of a can be defined. For example, x∗ is a if for all x in X with , we have , and x∗ is a if there exists some such that, for all x in X within distance ε of x∗ with , we have . Note that a point is a strict global maximum point if and only if it is the unique global maximum point, and similarly for minimum points. A continuous real-valued function with a compact domain always has a maximum point and a minimum point. An important example is a function whose domain is a closed and bounded interval of real numbers (see the graph above). Search Finding global maxima and minima is the goal of mathematical optimization. If a function is continuous on a closed interval, then by the extreme value theorem, global maxima and minima exist. Furthermore, a global maximum (or minimum) either must be a local maximum (or minimum) in the interior of the domain, or must lie on the boundary of the domain. So a method of finding a global maximum (or minimum) is to look at all the local maxima (or minima) in the interior, and also look at the maxima (or minima) of the points on the boundary, and take the greatest (or least) one.Minima For differentiable functions, Fermat's theorem states that local extrema in the interior of a domain must occur at critical points (or points where the derivative equals zero). However, not all critical points are extrema. 
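The recipe just described (compare the interior critical points with the boundary) can be carried out mechanically. The sketch below (Python, for an arbitrarily chosen cubic on a closed interval) evaluates the function at its critical points and endpoints and picks out the global extrema.

```python
# Global extrema of f(x) = x**3 - 3x on the closed interval [-2, 3]
# (a hypothetical example; the same recipe works for any differentiable f).

def f(x):
    return x ** 3 - 3 * x

# f'(x) = 3x**2 - 3 vanishes at x = -1 and x = 1, both inside the interval.
critical_points = [-1.0, 1.0]
endpoints = [-2.0, 3.0]

candidates = critical_points + endpoints
values = {x: f(x) for x in candidates}

x_max = max(values, key=values.get)
x_min = min(values, key=values.get)
print(f"global maximum f({x_max}) = {values[x_max]}")   # f(3) = 18
print(f"global minimum f({x_min}) = {values[x_min]}")   # f(-2) = f(1) = -2
```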
One can often distinguish whether a critical point is a local maximum, a local minimum, or neither by using the first derivative test, second derivative test, or higher-order derivative test, given sufficient differentiability. For any function that is defined piecewise, one finds a maximum (or minimum) by finding the maximum (or minimum) of each piece separately, and then seeing which one is greatest (or least). Examples For a practical example, assume a situation where someone has feet of fencing and is trying to maximize the square footage of a rectangular enclosure, where is the length, is the width, and is the area: The derivative with respect to is: Setting this equal to reveals that is our only critical point. Now retrieve the endpoints by determining the interval to which is restricted. Since width is positive, then , and since that implies that Plug in the critical point as well as the endpoints and into , and the results are and respectively. Therefore, the greatest area attainable with a rectangle of feet of fencing is Functions of more than one variable For functions of more than one variable, similar conditions apply. For example, in the (enlargeable) figure on the right, the necessary conditions for a local maximum are similar to those of a function with only one variable. The first partial derivatives of z (the variable to be maximized) are zero at the maximum (the glowing dot on top in the figure). The second partial derivatives are negative. These are only necessary, not sufficient, conditions for a local maximum, because of the possibility of a saddle point. For use of these conditions to solve for a maximum, the function z must also be differentiable throughout. The second partial derivative test can help classify the point as a relative maximum or relative minimum. In contrast, there are substantial differences between functions of one variable and functions of more than one variable in the identification of global extrema. For example, if a bounded differentiable function f defined on a closed interval in the real line has a single critical point, which is a local minimum, then it is also a global minimum (use the intermediate value theorem and Rolle's theorem to prove this by contradiction). In two and more dimensions, this argument fails. This is illustrated by the function whose only critical point is at (0,0), which is a local minimum with f(0,0) = 0. However, it cannot be a global one, because f(2,3) = −5. Maxima or minima of a functional If the domain of a function for which an extremum is to be found consists itself of functions (i.e. if an extremum is to be found of a functional), then the extremum is found using the calculus of variations. In relation to sets Maxima and minima can also be defined for sets. In general, if an ordered set S has a greatest element m, then m is a maximal element of the set, also denoted as max(S). Furthermore, if S is a subset of an ordered set T and m is the greatest element of S with respect to the order induced by T, then m is the least upper bound of S in T. Similar results hold for least element, minimal element and greatest lower bound. The maximum and minimum function for sets are used in databases, and can be computed rapidly, since the maximum (or minimum) of a set can be computed from the maxima of a partition; formally, they are self-decomposable aggregation functions. In the case of a general partial order, a least element (i.e., one that is less than all others) should not be confused with a minimal element (an element than which nothing is lesser).
Likewise, a greatest element of a partially ordered set (poset) is an upper bound of the set which is contained within the set, whereas the maximal element m of a poset A is an element of A such that if m ≤ b (for any b in A), then m = b. Any least element or greatest element of a poset is unique, but a poset can have several minimal or maximal elements. If a poset has more than one maximal element, then these elements will not be mutually comparable. In a totally ordered set, or chain, all elements are mutually comparable, so such a set can have at most one minimal element and at most one maximal element. Then, due to mutual comparability, the minimal element will also be the least element, and the maximal element will also be the greatest element. Thus in a totally ordered set, we can simply use the terms minimum and maximum. If a chain is finite, then it will always have a maximum and a minimum. If a chain is infinite, then it need not have a maximum or a minimum. For example, the set of natural numbers has no maximum, though it has a minimum. If an infinite chain S is bounded, then the closure Cl(S) of the set occasionally has a minimum and a maximum, in which case they are called the greatest lower bound and the least upper bound of the set S, respectively. Argument of the maximum
Mathematics
Functions: General
null
298428
https://en.wikipedia.org/wiki/Identity%20%28mathematics%29
Identity (mathematics)
In mathematics, an identity is an equality relating one mathematical expression A to another mathematical expression B, such that A and B (which might contain some variables) produce the same value for all values of the variables within a certain domain of discourse. In other words, A = B is an identity if A and B define the same functions, and an identity is an equality between functions that are differently defined. For example, and are identities. Identities are sometimes indicated by the triple bar symbol ≡ instead of =, the equals sign. Formally, an identity is a universally quantified equality. Common identities Algebraic identities Certain identities, such as and , form the basis of algebra, while other identities, such as and , can be useful in simplifying algebraic expressions and expanding them. Trigonometric identities Geometrically, trigonometric identities are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities involving both angles and side lengths of a triangle. Only the former are covered in this article. These identities are useful whenever expressions involving trigonometric functions need to be simplified. Another important application is the integration of non-trigonometric functions: a common technique which involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. One of the most prominent examples of trigonometric identities involves the equation which is true for all real values of . On the other hand, the equation is only true for certain values of , not all. For example, this equation is true when but false when . Another group of trigonometric identities concerns the so-called addition/subtraction formulas (e.g. the double-angle identity , the addition formula for ), which can be used to break down expressions of larger angles into those with smaller constituents. Exponential identities The following identities hold for all integer exponents, provided that the base is non-zero: Unlike addition and multiplication, exponentiation is not commutative. For example, and , but whereas . Also unlike addition and multiplication, exponentiation is not associative either. For example, and , but (2^3)^4 = 8^4 (or 4,096), whereas 2^(3^4) = 2^81 (or 2,417,851,639,229,258,349,412,352). When no parentheses are written, by convention the order is top-down, not bottom-up: a^b^c means a^(b^c), not (a^b)^c. Logarithmic identities Several important formulas, sometimes called logarithmic identities or log laws, relate logarithms to one another: Product, quotient, power and root The logarithm of a product is the sum of the logarithms of the numbers being multiplied; the logarithm of the ratio of two numbers is the difference of the logarithms. The logarithm of the th power of a number is times the logarithm of the number itself; the logarithm of a th root is the logarithm of the number divided by . The following table lists these identities with examples. Each of the identities can be derived after substitution of the logarithm definitions and/or in the left hand sides. Change of base The logarithm logb(x) can be computed from the logarithms of x and b with respect to an arbitrary base k using the following formula: logb(x) = logk(x) / logk(b). Typical scientific calculators calculate the logarithms to bases 10 and e.
Logarithms with respect to any base b can be determined using either of these two logarithms by the previous formula: Given a number x and its logarithm logb(x) to an unknown base b, the base is given by: Hyperbolic function identities The hyperbolic functions satisfy many identities, all of them similar in form to the trigonometric identities. In fact, Osborn's rule states that one can convert any trigonometric identity into a hyperbolic identity by expanding it completely in terms of integer powers of sines and cosines, changing sine to sinh and cosine to cosh, and switching the sign of every term which contains a product of an even number of hyperbolic sines. The Gudermannian function gives a direct relationship between the trigonometric functions and the hyperbolic ones that does not involve complex numbers. Logic and universal algebra Formally, an identity is a true universally quantified formula of the form where and are terms with no other free variables than The quantifier prefix is often left implicit, when it is stated that the formula is an identity. For example, the axioms of a monoid are often given as the formulas or, shortly, So, these formulas are identities in every monoid. As for any equality, the formulas without quantifier are often called equations. In other words, an identity is an equation that is true for all values of the variables.
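As a rough illustration (not part of the article), the non-associativity of exponentiation and the change-of-base formula above can be checked numerically in Python:

import math

# Exponentiation is not associative; Python's ** operator groups right-to-left ("top-down").
assert (2**3)**4 == 4096
assert 2**(3**4) == 2417851639229258349412352
assert 2**3**4 == 2**(3**4)

# Change of base: log_b(x) = log_k(x) / log_k(b), here with k = e (the natural logarithm).
def log_base(b, x):
    return math.log(x) / math.log(b)

assert math.isclose(log_base(2, 1024), 10.0)
assert math.isclose(log_base(10, 1000), 3.0)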
Mathematics
Basics
null
298448
https://en.wikipedia.org/wiki/Cosmochemistry
Cosmochemistry
Cosmochemistry () or chemical cosmology is the study of the chemical composition of matter in the universe and the processes that led to those compositions. This is done primarily through the study of the chemical composition of meteorites and other physical samples. Given that the asteroid parent bodies of meteorites were some of the first solid material to condense from the early solar nebula, cosmochemists are generally, but not exclusively, concerned with the objects contained within the Solar System. History In 1938, Swiss mineralogist Victor Goldschmidt and his colleagues compiled a list of what they called "cosmic abundances" based on their analysis of several terrestrial and meteorite samples. Goldschmidt justified the inclusion of meteorite composition data into his table by claiming that terrestrial rocks were subjected to a significant amount of chemical change due to the inherent processes of the Earth and the atmosphere. This meant that studying terrestrial rocks exclusively would not yield an accurate overall picture of the chemical composition of the cosmos. Therefore, Goldschmidt concluded that extraterrestrial material must also be included to produce more accurate and robust data. This research is considered to be the foundation of modern cosmochemistry. During the 1950s and 1960s, cosmochemistry became more accepted as a science. Harold Urey, widely considered to be one of the fathers of cosmochemistry, engaged in research that eventually led to an understanding of the origin of the elements and the chemical abundance of stars. In 1956, Urey and his colleague, German scientist Hans Suess, published the first table of cosmic abundances to include isotopes based on meteorite analysis. The continued refinement of analytical instrumentation throughout the 1960s, especially that of mass spectrometry, allowed cosmochemists to perform detailed analyses of the isotopic abundances of elements within meteorites. in 1960, John Reynolds determined, through the analysis of short-lived nuclides within meteorites, that the elements of the Solar System were formed before the Solar System itself which began to establish a timeline of the processes of the early Solar System. Meteorites Meteorites are one of the most important tools that cosmochemists have for studying the chemical nature of the Solar System. Many meteorites come from material that is as old as the Solar System itself, and thus provide scientists with a record from the early solar nebula. Carbonaceous chondrites are especially primitive; that is they have retained many of their chemical properties since their formation 4.56 billion years ago, and are therefore a major focus of cosmochemical investigations. The most primitive meteorites also contain a small amount of material (< 0.1%) which is now recognized to be presolar grains that are older than the Solar System itself, and which are derived directly from the remnants of the individual supernovae that supplied the dust from which the Solar System formed. These grains are recognizable from their exotic chemistry which is alien to the Solar System (such as matrixes of graphite, diamond, or silicon carbide). They also often have isotope ratios which are not those of the rest of the Solar System (in particular, the Sun), and which differ from each other, indicating sources in a number of different explosive supernova events. 
Meteorites also may contain interstellar dust grains, which have collected from non-gaseous elements in the interstellar medium, as one type of composite cosmic dust ("stardust"). Recent findings by NASA, based on studies of meteorites found on Earth, suggest DNA and RNA components (adenine, guanine and related organic molecules), building blocks for life as we know it, may be formed extraterrestrially in outer space. Comets On 30 July 2015, scientists reported that upon the first touchdown of the Philae lander on the surface of comet 67P, measurements by the COSAC and Ptolemy instruments revealed sixteen organic compounds, four of which were seen for the first time on a comet, including acetamide, acetone, methyl isocyanate and propionaldehyde. Research In 2004, scientists reported detecting the spectral signatures of anthracene and pyrene in the ultraviolet light emitted by the Red Rectangle nebula (no other such complex molecules had ever been found before in outer space). This discovery was considered a confirmation of a hypothesis that as nebulae of the same type as the Red Rectangle approach the ends of their lives, convection currents cause carbon and hydrogen in the nebulae's core to get caught in stellar winds, and radiate outward. As they cool, the atoms supposedly bond to each other in various ways and eventually form particles of a million or more atoms. The scientists inferred that since they discovered polycyclic aromatic hydrocarbons (PAHs)—which may have been vital in the formation of early life on Earth—in a nebula, by necessity they must originate in nebulae. In August 2009, NASA scientists identified one of the fundamental chemical building-blocks of life (the amino acid glycine) in a comet for the first time. In 2010, fullerenes (or "buckyballs") were detected in nebulae. Fullerenes have been implicated in the origin of life; according to astronomer Letizia Stanghellini, "It's possible that buckyballs from outer space provided seeds for life on Earth." In August 2011, findings by NASA, based on studies of meteorites found on Earth, suggest DNA and RNA components (adenine, guanine and related organic molecules), building blocks for life as we know it, may be formed extraterrestrially in outer space. In October 2011, scientists reported that cosmic dust contains complex organic matter ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. On August 29, 2012, astronomers at Copenhagen University reported the detection of a specific sugar molecule, glycolaldehyde, in a distant star system. The molecule was found around the protostellar binary IRAS 16293-2422, which is located 400 light years from Earth. Glycolaldehyde is needed to form ribonucleic acid, or RNA, which is similar in function to DNA. This finding suggests that complex organic molecules may form in stellar systems prior to the formation of planets, eventually arriving on young planets early in their formation. In September 2012, NASA scientists reported that polycyclic aromatic hydrocarbons (PAHs), subjected to interstellar medium (ISM) conditions, are transformed, through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively".
Further, as a result of these transformations, the PAHs lose their spectroscopic signature which could be one of the reasons "for the lack of PAH detection in interstellar ice grains, particularly the outer regions of cold, dense clouds or the upper molecular layers of protoplanetary disks." In 2013, the Atacama Large Millimeter Array (ALMA Project) confirmed that researchers have discovered an important pair of prebiotic molecules in the icy particles in interstellar space (ISM). One of the chemicals, found in a giant cloud of gas about 25,000 light-years from Earth in the ISM, may be a precursor to a key component of DNA, and the other may have a role in the formation of an important amino acid. Researchers found a molecule called cyanomethanimine, which produces adenine, one of the four nucleobases that form the "rungs" in the ladder-like structure of DNA. The other molecule, called ethanamine, is thought to play a role in forming alanine, one of the twenty amino acids in the genetic code. Previously, scientists thought such processes took place in the very tenuous gas between the stars. The new discoveries, however, suggest that the chemical formation sequences for these molecules occurred not in gas, but on the surfaces of ice grains in interstellar space. NASA ALMA scientist Anthony Remijan stated that finding these molecules in an interstellar gas cloud means that important building blocks for DNA and amino acids can 'seed' newly formed planets with the chemical precursors for life. In January 2014, NASA reported that current studies on the planet Mars by the Curiosity and Opportunity rovers will now be searching for evidence of ancient life, including a biosphere based on autotrophic, chemotrophic and/or chemolithoautotrophic microorganisms, as well as ancient water, including fluvio-lacustrine environments (plains related to ancient rivers or lakes) that may have been habitable. The search for evidence of habitability, taphonomy (related to fossils), and organic carbon on the planet Mars is now a primary NASA objective. In February 2014, NASA announced a greatly upgraded database for tracking polycyclic aromatic hydrocarbons (PAHs) in the universe. According to scientists, more than 20% of the carbon in the universe may be associated with PAHs, possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets.
Physical sciences
Astronomy basics
Astronomy
298534
https://en.wikipedia.org/wiki/CompactFlash
CompactFlash
CompactFlash (CF) is a flash memory mass storage device used mainly in portable electronic devices. The format was specified and the devices were first manufactured by SanDisk in 1994. CompactFlash became one of the most successful of the early memory card formats, surpassing Miniature Card and SmartMedia. Subsequent formats, such as MMC/SD, various Memory Stick formats, and xD-Picture Card offered stiff competition. Most of these cards are smaller than CompactFlash while offering comparable capacity and speed. Proprietary memory card formats for use in professional audio and video, such as P2 and SxS, are faster, but physically larger and more costly. CompactFlash's popularity is declining as CFexpress is taking over. As of 2022, both Canon and Nikon's newest high end cameras, e.g. the Canon EOS R5, Canon EOS R3, and Nikon Z 9 use CFexpress cards for the higher performance required to record 8K video. Traditional CompactFlash cards use the Parallel ATA interface, but in 2008, a variant of CompactFlash, CFast was announced. CFast (also known as CompactFast) is based on the Serial ATA interface. In November 2010, SanDisk, Sony and Nikon presented a next generation card format to the CompactFlash Association. The new format has a similar form factor to CF/CFast but is based on the PCI Express interface instead of Parallel ATA or Serial ATA. With potential read and write speeds of 1 Gbit/s (125 MB/s) and storage capabilities beyond 2 TiB, the new format is aimed at high-definition camcorders and high-resolution digital cameras, but the new cards are not backward compatible with either CompactFlash or CFast. The XQD card format was officially announced by the CompactFlash Association in December 2011. Description There are two main subdivisions of CF cards, 3.3 mm-thick type I and 5 mm-thick type II (CF2). The type II slot is used by miniature hard drives and some other devices, such as the Hasselblad CFV Digital Back for the Hasselblad series of medium format cameras. There are four main card speeds: original CF, CF High Speed (using CF+/CF2.0), faster CF 3.0 standard and the faster CF 4.0 standard adopted as of 2007. CompactFlash was originally built around Intel's NOR-based flash memory, but has switched to NAND technology. CF is among the oldest and most successful formats, and has held a niche in the professional camera market especially well. It has benefited from both a better cost to memory-size ratio and, for much of the format's life, generally greater available capacity than other formats. CF cards can be used directly in a PC Card slot with a plug adapter, used as an ATA (IDE) or PCMCIA storage device with a passive adapter or with a reader, or attached to other types of ports such as USB or FireWire. As some newer card types are smaller, they can be used directly in a CF card slot with an adapter. Formats that can be used this way include SD/MMC, Memory Stick Duo, xD-Picture Card in a Type I slot and SmartMedia in a Type II slot, as of 2005. Some multi-card readers use CF for I/O as well. The first CompactFlash cards had capacities of 2 to 10 megabytes. This increased to 64 MB in 1996, 128 MB in 1998, 256 MB in 1999, 512 MB in 2001, and 1 GB in 2002. Technical details The CompactFlash interface is a 50-pin subset of the 68-pin PCMCIA connector. "It can be easily slipped into a passive 68-pin PCMCIA Type II to CF Type I adapter that fully meets PCMCIA electrical and mechanical interface specifications", according to compactflash.org. 
The interface operates, depending on the state of a mode pin on power-up, as either a 16-bit PC Card (0x7FF address limit) or as an IDE (PATA) interface. Unlike the PC Card interface, no dedicated programming voltages (Vpp1 and Vpp2) are provided on the CompactFlash interface. CompactFlash IDE mode defines an interface that is smaller than, but electrically identical to, the ATA interface. The CF device contains an ATA controller and appears to the host device as if it were a hard disk. CF devices operate at 3.3 volts or 5 volts, and can be swapped from system to system. CompactFlash supports C-H-S and 28-bit logical block addressing (CF 5.0 introduced support for LBA-48). CF cards with flash memory are able to cope with extremely rapid changes in temperature. Industrial versions of flash memory cards can operate at a range of −45 °C to +85 °C. NOR-based flash has lower density than newer NAND-based systems, and CompactFlash is therefore the physically largest of the three memory card formats introduced in the early 1990s, being derived from the JEIDA/PCMCIA Memory Card formats. The other two are Miniature Card (MiniCard) and SmartMedia (SSFDC). However, CF did switch to NAND type memory later. The IBM Microdrive format, later made by Hitachi, implements the CF Type II interface, but is a hard disk drive (HDD) as opposed to solid-state memory. Seagate also made CF HDDs. Speed CompactFlash IDE (ATA) emulation speed is usually specified in "x" ratings, e.g. 8x, 20x, 133x. This is the same system used for CD-ROMs and indicates the maximum transfer rate in the form of a multiplier based on the original audio CD data transfer rate, which is 150 kB/s: R = K × 150 kB/s, where R is the transfer rate and K is the speed rating. For example, a 133x rating means a transfer rate of 133 × 150 kB/s = 19,950 kB/s ≈ 20 MB/s. These are manufacturer speed ratings. Actual transfer rate may be higher, or lower, than shown on the card depending on several factors. The speed rating quoted is almost always the read speed, while write speed is often slower. Solid state For reads, the onboard controller first powers up the memory chips from standby. Reads are usually done in parallel, error correction is applied to the data, and the data are then transferred through the interface 16 bits at a time. Error checking is required due to soft read errors. Writes require powerup from standby, wear leveling calculation, a block erase of the area to be written to, ECC calculation, and the write itself (an individual memory cell read takes around 100 ns, while a write to the chip takes 1 ms or more, roughly 10,000 times longer). Because the USB 2.0 interface is limited to 35 MB/s and lacks bus mastering hardware, USB 2.0 implementation results in slower access. Modern UDMA-7 CompactFlash Cards provide data rates up to 145 MB/s and require USB 3.0 data transfer rates. A direct motherboard connection is often limited to 33 MB/s because IDE to CF adapters lack high speed ATA (66 MB/s plus) cable support. Power on from sleep/off takes longer than power up from standby. Magnetic media Many hard drives (often referred to by the trademarked name "Microdrive") typically spin at 3600 RPM, so rotational latency is a consideration, as is spin-up from standby or idle. Seagate's 8 GB ST68022CF drive spins up fully within a few revolutions, but the current drawn can reach up to 350 milliamps, with a mean operating current of 40–50 mA. Its average seek time is 8 ms; it can sustain 9 MB/s read and write and has an interface speed of 33 MB/s. Hitachi's 4 GB Microdrive is 12 ms seek, sustained 6 MB/s.
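As a quick sketch of the "x" rating arithmetic described above (the helper below is illustrative and not part of any CompactFlash specification):

def cf_rating_to_mb_per_s(k):
    # An 'x' rating is a multiple of the 150 kB/s audio-CD rate; 1 MB = 1000 kB here.
    return k * 150 / 1000

print(cf_rating_to_mb_per_s(133))  # 19.95 MB/s, usually quoted as about 20 MB/s
print(cf_rating_to_mb_per_s(8))    # 1.2 MB/s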
Capacities and compatibility The CF 5.0 Specification supports capacities up to 128 PiB using 48-bit logical block addressing (LBA). Prior to 2006, CF drives using magnetic media offered the highest capacities (up to 8 GiB). Now there are solid-state cards with higher capacities (up to 512 GB). As of 2011, solid-state drives (SSDs) have supplanted both kinds of CF drive for large capacity requirements. Solid state capacities SanDisk announced its 16 GB Extreme III card at the photokina trade fair, in September, 2006. That same month, Samsung announced 16, 32 and 64 GB CF cards. Two years later, in September, 2008, PRETEC announced 100 GB cards. Magnetic media capacities Seagate announced a 5 GB "1-inch hard drive" in June, 2004, and an 8 GB version in June, 2005. Use in place of a hard disk drive In early 2008, the CFA demonstrated CompactFlash cards with a built-in SATA interface. Several companies make adapters that allow CF cards to be connected to PCI, PCMCIA, IDE and SATA connections, allowing a CF card to act as a solid-state drive with virtually any operating system or BIOS, and even in a RAID configuration. CF cards may perform the function of the master or slave drive on the IDE bus, but have issues sharing the bus. Moreover, late-model cards that provide DMA (using UDMA or MWDMA) may present problems when used through a passive adapter that does not support DMA. Reliability Original PC Card memory cards used an internal battery to maintain data when power was removed. The rated life of the battery was the only reliability issue. CompactFlash cards that use flash memory, like other flash-memory devices, are rated for a limited number of erase/write cycles for any "block." While NOR flash has higher endurance, ranging from 10,000 to 1,000,000 cycles, it has not been adapted for memory card usage. Most flash used for mass storage is NAND-based. NAND flash has been scaled down to 16 nm. It is usually rated for 500 to 3,000 write/erase cycles per block before hard failure. This is less reliable than magnetic media. Car PC Hacks suggests disabling the Windows swap file and using its Enhanced Write Filter (EWF) to eliminate unnecessary writes to flash memory. Additionally, when formatting a flash-memory drive, the Quick Format method should be used, to write as little as possible to the device. Most CompactFlash flash-memory devices limit wear on blocks by varying the physical location to which a block is written. This process is called wear leveling. When using CompactFlash in ATA mode to take the place of the hard disk drive, wear leveling becomes critical because low-numbered blocks contain tables whose contents change frequently. Current CompactFlash cards spread the wear-leveling across the entire drive. The more advanced CompactFlash cards will move data that rarely changes to ensure all blocks wear evenly. NAND flash memory is prone to frequent soft read errors. The CompactFlash card includes error checking and correction (ECC) that detects the error and re-reads the block. The process is transparent to the user, although it may slow data access. As a flash memory device is solid-state, it is less affected by physical shock than a spinning disk. The possibility for electrical damage from upside-down insertion is prevented by asymmetrical side slots, assuming that the host device uses a suitable connector.
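The 128 PiB limit quoted above for 48-bit LBA can be verified with a one-line check, assuming the conventional 512-byte ATA sector size:

SECTOR_BYTES = 512                     # assumed ATA sector size
max_bytes = (2**48) * SECTOR_BYTES     # 2^48 addressable sectors
print(max_bytes == 128 * 2**50)        # True: 2^48 * 512 B = 2^57 B = 128 PiB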
Power consumption and data transfer rate Small cards consume around 5% of the power required by small disk drives and still have reasonable transfer rates of over 45 MB/s for the more expensive 'high-speed' cards. However, the manufacturer's warning on the flash memory used for ReadyBoost indicates a current draw in excess of 500 mA. File systems CompactFlash cards for use in consumer devices are typically formatted as FAT12 (for media up to 16 MB), FAT16 (for media up to 2 GB, sometimes up to 4 GB) and FAT32 (for media larger than 2 GB). This lets the devices be read by personal computers but also suits the limited processing ability of some consumer devices such as cameras. There are varying levels of compatibility among FAT32-compatible cameras, MP3 players, PDAs, and other devices. While any device that claims FAT32-capability should read and write to a FAT32-formatted card without problems, some devices are tripped up by cards larger than 2 GB that are completely unformatted, while others may take longer to apply a FAT32 format. The way many digital cameras update the file system as they write to the card creates a FAT32 bottleneck. Writing to a FAT32-formatted card generally takes a little longer than writing to a FAT16-formatted card with similar performance capabilities. For instance, the Canon EOS 10D writes the same photo to a FAT16-formatted 2 GB CompactFlash card somewhat faster than to a same speed 4 GB FAT32-formatted CompactFlash card, although the memory chips in both cards have the same write speed specification. Although FAT16 is more wasteful of disk space with its larger clusters, it works better with the write strategy that flash memory chips require. The cards themselves can be formatted with any type of file system such as Ext, JFS, NTFS, or by one of the dedicated flash file systems. It can be divided into partitions as long as the host device can read them. CompactFlash cards are often used instead of hard drives in embedded systems, dumb terminals and various small form-factor PCs that are built for low noise output or power consumption. CompactFlash cards are often more readily available and smaller than purpose-built solid-state drives and often have faster seek times than hard drives. CF+ and CompactFlash specification revisions When CompactFlash was first being standardized, even full-sized hard disks were rarely larger than 4 GB in size, and so the limitations of the ATA standard were considered acceptable. However, CF cards manufactured after the original Revision 1.0 specification are available in capacities up to 512 GB. While the current revision 6.0 works in [P]ATA mode, future revisions are expected to implement SATA mode. CompactFlash Revision 1.0 (1995), 8.3 MB/s (PIO mode 2), support for up to 128 GB storage space. CompactFlash+ aka CompactFlash I/O (1997) CF+ and CompactFlash Revision 2.0 (2003) added an increase in speed to 16.6 MB/s data-transfer (PIO mode 4). At the end of 2003, DMA 33 transfers were added as well, available since mid-2004. CF+ and CompactFlash Revision 3.0 (2004) added support for up to a 66 MB/s data transfer rate (UDMA 66), 25 MB/s in PC Card mode, added password protection, along with a number of other features. CFA recommends usage of the FAT32 filesystem for storage cards larger than 2 GB. CF+ and CompactFlash Revision 4.0 (2006) added support for IDE Ultra DMA Mode 6 for a maximum data transfer rate of 133 MB/s (UDMA 133). CF+ and CompactFlash Revision 4.1 (2007) added support for Power Enhanced CF Storage Cards. 
CompactFlash Revision 5.0 (2010) added a number of features, including 48-bit addressing (supporting 128 petabyte of storage), larger block transfers of up to 32 megabytes, quality-of-service and video performance guarantees, and other enhancements CompactFlash Revision 6.0 (November 2010) added UltraDMA Mode 7 (167 MB/s), ATA-8/ACS-2 sanitize command, TRIM and an optional card capability to report the operating temperature range of the card. CE-ATA CE-ATA is a serial MMC-compatible interface based on the MultiMediaCard standard. CFast A variant of CompactFlash known as CFast is based on the Serial ATA (SATA) interface, rather than the Parallel ATA/IDE (PATA) bus for which all previous versions of CompactFlash are designed. CFast is also known as CompactFast. CFast 1.0/1.1 supports a higher maximum transfer rate than current CompactFlash cards, using SATA 2.0 (300 MB/s) interface, while PATA is limited to 167 MB/s using UDMA 7. CFast cards are not physically or electrically compatible with CompactFlash cards. However, since SATA can emulate the PATA command protocol, existing CompactFlash software drivers can be used, although writing new drivers to use AHCI instead of PATA emulation will almost always result in significant performance gains. CFast cards use a female 7-pin SATA data connector, and a female 17-pin power connector, so an adaptor is required to connect CFast cards in place of standard SATA hard drives which use male connectors. The first CFast cards reached the market in late 2009. At CES 2009, Pretec showed a 32 GB CFast card and announced that they should reach the market within a few months. Delock began distributing CFast cards in 2010, offering several card readers with USB 3.0 and eSATAp (power over eSATA) ports to support CFast cards. Seeking higher performance and still keeping a compact storage format, some of the earliest adoptors of CFast cards were in the gaming industry (used in slot machines), as a natural evolution from the by then well-established CF cards. Current gaming industry supporters of the format include both specialist gaming companies (e.g. Aristocrat Leisure) and OEMs such as Innocore (now part of Advantech Co., Ltd.). The CFast 2.0 specification was released in the second quarter of 2012, updating the electrical interface to SATA 3.0 (600 MB/s). As of 2014, the only product employing CFast 2.0 cards was the Arri Amira digital production camera, allowing frame rates of up to 200 fps; a CFast 2.0 adapter for the Arri Alexa/XT camera was also released. On 7 April 2014, Blackmagic Design announced the URSA cinema camera, which records to CFast media. On 8 April 2015, Canon Inc. announced the XC10 video camera, which also makes use of CFast cards. Blackmagic Design also announced that its URSA Mini will use CFast 2.0. As of October 2016, there are a growing number of cameras, video recorders, and audio recorders that use the faster data rates offered by CFast media. As of 2017, in the wider embedded electronics industry, transition from CF to CFast is still relatively slow, probably due to hardware cost considerations and some inertia (familiarity with CF) and because a significant part of the industry is satisfied with the lower performance provided by CF cards, thus having no reason to change. 
A strong incentive to change to CFast for embedded electronics companies using designs based on Intel PC architecture is the fact that Intel has removed native support for the (P)ATA interface a few design platforms ago and the older CPU/PCH generations now have end-of-life status. CFexpress In September 2016, the CompactFlash Association announced a new standard based on PCIe 3.0 and NVMe, CFexpress. In April 2017, version 1.0 of the CFexpress specification was published, with support for two PCIe 3.0 lanes in an XQD form-factor for up to 2 GB/s. Type I and Type II The only physical difference between the two types is that Type I devices are 3.3 mm thick while Type II devices are 5 mm thick. Electrically, the two interfaces are the same except that Type I devices are permitted to draw up to 70 mA supply current from the interface, while Type II devices may draw up to 500 mA. Most Type II devices are Microdrive devices (see below), other miniature hard drives, and adapters, such as a popular adapter that takes Secure Digital cards. A few flash-based Type II devices were manufactured, but Type I cards are now available in capacities that exceed CF HDDs. Manufacturers of CompactFlash cards such as Sandisk, Toshiba, Alcotek and Hynix offer devices with Type I slots only. Some of the latest DSLR cameras, like the Nikon D800, have also dropped Type II support. Microdrives Microdrive was a brand of tiny hard disks—about 25 mm (1 inch) wide—in a CompactFlash Type II package. The first was developed and released in 1999 by IBM, with a capacity of 170 MB. IBM sold its disk drive division, including the Microdrive trademark, to Hitachi in 2002. Comparable hard disks were also made by other vendors, such as Seagate and Sony. They were available in capacities of up to 8 GB but have been superseded by flash memory in cost, capacity, and reliability, and are no longer manufactured. As mechanical devices, CF HDDs drew more current than flash memory's 100 mA maximum. Early versions drew up to 500 mA, but more recent ones drew under 200 mA for reads and under 300 mA for writes. CF HDDs were also susceptible to damage from physical shock or temperature changes. However, CF HDDs had a longer lifespan of write cycles than early flash memories. The iPod mini, Nokia N91, iriver H10 (5 or 6 GB model), LifeDrive, Sony NW-A1000/3000 and Rio Carbon used a Microdrive to store data. Compared to other portable storage CompactFlash cards that use flash memory are more rugged than some hard drive solutions because they are solid-state.
Technology
Non-volatile memory
null
298547
https://en.wikipedia.org/wiki/Immunity%20%28medicine%29
Immunity (medicine)
In biology, immunity is the state of being insusceptible or resistant to a noxious agent or process, especially a pathogen or infectious disease. Immunity may occur naturally or be produced by prior exposure or immunization. Innate and adaptive The immune system has innate and adaptive components. Innate immunity is present in all metazoans and mounts nonspecific immune responses: inflammatory responses and phagocytosis. The adaptive component, on the other hand, involves more advanced lymphatic cells that can distinguish between specific "non-self" substances in the presence of "self". The reaction to foreign substances is etymologically described as inflammation while the non-reaction to self substances is described as immunity. The two components of the immune system create a dynamic biological environment where "health" can be seen as a physical state where the self is immunologically spared, and what is foreign is inflammatorily and immunologically eliminated. "Disease" can arise when what is foreign cannot be eliminated or what is self is not spared. Innate immunity, also known as native immunity, is a semi-specific and widely distributed form of immunity. It is defined as the first line of defense against pathogens, representing a critical systemic response to prevent infection and maintain homeostasis, contributing to the activation of an adaptive immune response. It does not adapt to specific external stimulus or a prior infection, but relies on genetically encoded recognition of particular patterns. Adaptive or acquired immunity is the active component of the host immune response, mediated by antigen-specific lymphocytes. Unlike innate immunity, acquired immunity is highly specific to a particular pathogen, including the development of immunological memory. Like the innate system, the acquired system includes both humoral immunity components and cell-mediated immunity components. Adaptive immunity can be acquired either 'naturally' (by infection) or 'artificially' (through deliberate actions such as vaccination). Adaptive immunity can also be classified as 'active' or 'passive'. Active immunity is acquired through the exposure to a pathogen, which triggers the production of antibodies by the immune system. Passive immunity is acquired through the transfer of antibodies or activated T-cells derived from an immune host either artificially or through the placenta; it is short-lived, requiring booster doses for continued immunity. In summary, adaptive immunity recognizes more diverse patterns and, unlike innate immunity, is associated with memory of the pathogen. History of theories For thousands of years mankind has been intrigued with the causes of disease and the concept of immunity. The prehistoric view was that disease was caused by supernatural forces, and that illness was a form of theurgic punishment for "bad deeds" or "evil thoughts" visited upon the soul by the gods or by one's enemies. In Classical Greek times, Hippocrates, who is regarded as the Father of Medicine, attributed diseases to an alteration or imbalance in one of the four humors (blood, phlegm, yellow bile or black bile). The first written descriptions of the concept of immunity may have been made by the Athenian Thucydides who, in 430 BC, described that when the plague hit Athens: "the sick and the dying were tended by the pitying care of those who had recovered, because they knew the course of the disease and were themselves free from apprehensions.
For no one was ever attacked a second time, or not with a fatal result". Active immunotherapy may have begun with Mithridates VI of Pontus (120-63 BC) who, to induce active immunity for snake venom, recommended using a method similar to modern toxoid serum therapy, by drinking the blood of animals which fed on venomous snakes. He is thought to have assumed that those animals acquired some detoxifying property, so that their blood would contain transformed components of the snake venom that could induce resistance to it instead of exerting a toxic effect. Mithridates reasoned that, by drinking the blood of these animals, he could acquire a similar resistance. Fearing assassination by poison, he took daily sub-lethal doses of venom to build tolerance. He is also said to have sought to create a 'universal antidote' to protect him from all poisons. For nearly 2000 years, poisons were thought to be the proximate cause of disease, and a complicated mixture of ingredients, called Mithridate, was used to cure poisoning during the Renaissance. An updated version of this cure, Theriacum Andromachi, was used well into the 19th century. The term "immunes" is also found in the epic poem "Pharsalia", written in the first century AD by the poet Marcus Annaeus Lucanus, to describe a North African tribe's resistance to snake venom. The first clinical description of immunity which arose from a specific disease-causing organism is probably A Treatise on Smallpox and Measles (Kitab fi al-jadari wa-al-hasbah, translated into English in 1848), written by the Islamic physician Al-Razi in the 9th century. In the treatise, Al Razi describes the clinical presentation of smallpox and measles and goes on to indicate that exposure to these specific agents confers lasting immunity (although he does not use this term). Until the 19th century, the miasma theory was also widely accepted. The theory viewed diseases such as cholera or the Black Plague as being caused by a miasma, a noxious form of "bad air". If someone was exposed to the miasma in a swamp, in evening air, or breathing air in a sickroom or hospital ward, they could catch a disease. During the 19th century, communicable diseases came to be viewed as being caused by germs/microbes. The modern word "immunity" derives from the Latin immunis, meaning exemption from military service, tax payments or other public services. The first scientist who developed a full theory of immunity was Ilya Mechnikov, who revealed phagocytosis in 1882. With Louis Pasteur's germ theory of disease, the fledgling science of immunology began to explain how bacteria caused disease, and how, following infection, the human body gained the ability to resist further infections. In 1888 Emile Roux and Alexandre Yersin isolated diphtheria toxin, and following the 1890 discovery by Behring and Kitasato of antitoxin-based immunity to diphtheria and tetanus, the antitoxin became the first major success of modern therapeutic immunology. In Europe, the induction of active immunity emerged in an attempt to contain smallpox. Immunization has existed in various forms for at least a thousand years, without the terminology. The earliest use of immunization is unknown, but, about 1000 AD, the Chinese began practicing a form of immunization by drying and inhaling powders derived from the crusts of smallpox lesions.
Around the 15th century in India, the Ottoman Empire, and east Africa, the practice of inoculation (poking the skin with powdered material derived from smallpox crusts) was quite common. This practice was first introduced into the west in 1721 by Lady Mary Wortley Montagu. In 1798, Edward Jenner introduced the far safer method of deliberate infection with cowpox virus, (smallpox vaccine), which caused a mild infection that also induced immunity to smallpox. By 1800, the procedure was referred to as vaccination. To avoid confusion, smallpox inoculation was increasingly referred to as variolation, and it became common practice to use this term without regard for chronology. The success and general acceptance of Jenner's procedure would later drive the general nature of vaccination developed by Pasteur and others towards the end of the 19th century. In 1891, Pasteur widened the definition of vaccine in honour of Jenner, and it then became essential to qualify the term by referring to polio vaccine, measles vaccine etc. Passive immunity Passive immunity is the immunity acquired by the transfer of ready-made antibodies from one individual to another. Passive immunity can occur naturally, such as when maternal antibodies are transferred to the foetus through the placenta, and can also be induced artificially, when high levels of human (or horse) antibodies specific for a pathogen or toxin are transferred to non-immune individuals. Passive immunization is used when there is a high risk of infection and insufficient time for the body to develop its own immune response, or to reduce the symptoms of ongoing or immunosuppressive diseases. Passive immunity provides immediate protection, but the body does not develop memory, therefore the patient is at risk of being infected by the same pathogen later. Naturally acquired passive immunity A fetus naturally acquires passive immunity from its mother during pregnancy. Maternal passive immunity is antibody-mediated immunity. The mother's antibodies (MatAb) are passed through the placenta to the fetus by an FcRn receptor on placental cells. This occurs around the third month of gestation. IgG is the only antibody isotype that can pass through the placenta. Passive immunity is also provided through the transfer of IgA antibodies found in breast milk that are transferred to the gut of a nursing infant, protecting against bacterial infections, until the newborn can synthesize its antibodies. Colostrum present in mothers milk is an example of passive immunity. Artificially acquired passive immunity Artificially acquired passive immunity is a short-term immunization induced by the transfer of antibodies, which can be administered in several forms; as human or animal blood plasma, as pooled human immunoglobulin for intravenous (IVIG) or intramuscular (IG) use, and in the form of monoclonal antibodies (MAb). Passive transfer is used prophylactically in the case of immunodeficiency diseases, such as hypogammaglobulinemia. It is also used in the treatment of several types of acute infection, and to treat poisoning. Immunity derived from passive immunization lasts for only a short period of time, and there is also a potential risk for hypersensitivity reactions, and serum sickness, especially from gamma globulin of non-human origin. The artificial induction of passive immunity has been used for over a century to treat infectious disease, and before the advent of antibiotics, was often the only specific treatment for certain infections. 
Immunoglobulin therapy continued to be a first-line therapy in the treatment of severe respiratory diseases until the 1930s, even after sulfonamide antibiotics were introduced. Transfer of activated T-cells Passive or "adoptive transfer" of cell-mediated immunity is conferred by the transfer of "sensitized" or activated T-cells from one individual into another. It is rarely used in humans because it requires histocompatible (matched) donors, which are often difficult to find. In unmatched donors this type of transfer carries severe risks of graft versus host disease. It has, however, been used to treat certain diseases including some types of cancer and immunodeficiency. This type of transfer differs from a bone marrow transplant, in which (undifferentiated) hematopoietic stem cells are transferred. Active immunity When B cells and T cells are activated by a pathogen, memory B-cells and T-cells develop, and the primary immune response results. Throughout the lifetime of an animal, these memory cells will "remember" each specific pathogen encountered, and can mount a strong secondary response if the pathogen is detected again. The primary and secondary responses were first described in 1921 by English immunologist Alexander Glenny, although the mechanism involved was not discovered until later. This type of immunity is both active and adaptive because the body's immune system prepares itself for future challenges. Active immunity often involves both the cell-mediated and humoral aspects of immunity as well as input from the innate immune system. Naturally acquired Naturally acquired active immunity occurs as the result of surviving an infection. When a person is exposed to a live pathogen and develops a primary immune response, this leads to immunological memory. Many disorders of immune system function can affect the formation of active immunity, such as immunodeficiency (both acquired and congenital forms) and immunosuppression. Artificially acquired Artificially acquired active immunity can be induced by a vaccine, a substance that contains antigen. A vaccine stimulates a primary response against the antigen without causing symptoms of the disease. The term vaccination was coined by Richard Dunning, a colleague of Edward Jenner, and adapted by Louis Pasteur for his pioneering work in vaccination. The method Pasteur used entailed treating the infectious agents for those diseases, so they lost the ability to cause serious disease. Pasteur adopted the name vaccine as a generic term in honor of Jenner's discovery, which Pasteur's work built upon. In 1807, Bavaria became the first state to require its military recruits to be vaccinated against smallpox, as the spread of smallpox was linked to combat. Subsequently, the practice of vaccination would increase with the spread of war. There are four types of traditional vaccines: Inactivated vaccines are composed of micro-organisms that have been killed with chemicals and/or heat and are no longer infectious. Examples are vaccines against flu, cholera, plague, and hepatitis A. Most vaccines of this type are likely to require booster shots. Live, attenuated vaccines are composed of micro-organisms that have been cultivated under conditions which disable their ability to induce disease. These responses are more durable; however, they may require booster shots. Examples include yellow fever, measles, rubella, and mumps.
Toxoids are inactivated toxic compounds from micro-organisms in cases where these (rather than the micro-organism itself) cause illness, used prior to an encounter with the toxin of the micro-organism. Examples of toxoid-based vaccines include tetanus and diphtheria. Subunit, recombinant, polysaccharide, and conjugate vaccines are composed of small fragments or pieces from a pathogenic (disease-causing) organism. A characteristic example is the subunit vaccine against Hepatitis B virus. In addition, there are some newer types of vaccines in use: Outer Membrane Vesicle (OMV) vaccines contain the outer membrane of a bacterium without any of its internal components or genetic material. Thus, ideally, they stimulate an immune response effective against the original bacteria without the risk of an infection. Genetic vaccines deliver nucleic acid that codes for an antigen into host cells, which then produce that antigen, stimulating an immune response. This category of vaccine includes DNA vaccines, RNA vaccines, and viral vector vaccines, which differ in the chemical form of nucleic acid and how it is delivered into host cells. A variety of vaccine types are under development; see Experimental Vaccine Types. Most vaccines are given by hypodermic or intramuscular injection as they are not absorbed reliably through the gut. Live attenuated polio and some typhoid and cholera vaccines are given orally in order to produce immunity based in the bowel. Hybrid immunity Hybrid immunity is the combination of natural immunity and artificial immunity. Studies of hybrid-immune people found that their blood was better able to neutralize the Beta and other variants of SARS-CoV-2 than never-infected, vaccinated people. Moreover, on 29 October 2021, the Centers for Disease Control and Prevention (CDC) concluded that "Multiple studies in different settings have consistently shown that infection with SARS-CoV-2 and vaccination each result in a low risk of subsequent infection with antigenically similar variants for at least 6 months. Numerous immunologic studies and a growing number of epidemiologic studies have shown that vaccinating previously infected individuals significantly enhances their immune response and effectively reduces the risk of subsequent infection, including in the setting of increased circulation of more infectious variants. ..." Genetics Immunity is determined genetically. Genomes in humans and animals encode the antibodies and numerous other immune response genes. While many of these genes are generally required for active and passive immune responses (see sections above), there are also many genes that appear to be required for very specific immune responses. For instance, Tumor Necrosis Factor (TNF) is required for defense of tuberculosis in humans. Individuals with genetic defects in TNF may get recurrent and life-threatening infections with tuberculosis bacteria (Mycobacterium tuberculosis) but are otherwise healthy. They also seem to respond to other infections more or less normally. The condition is therefore called Mendelian susceptibility to mycobacterial disease (MSMD) and variants of it can be caused by other genes related to interferon production or signaling (e.g. by mutations in the genes IFNG, IL12B, IL12RB1, IL12RB2, IL23R, ISG15, MCTS1, RORC, TBX21, TYK2, CYBB, JAK1, IFNGR1, IFNGR2, STAT1, USP18, IRF1, IRF8, NEMO, SPPL2A'').
Biology and health sciences
Concepts
Health
298600
https://en.wikipedia.org/wiki/Procaine
Procaine
Procaine is a local anesthetic drug of the amino ester group. It is most commonly used in dental procedures to numb the area around a tooth and is also used to reduce the pain of intramuscular injection of penicillin. Owing to the ubiquity of the trade name Novocain or Novocaine, in some regions, procaine is referred to generically as novocaine. It acts mainly as a sodium channel blocker. Today, it is used therapeutically in some countries due to its sympatholytic, anti-inflammatory, perfusion-enhancing, and mood-enhancing effects. Procaine was first synthesized in 1905, shortly after amylocaine. It was created by the chemist Alfred Einhorn who gave the chemical the trade name Novocaine, from the Latin nov- (meaning "new") and -caine, a common ending for alkaloids used as anesthetics. It was introduced into medical use by surgeon Heinrich Braun. Prior to the discovery of amylocaine and procaine, cocaine was a commonly used local anesthetic. Einhorn wished his new discovery to be used for amputations, but for this surgeons preferred general anesthesia. Dentists, however, found it very useful. Pharmacology The primary use for procaine is as an anaesthetic. Aside from its use as a dental anesthetic, procaine is used less frequently today, since more effective (and hypoallergenic) alternatives such as lidocaine (Xylocaine) exist. Like other local anesthetics (such as mepivacaine and prilocaine), procaine is a vasodilator and is thus often coadministered with epinephrine for the purpose of vasoconstriction. Vasoconstriction helps to reduce bleeding, increases the duration and quality of anesthesia, prevents the drug from reaching systemic circulation in large amounts, and overall reduces the amount of anesthetic required. As a dental anesthetic, for example, more novocaine is needed for root canal treatment than for a simple filling. Unlike cocaine, a vasoconstrictor, procaine does not have the euphoric and addictive qualities that put it at risk for abuse. Procaine, an ester anesthetic, is metabolized in the plasma by the enzyme pseudocholinesterase through hydrolysis into para-amino benzoic acid (PABA), which is then excreted by the kidneys into the urine. A 1% procaine injection has been recommended for the treatment of extravasation complications associated with venipuncture, steroids, and antibiotics. It has likewise been recommended for treatment of inadvertent intra-arterial injections (10 ml of 1% procaine), as it helps relieve pain and vascular spasm. Procaine is an occasional additive in illicit street drugs and is usually presented and sold as cocaine or heroin. Adverse effects Application of procaine leads to the depression of neuronal activity. The depression causes the nervous system to become hypersensitive, producing restlessness and shaking, leading to minor to severe convulsions. Studies on animals have shown that the use of procaine leads to increased dopamine and serotonin levels in the brain. Other issues may occur because of varying individual tolerance to procaine dosage. Nervousness and dizziness can arise from the excitation of the central nervous system, which may lead to respiratory failure if overdosed. Procaine may also induce weakening of the myocardium leading to cardiac arrest. Procaine can also cause allergic reactions causing individuals to have problems with breathing, rashes, and swelling. Allergic reactions to procaine are usually not in response to procaine itself, but to its metabolite PABA.
Allergic reactions are in fact quite rare, estimated to have an incidence of 1 per 500,000 injections. About one in 3000 white North Americans is homozygous (i.e. has two copies of the abnormal gene) for the most common atypical form of the enzyme pseudocholinesterase and does not hydrolyze ester anesthetics such as procaine. This results in a prolonged period of high levels of the anesthetic in the blood and increased toxicity. Certain populations in the world, such as the Vysya community in India, also commonly have a deficiency of this enzyme. Synthesis Procaine can be synthesized in two ways. The first consists of the direct reaction of 4-aminobenzoic acid ethyl ester with 2-diethylaminoethanol in the presence of sodium ethoxide. The second begins by oxidizing 4-nitrotoluene to 4-nitrobenzoic acid, which is then reacted with thionyl chloride; the resulting acid chloride is esterified with 2-diethylaminoethanol to give nitrocaine. Finally, the nitro group is reduced by hydrogenation over a Raney nickel catalyst.
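The second route just described can be summarized as a linear scheme. This is only a schematic restatement of the steps named above, written in conventional notation; reagents and conditions that the text does not specify (for example, the oxidant used in the first step) are left generic.

\[
\text{4-nitrotoluene} \;\xrightarrow{\text{oxidation}}\; \text{4-nitrobenzoic acid} \;\xrightarrow{\ \mathrm{SOCl_2}\ }\; \text{4-nitrobenzoyl chloride} \;\xrightarrow{\ \text{2-diethylaminoethanol}\ }\; \text{nitrocaine} \;\xrightarrow{\ \mathrm{H_2,\ Raney\ Ni}\ }\; \text{procaine}
\]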
Biology and health sciences
Anesthetics
Health
298681
https://en.wikipedia.org/wiki/Gondola%20lift
Gondola lift
A gondola lift is a means of cable transport and a type of aerial lift that is supported and propelled by cables from above. It consists of a loop of steel wire rope that is strung between two stations, sometimes over intermediate supporting towers. The cable is driven by a bullwheel in a terminal, which is typically connected to an engine or electric motor. It is often considered a continuous system, since it features a haul rope that continuously moves and circulates around the two terminal stations. In contrast, an aerial tramway operates solely with fixed grips and simply shuttles back and forth between two end terminals. The capacity, cost, and functionality of a gondola lift differ dramatically depending on the combination of cables used for support and haulage and the type of grip (detachable or fixed). Because of the proliferation of such systems in the Alps, the German and French names are also used in English-language texts. The systems may also be referred to as cable cars. History The Kohlerer-Bahn, which opened on June 29, 1908, in Bolzano, South Tyrol, was the first modern enclosed aerial cable car built solely for passenger service. Types Passenger lift In some systems the passenger cabins, which can hold between two and fifteen people, are connected to the cable by means of spring-loaded grips. These grips allow the cabin to be detached from the moving cable and slowed in the terminals to allow passengers to board and disembark. Doors are almost always automatic, controlled by a lever on the roof or on the undercarriage that is pushed up or down. Cabins are driven through the terminals either by rotating tires or by a chain system. To be accelerated to and decelerated from line speed, cabins are driven along by progressively swifter (or slower) rotating tires until they reach line or terminal speed. On older installations, gondolas are accelerated manually by an operator. Gondola lifts can have intermediate stops that allow for uploading and downloading along the lift. Examples of lifts with three stops instead of the standard two are the Village and Excalibur Gondolas at Whistler Blackcomb and the Skyride at Alton Towers. In other systems the cable is slowed intermittently to allow passengers to disembark and embark at stations, and to allow people in the cars along the route to take photographs; an example is Lebanon's Téléférique, which offers an exceptional view of the Mediterranean, the historic Jounieh Bay, and the pine forest on the 80% slope that the lift passes over. Such a system is called a pulse gondola, and usually several cabins are loaded simultaneously. Open-air gondolas, or cabriolets as they are commonly called, are fairly uncommon and quite primitive, since riders are exposed to the elements. Their cabins are usually hollow cylinders, open from chest height up, with floors and roof covers. They are usually used as village gondolas and for short distances. Examples are at Mont Tremblant Resort in Quebec, Canada; Blue Mountain Ski Resort in Ontario, Canada (summer only; in the winter it is converted to a six-person high-speed chairlift); The Canyons Resort in Park City, Utah; Mountain Creek; and the new Village Cabriolet at Winter Park Resort in Colorado. Open-air gondolas can also come in a style similar to that of pulse gondolas, like the Village Gondola at Panorama Ski Resort, British Columbia. The first gondola built in the United States for a ski resort was at the Wildcat Mountain Ski Area. It was a two-person gondola built in 1957 and served skiers until 1999.
The lift was later demolished in 2004. The lift and its cabins were manufactured by a former Italian lift company: Carlevaro-Savio. One of the longest gondola rides in the world, Gondelbahn Grindelwald-Männlichen, is in the Bernese Oberland in Switzerland and connects Grindelwald with Männlichen. Urban transport In recent years, gondola lifts are finding increased usage in urban environments. Cable cars used for urban transit include the Metrocable in Medellín, Colombia and the TransMiCable in Bogotá, Colombia; Aerovia in Guayaquil, Ecuador; Portland Aerial Tram in Portland, Oregon, United States; Roosevelt Island Tramway in New York City, New York, United States; Metrocable in Caracas, Venezuela; Trolcable in Mérida, Venezuela; Cable Aéreo in Manizales, Colombia; Mi Teleférico in La Paz, Bolivia; Mexicable in the State of Mexico, Mexico; Teleférico de Santo Domingo; Yenimahalle-Şentepe teleferik in Ankara, Turkey; Maçka and Eyüp Gondolas in Istanbul; the London cable car in London, England; Nizhny Novgorod Cableway, Russia. The Metrocable systems in Medellin and Caracas are fully integrated with the public transit network which provides passengers the ability to seamlessly transfer to the local metro lines, whereas the network in La Paz, the largest in the world, forms the backbone of the city's public transit system itself. Disney Skyliner is a gondola-lift service, which opened on September 29, 2019, at Walt Disney World in central Florida. The system uses multiple lines and has five stations, and it connects Epcot and Disney's Hollywood Studios with one another and with several Disney-owned and -operated resort hotels. In terms of urban gondola systems for the future, TransLink in Metro Vancouver has proposed to build a gondola up Burnaby Mountain to Simon Fraser University in an announcement in September 2010. The project was sidelined in 2014, but was revived in 2017. In late 2012, a widespread aerial gondola system was proposed for Austin, Texas, in an effort to expand mass transit options in the rapidly growing city. The proposal was rejected by the local transit agency in 2017. A proposed gondola system in Montreal was ultimately rejected by the Old Port of Montreal. Ropeway conveyor A ropeway conveyor or material ropeway is essentially a subtype of gondola lift, from which containers for goods rather than passenger cars are suspended. Ropeway conveyors are typically found around large mining concerns, and can be of considerable length. The COMILOG Cableway, which ran from Moanda in Gabon to Mbinda in the Republic of the Congo, was over in length. The Kristineberg-Boliden ropeway in Sweden had a length of . In Eritrea, the Italians built the Asmara-Massawa Cableway in 1936, which was long. The Manizales - Mariquita Cableway (1922) in Colombia was 73 km long. Conveyors can be powered by a wide variety of forms of power sources: electric motors, internal combustion engines, steam engines, or gravity. Gravity is particularly common in mountainous mining concerns, and directly employed; the weight of loaded down-going containers pulling the returning empties back up the slope. Gravity can also be used indirectly, where running water is available; a waterwheel is powered by gravity acting on water, and is used to power the cable. Bicable and tricable gondola lifts Conventional systems where a single cable provides both support and propulsion of the cabins are often called monocable gondola lifts. 
Gondola lifts that feature one stationary cable (known as the 'support' rope) and one haul rope are known as bicable gondola lifts, while lifts that feature two support ropes and one haul rope are known as tricable gondola lifts. Famous examples of bicable gondola lifts include the Ngong Ping 360 in Hong Kong, the Singapore Cable Car, and the Sulphur Mountain Gondola in Banff, Canada. This system has the advantage that the stationary cable's strength and properties can be tailored to each span, which reduces costs. They differ from aerial tramways, which consist of only one or two, usually larger, cabins moving back and forth rather than circulating. Bicable and tricable systems provide greater lateral stability compared with monocable systems, allowing the system to operate in higher cross-winds. List of accidents The National Ski Areas Association reports 0.138 fatalities per 100 million miles transported, compared to 1.23 for cars. October 22, 1979: one person was killed and 17 others were injured when two gondolas fell from the "Swiss Sky Ride" at the Texas State Fair. Winds gusting to caused three cars to collide, and two fell onto midway games below the cable. January 29, 1983: seven people were killed in the Singapore Cable Car disaster when two cabins plunged into the sea after the cableway was hit by a Panamanian-registered oil rig being towed. September 5, 2005: nine people died and ten were injured when a concrete block was accidentally dropped by a construction helicopter in Sölden, Austria. Hundreds had to be evacuated from the lift. March 2, 2008: a man fell out of a gondola in Chamonix and died, perhaps after he and one of his friends leaned on and broke the plexiglass window.
Technology
Rail and cable transport
null
298762
https://en.wikipedia.org/wiki/Lidocaine
Lidocaine
Lidocaine, also known as lignocaine and sold under the brand name Xylocaine among others, is a local anesthetic of the amino amide type. It is also used to treat ventricular tachycardia and ventricular fibrillation. When used for local anaesthesia or in nerve blocks, lidocaine typically begins working within several minutes and lasts for half an hour to three hours. Lidocaine mixtures may also be applied directly to the skin or mucous membranes to numb the area. It is often used mixed with a small amount of adrenaline (epinephrine) to prolong its local effects and to decrease bleeding. If injected intravenously, it may cause cerebral effects such as confusion, changes in vision, numbness, tingling, and vomiting. It can cause low blood pressure and an irregular heart rate. There are concerns that injecting it into a joint can cause problems with the cartilage. It appears to be generally safe for use in pregnancy. A lower dose may be required in those with liver problems. It is generally safe to use in those allergic to tetracaine or benzocaine. Lidocaine is an antiarrhythmic medication of the class Ib type. This means it works by blocking sodium channels thus decreasing the rate of contractions of the heart. When injected near nerves, the nerves cannot conduct signals to or from the brain. Lidocaine was discovered in 1946 and went on sale in 1948. It is on the World Health Organization's List of Essential Medicines. It is available as a generic medication. In 2022, it was the 262nd most commonly prescribed medication in the United States, with more than 1million prescriptions. Medical uses Local numbing agent The efficacy profile of lidocaine as a local anaesthetic is characterized by a rapid onset of action and intermediate duration of efficacy. Therefore, lidocaine is suitable for infiltration, block, and surface anaesthesia. Longer-acting substances such as bupivacaine are sometimes given preference for spinal and epidural anaesthesias; lidocaine, though, has the advantage of a rapid onset of action. Lidocaine is one of the most commonly used local anaesthetics in dentistry. It can be administered in multiple ways, most often as a nerve block or infiltration, depending on the type of treatment carried out and the area of the mouth worked on. For surface anaesthesia, several formulations can be used for endoscopies, before intubations. Lidocaine drops can be used on the eyes for short ophthalmic procedures. There is tentative evidence for topical lidocaine for neuropathic pain and skin graft donor site pain. As a local numbing agent, it is used for the treatment of premature ejaculation. An adhesive transdermal patch containing a 5% concentration of lidocaine in a hydrogel bandage, is approved by the US FDA for reducing nerve pain caused by shingles. The transdermal patch is also used for pain from other causes, such as compressed nerves and persistent nerve pain after some surgeries. Heart arrhythmia Lidocaine is a common class-1b antiarrhythmic drug; it is used intravenously for the treatment of ventricular arrhythmias (for acute myocardial infarction, digoxin poisoning, cardioversion, or cardiac catheterization) if amiodarone is not available or contraindicated. Lidocaine should be given for this indication after defibrillation, CPR, and vasopressors have been initiated. A routine preventive dose is no longer recommended after a myocardial infarction as the overall benefit is not convincing. 
Epilepsy A 2013 review on treatment for neonatal seizures recommended intravenous lidocaine as a second-line treatment if phenobarbital fails to stop seizures. Other Intravenous lidocaine infusions are also used to treat chronic pain and acute surgical pain as an opiate-sparing technique. The quality of evidence for this use is poor, so it is difficult to compare it to placebo or an epidural. Inhaled lidocaine can be used as a cough suppressant, acting peripherally to reduce the cough reflex. This application can be implemented as a safety and comfort measure for people needing intubation, as it reduces the incidence of coughing and any tracheal damage it might cause when emerging from anaesthesia. A 2019 systematic review of the literature found that intraurethral lidocaine reduces pain in men who undergo cystoscopic procedures. Lidocaine, along with ethanol, ammonia, and acetic acid, may also help in treating jellyfish stings, both numbing the affected area and preventing further nematocyst discharge. For gastritis, drinking a viscous lidocaine formulation may help with the pain. A 2021 study found that lidocaine 5% spray applied to the glans penis 10-20 minutes prior to sexual intercourse significantly improves premature ejaculation. Another study found that lidocaine-prilocaine 5% cream is also effective for premature ejaculation when applied 20 minutes before sexual intercourse. Adverse effects Adverse drug reactions (ADRs) are rare when lidocaine is used as a local anesthetic and is administered correctly. Most ADRs associated with lidocaine for anesthesia relate to administration technique (resulting in systemic exposure) or pharmacological effects of anesthesia, and allergic reactions only rarely occur. Systemic exposure to excessive quantities of lidocaine mainly results in central nervous system (CNS) and cardiovascular effects – CNS effects usually occur at lower blood plasma concentrations, and additional cardiovascular effects present at higher concentrations, though cardiovascular collapse may also occur with low concentrations. ADRs by individual organ systems are: CNS excitation: nervousness, agitation, anxiety, apprehension, tingling around the mouth (circumoral paraesthesia), headache, hyperesthesia, tremor, dizziness, pupillary changes, psychosis, euphoria, hallucinations, and seizures CNS depression with increasingly heavier exposure: drowsiness, lethargy, slurred speech, hypoesthesia, confusion, disorientation, loss of consciousness, respiratory depression and apnoea. Cardiovascular: hypotension, bradycardia, arrhythmias, flushing, venous insufficiency, increased defibrillator threshold, edema, and/or cardiac arrest – some of which may be due to hypoxemia secondary to respiratory depression. Respiratory: bronchospasm, dyspnea, respiratory depression or arrest Gastrointestinal: metallic taste, nausea, vomiting, agita, and diarrhea Ears: tinnitus Eyes: local burning, conjunctival hyperemia, corneal epithelial changes/ulceration, diplopia, visual changes (opacification) Skin: itching, depigmentation, rash, urticaria, edema, angioedema, bruising, inflammation of the vein at the injection site, irritation of the skin when applied topically Blood: methemoglobinemia Allergy ADRs associated with the use of intravenous lidocaine are similar to the toxic effects of systemic exposure above. These are dose-related and more frequent at high infusion rates (≥3 mg/min). Common ADRs include headache, dizziness, drowsiness, confusion, visual disturbances, tinnitus, tremor, and/or paraesthesia.
Infrequent ADRs associated with the use of lidocaine include: hypotension, bradycardia, arrhythmias, cardiac arrest, muscle twitching, seizures, coma, and/or respiratory depression. It is generally safe to use lidocaine with vasoconstrictors such as adrenaline, including in regions such as the nose, ears, fingers, and toes. Although concerns about tissue death when it is used in these areas have been raised, the evidence does not support these concerns. The use of lidocaine for spinal anesthesia may lead to an increased risk of transient neurological symptoms, a painful condition that is sometimes experienced immediately after surgery. There is some weak evidence to suggest that the use of alternative anesthetic medications such as prilocaine, procaine, bupivacaine, ropivacaine, or levobupivacaine may decrease the risk of a person developing transient neurological symptoms. Low-quality evidence suggests that 2-chloroprocaine and mepivacaine, when used for spinal anesthesia, carry a similar risk of transient neurological symptoms as lidocaine. Interactions Any drugs that are also ligands of CYP3A4 and CYP1A2 can potentially increase serum levels and the potential for toxicity, or decrease serum levels and efficacy, depending on whether they inhibit or induce the enzymes, respectively. Drugs that may increase the chance of methemoglobinemia should also be considered carefully. Dronedarone and liposomal morphine are both absolute contraindications, as they may increase serum levels, but hundreds of other drugs require monitoring for interactions. Contraindications Absolute contraindications for the use of lidocaine include: Heart block, second or third degree (without pacemaker) Severe sinoatrial block (without pacemaker) Serious adverse drug reaction to lidocaine or amide local anesthetics Hypersensitivity to corn and corn-related products (corn-derived dextrose is used in the mixed injections) Concurrent treatment with quinidine, flecainide, disopyramide, procainamide (class I antiarrhythmic agents) Prior use of amiodarone hydrochloride Adams–Stokes syndrome Wolff–Parkinson–White syndrome Lidocaine viscous is not recommended by the FDA to treat teething pain in children and infants. Exercise caution in people with any of these: Hypotension not due to arrhythmia Bradycardia Accelerated idioventricular rhythm Elderly Ehlers–Danlos syndromes; efficacy of local anesthetics can be reduced Pseudocholinesterase deficiency Intra-articular infusion (this is not an approved indication and can cause chondrolysis) Porphyria, especially acute intermittent porphyria; lidocaine has been classified as porphyrogenic because of the hepatic enzymes it induces, although clinical evidence suggests otherwise. Bupivacaine is a safe alternative in this case. Impaired liver function – people with lowered hepatic function may have an adverse reaction with repeated administration of lidocaine because the drug is metabolized by the liver. Adverse reactions may include neurological symptoms (e.g. dizziness, nausea, muscle twitches, vomiting, or seizures). Overdosage Overdoses of lidocaine may result from excessive administration by topical or parenteral routes, accidental oral ingestion of topical preparations by children (who are more susceptible to overdose), accidental intravenous (rather than subcutaneous, intrathecal, or paracervical) injection, or from prolonged use of subcutaneous infiltration anesthesia during cosmetic surgery. The maximum safe dose is 3 mg per kg.
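As a rough illustration of the weight-based limit quoted above, the following sketch converts that limit into a maximum dose and injection volume. It is illustrative arithmetic only, not clinical guidance; it assumes the 3 mg/kg figure stated in the text and the standard conversion that a p% solution contains p × 10 mg of drug per mL.

# Illustrative sketch only; assumes the 3 mg/kg maximum quoted in the text above.
MAX_DOSE_MG_PER_KG = 3.0

def max_lidocaine_dose(weight_kg: float, solution_percent: float = 1.0):
    """Return (maximum dose in mg, corresponding volume in mL) for a plain lidocaine solution."""
    max_mg = MAX_DOSE_MG_PER_KG * weight_kg
    mg_per_ml = solution_percent * 10.0   # a 1% solution contains 10 mg/mL
    return max_mg, max_mg / mg_per_ml

# Example: a 70 kg adult and a 2% solution -> (210.0, 10.5), i.e. 210 mg or 10.5 mL.
print(max_lidocaine_dose(70, solution_percent=2.0))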
Such overdoses have often led to severe toxicity or death in both children and adults (local anesthetic systemic toxicity). Symptoms include central nervous system manifestations such as numbness of the tongue, dizziness, tinnitus, visual disturbances, convulsions, reduced consciousness progressing to coma, as well as respiratory arrest and cardiovascular disturbances. Lidocaine and its two major metabolites may be quantified in blood, plasma, or serum to confirm the diagnosis in potential poisoning victims or to assist forensic investigation in a case of fatal overdose. Lidocaine is often given intravenously as an antiarrhythmic agent in critical cardiac-care situations. Treatment with intravenous lipid emulsions (used for parenteral feeding) to reverse the effects of local anaesthetic toxicity is becoming more common. Postarthroscopic glenohumeral chondrolysis Lidocaine in large amounts may be toxic to cartilage, and intra-articular infusions can lead to postarthroscopic glenohumeral chondrolysis. Pharmacology Mechanism of action Lidocaine alters signal conduction in neurons by prolonging the inactivation of the fast voltage-gated Na+ channels in the neuronal cell membrane responsible for action potential propagation. With sufficient blockage, the voltage-gated sodium channels will not open and an action potential will not be generated. Careful titration allows for a high degree of selectivity in the blockage of sensory neurons, whereas higher concentrations also affect other types of neurons. The same principle applies to this drug's actions in the heart. Blocking sodium channels in the conduction system, as well as in the muscle cells of the heart, raises the depolarization threshold, making the heart less likely to initiate or conduct early action potentials that may cause an arrhythmia. Pharmacokinetics When used as an injectable, it typically begins working within four minutes and lasts for half an hour to three hours. Lidocaine is about 95% metabolized (dealkylated) in the liver, mainly by CYP3A4, to the pharmacologically active metabolite monoethylglycinexylidide (MEGX) and then subsequently to the inactive glycine xylidide. MEGX has a longer half-life than lidocaine, but is also a less potent sodium channel blocker. The volume of distribution is 1.1 L/kg to 2.1 L/kg, but congestive heart failure can decrease it. About 60% to 80% circulates bound to the protein alpha-1 acid glycoprotein. The oral bioavailability is 35% and the topical bioavailability is 3%. Lidocaine efficacy may be reduced in inflamed tissues, due to competing inflammatory mediators. The elimination half-life of lidocaine is biphasic and around 90 min to 120 min in most people. This may be prolonged in people with hepatic impairment (average 343 min) or congestive heart failure (average 136 min). Lidocaine is excreted in the urine (90% as metabolites and 10% as unchanged drug). Chemistry Molecular structure and conformational flexibility Lidocaine's 2,6-dimethylphenyl (xylidine) group gives it hydrophobic properties. In addition to this aromatic unit, lidocaine has an aliphatic section comprising amide, carbonyl, and ethyl groups. Lidocaine exhibits a remarkable degree of conformational flexibility, resulting in more than 60 probable conformers. This adaptability arises from the high lability of the amide and ethyl groups within the molecule. These groups can undergo shifts in their positions, leading to significant variations in the overall molecular configuration.
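To make the pharmacokinetic figures above concrete, the sketch below applies simple first-order, single-compartment elimination using a 120-minute half-life and a 1.5 L/kg volume of distribution, both chosen from the ranges quoted in the text. It illustrates the exponential-decay arithmetic only; as noted above, real lidocaine elimination is biphasic, so this is not a clinical model.

def plasma_concentration(dose_mg, weight_kg, t_min, half_life_min=120.0, vd_l_per_kg=1.5):
    """Estimate plasma concentration (mg/L) at time t_min after an IV bolus.

    Single-compartment, first-order elimination: C(t) = (dose / Vd) * 0.5 ** (t / t_half).
    """
    vd_litres = vd_l_per_kg * weight_kg
    c0 = dose_mg / vd_litres
    return c0 * 0.5 ** (t_min / half_life_min)

# Example: a 100 mg bolus in a 70 kg person gives an initial level of about 0.95 mg/L,
# falling to roughly half of that after one half-life (120 minutes).
print(round(plasma_concentration(100, 70, 0), 3), round(plasma_concentration(100, 70, 120), 3))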
Influence of temperature and pressure on conformational preference The dynamic transformation of lidocaine conformers in supercritical carbon dioxide (scCO2) highly depends on external factors such as pressure and temperature. Alterations in these conditions can lead to distinct conformations, impacting the molecule's physicochemical properties. One notable consequence of these variations is the particle size of lidocaine when produced through micronization using scCO2. Changes in the position of the amide group within the molecule can trigger a redistribution of intra- and intermolecular hydrogen bonds, affecting the outcome of the micronization process and the resultant particle size. Veterinary use Lidocaine is commonly used in veterinary medicine in both companion and production animals around the world and is listed as an essential veterinary medicine by the World Veterinary Association and also the World Small Animal Veterinary Association. In veterinary medicine, it is commonly used as a local anaesthetic both as an injectable or topical product. It provides excellent local anaesthesia when given by local infiltration into a tissue or via specific nerve blocks. These are commonly applied to nerves of the head, limbs, thorax, and spine. It can also be used to treat ventricular arrhythmias when given intravenously. In most veterinary species, when given via injection, it has a rapid onset of action (2-10 minutes) with a duration of action of 30-60 minutes. In veterinary species, its metabolism is much the same as humans with rapid metabolism in the liver to the major metabolites MEGX (monoethylglycine xylidide) and GX (glycine xylidide) that retain partial activity against sodium channels. These compounds are further metabolized to monoethylglycine and xylidide, respectively. Toxicity in animals is similar to that seen in humans with both toxicity to the central nervous system (CNS) and cardiovascular system observed. General the CNS signs are seen first with agitation and muscle twitching seen before the cardiovascular signs of hypotension, myocardial depression, and arrhythmias. Further CNS depression will result from higher doses with seizures and convulsions and eventually apnea and death. It is a component of the veterinary drug Tributame along with embutramide and chloroquine used to carry out euthanasia on horses and dogs. History Lidocaine, the first amino amide–type local anesthetic (previous were amino esters), was first synthesized under the name 'xylocaine' by Swedish chemist Nils Löfgren in 1943. His colleague Bengt Lundqvist performed the first injection anesthesia experiments on himself. It was first marketed in 1949. Society and culture Dosage forms Lidocaine, usually in the form of its hydrochloride salt, is available in various forms including many topical formulations and solutions for injection or infusion. It is also available as a transdermal patch, which is applied directly to the skin. Names Lidocaine is the International Nonproprietary Name (INN), British Approved Name (BAN), and Australian Approved Name (AAN), while lignocaine is the former BAN and AAN. Both the old and new names will be displayed on the product label in Australia until at least 2023. Xylocaine is a brand name, referring to the major synthetic building block 2,6-xylidine. The "ligno" prefix is chosen because "xylo" means wood in Greek while "ligno" means the same in Latin. The "lido" prefix instead refers to the fact that the drug is chemically related to acetanilide. 
Recreational use Lidocaine is not listed by the World Anti-Doping Agency as a substance whose use is banned in sport. It is used as an adjuvant, adulterant, and diluent to street drugs such as cocaine and heroin. It is one of the three common ingredients in site enhancement oil used by bodybuilders. Adulterant in cocaine Lidocaine is often added to cocaine as a diluent. Cocaine and lidocaine both numb the gums when applied. This gives the user the impression of high-quality cocaine, when in actuality the user is receiving a diluted product. Compendial status Japanese Pharmacopoeia 15 United States Pharmacopeia 31
Biology and health sciences
Anesthetics
Health
298834
https://en.wikipedia.org/wiki/Affine%20space
Affine space
In mathematics, an affine space is a geometric structure that generalizes some of the properties of Euclidean spaces in such a way that these are independent of the concepts of distance and measure of angles, keeping only the properties related to parallelism and ratio of lengths for parallel line segments. Affine space is the setting for affine geometry. As in Euclidean space, the fundamental objects in an affine space are called points, which can be thought of as locations in the space without any size or shape: zero-dimensional. Through any pair of points an infinite straight line can be drawn, a one-dimensional set of points; through any three points that are not collinear, a two-dimensional plane can be drawn; and, in general, through points in general position, a -dimensional flat or affine subspace can be drawn. Affine space is characterized by a notion of pairs of parallel lines that lie within the same plane but never meet each-other (non-parallel lines within the same plane intersect in a point). Given any line, a line parallel to it can be drawn through any point in the space, and the equivalence class of parallel lines are said to share a direction. Unlike for vectors in a vector space, in an affine space there is no distinguished point that serves as an origin. There is no predefined concept of adding or multiplying points together, or multiplying a point by a scalar number. However, for any affine space, an associated vector space can be constructed from the differences between start and end points, which are called free vectors, displacement vectors, translation vectors or simply translations. Likewise, it makes sense to add a displacement vector to a point of an affine space, resulting in a new point translated from the starting point by that vector. While points cannot be arbitrarily added together, it is meaningful to take affine combinations of points: weighted sums with numerical coefficients summing to 1, resulting in another point. These coefficients define a barycentric coordinate system for the flat through the points. Any vector space may be viewed as an affine space; this amounts to "forgetting" the special role played by the zero vector. In this case, elements of the vector space may be viewed either as points of the affine space or as displacement vectors or translations. When considered as a point, the zero vector is called the origin. Adding a fixed vector to the elements of a linear subspace (vector subspace) of a vector space produces an affine subspace of the vector space. One commonly says that this affine subspace has been obtained by translating (away from the origin) the linear subspace by the translation vector (the vector added to all the elements of the linear space). In finite dimensions, such an affine subspace is the solution set of an inhomogeneous linear system. The displacement vectors for that affine space are the solutions of the corresponding homogeneous linear system, which is a linear subspace. Linear subspaces, in contrast, always contain the origin of the vector space. The dimension of an affine space is defined as the dimension of the vector space of its translations. An affine space of dimension one is an affine line. An affine space of dimension 2 is an affine plane. An affine subspace of dimension in an affine space or a vector space of dimension is an affine hyperplane. 
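The two basic operations described above, together with the affine-combination construction, can be written compactly. The notation here (an affine space \(E\) with associated vector space \(\vec{E}\)) is introduced only for this sketch and follows common convention rather than anything fixed by the text:

\[
b - a \in \vec{E}, \qquad a + v \in E \qquad (a, b \in E,\ v \in \vec{E}),
\]
and, for coefficients with \(\lambda_1 + \dots + \lambda_k = 1\), the affine combination
\[
\lambda_1 a_1 + \dots + \lambda_k a_k \;:=\; o + \sum_{i=1}^{k} \lambda_i\,(a_i - o)
\]
is a well-defined point that does not depend on the auxiliary point \(o\).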
Informal description The following characterization may be easier to understand than the usual formal definition: an affine space is what is left of a vector space after one has forgotten which point is the origin (or, in the words of the French mathematician Marcel Berger, "An affine space is nothing more than a vector space whose origin we try to forget about, by adding translations to the linear maps"). Imagine that Alice knows that a certain point is the actual origin, but Bob believes that another point—call it —is the origin. Two vectors, and , are to be added. Bob draws an arrow from point to point and another arrow from point to point , and completes the parallelogram to find what Bob thinks is , but Alice knows that he has actually computed . Similarly, Alice and Bob may evaluate any linear combination of and , or of any finite set of vectors, and will generally get different answers. However, if the sum of the coefficients in a linear combination is 1, then Alice and Bob will arrive at the same answer. If Alice travels to then Bob can similarly travel to . Under this condition, for all coefficients , Alice and Bob describe the same point with the same linear combination, despite using different origins. While only Alice knows the "linear structure", both Alice and Bob know the "affine structure"—i.e. the values of affine combinations, defined as linear combinations in which the sum of the coefficients is 1. A set with an affine structure is an affine space. Definition While affine space can be defined axiomatically (see below), analogously to the definition of Euclidean space implied by Euclid's Elements, for convenience most modern sources define affine spaces in terms of the well developed vector space theory. An affine space is a set together with a vector space , and a transitive and free action of the additive group of on the set . The elements of the affine space are called points. The vector space is said to be associated to the affine space, and its elements are called vectors, translations, or sometimes free vectors. Explicitly, the definition above means that the action is a mapping, generally denoted as an addition, that has the following properties. Right identity: , where is the zero vector in Associativity: (here the last is the addition in ) Free and transitive action: For every , the mapping is a bijection. The first two properties are simply defining properties of a (right) group action. The third property characterizes free and transitive actions, the onto character coming from transitivity, and then the injective character follows from the action being free. There is a fourth property that follows from 1, 2 above: Existence of one-to-one translations For all , the mapping is a bijection. Property 3 is often used in the following equivalent form (the 5th property). Subtraction: For every in , there exists a unique , denoted , such that . Another way to express the definition is that an affine space is a principal homogeneous space for the action of the additive group of a vector space. Homogeneous spaces are, by definition, endowed with a transitive group action, and for a principal homogeneous space, such a transitive action is, by definition, free. Subtraction and Weyl's axioms The properties of the group action allows for the definition of subtraction for any given ordered pair of points in , producing a vector of . 
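The Alice-and-Bob observation above can be checked numerically. The sketch below, using NumPy with arbitrarily chosen points and origins (all values are made up for the example), evaluates a weighted sum of points relative to two different origins: the results agree exactly when the coefficients sum to 1 and differ otherwise.

import numpy as np

def combine(points, coeffs, origin):
    """Evaluate origin + sum_i coeffs[i] * (points[i] - origin)."""
    origin = np.asarray(origin, dtype=float)
    return origin + sum(c * (np.asarray(p, dtype=float) - origin)
                        for c, p in zip(coeffs, points))

points = [(1.0, 2.0), (3.0, -1.0), (0.0, 4.0)]
alice_origin = (0.0, 0.0)
bob_origin = (5.0, 5.0)

affine_coeffs = [0.2, 0.5, 0.3]    # sums to 1: the choice of origin does not matter
other_coeffs = [0.2, 0.5, 0.4]     # sums to 1.1: the choice of origin does matter

print(np.allclose(combine(points, affine_coeffs, alice_origin),
                  combine(points, affine_coeffs, bob_origin)))   # True
print(np.allclose(combine(points, other_coeffs, alice_origin),
                  combine(points, other_coeffs, bob_origin)))    # False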
This vector, denoted or , is defined to be the unique vector in such that Existence follows from the transitivity of the action, and uniqueness follows because the action is free. This subtraction has the two following properties, called Weyl's axioms: , there is a unique point such that The parallelogram property is satisfied in affine spaces, where it is expressed as: given four points the equalities and are equivalent. This results from the second Weyl's axiom, since Affine spaces can be equivalently defined as a point set , together with a vector space , and a subtraction satisfying Weyl's axioms. In this case, the addition of a vector to a point is defined from the first of Weyl's axioms. Affine subspaces and parallelism An affine subspace (also called, in some contexts, a linear variety, a flat, or, over the real numbers, a linear manifold) of an affine space is a subset of such that, given a point , the set of vectors is a linear subspace of . This property, which does not depend on the choice of , implies that is an affine space, which has as its associated vector space. The affine subspaces of are the subsets of of the form where is a point of , and a linear subspace of . The linear subspace associated with an affine subspace is often called its , and two subspaces that share the same direction are said to be parallel. This implies the following generalization of Playfair's axiom: Given a direction , for any point of there is one and only one affine subspace of direction , which passes through , namely the subspace . Every translation maps any affine subspace to a parallel subspace. The term parallel is also used for two affine subspaces such that the direction of one is included in the direction of the other. Affine map Given two affine spaces and whose associated vector spaces are and , an affine map or affine homomorphism from to is a map such that is a well defined linear map. By being well defined is meant that implies . This implies that, for a point and a vector , one has Therefore, since for any given in , for a unique , is completely defined by its value on a single point and the associated linear map . Endomorphisms An affine transformation or endomorphism of an affine space is an affine map from that space to itself. One important family of examples is the translations: given a vector , the translation map that sends for every in is an affine map. Another important family of examples are the linear maps centred at an origin: given a point and a linear map , one may define an affine map by for every in . After making a choice of origin , any affine map may be written uniquely as a combination of a translation and a linear map centred at . Vector spaces as affine spaces Every vector space may be considered as an affine space over itself. This means that every element of may be considered either as a point or as a vector. This affine space is sometimes denoted for emphasizing the double role of the elements of . When considered as a point, the zero vector is commonly denoted (or , when upper-case letters are used for points) and called the origin. If is another affine space over the same vector space (that is ) the choice of any point in defines a unique affine isomorphism, which is the identity of and maps to . In other words, the choice of an origin in allows us to identify and up to a canonical isomorphism. The counterpart of this property is that the affine space may be identified with the vector space in which "the place of the origin has been forgotten". 
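In the same illustrative notation as before (a conventional rendering, not notation taken from the text), the subtraction map, Weyl's axioms, and the parallelogram property read:

\[
b - a \ \text{is the unique}\ v \in \vec{E}\ \text{with}\ a + v = b,
\]
\[
(c - b) + (b - a) = c - a, \qquad \text{and for every } a \in E,\ v \in \vec{E} \text{ there is a unique } b \text{ with } b - a = v,
\]
\[
b - a = d - c \iff c - a = d - b .
\]
Likewise, an affine map \(f : E \to F\) is one for which \(\vec{f}(b - a) := f(b) - f(a)\) is a well-defined linear map, so that \(f(a + v) = f(a) + \vec{f}(v)\).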
Relation to Euclidean spaces Definition of Euclidean spaces Euclidean spaces (including the one-dimensional line, two-dimensional plane, and three-dimensional space commonly studied in elementary geometry, as well as higher-dimensional analogues) are affine spaces. Indeed, in most modern definitions, a Euclidean space is defined to be an affine space, such that the associated vector space is a real inner product space of finite dimension, that is a vector space over the reals with a positive-definite quadratic form . The inner product of two vectors and is the value of the symmetric bilinear form The usual Euclidean distance between two points and is In older definition of Euclidean spaces through synthetic geometry, vectors are defined as equivalence classes of ordered pairs of points under equipollence (the pairs and are equipollent if the points (in this order) form a parallelogram). It is straightforward to verify that the vectors form a vector space, the square of the Euclidean distance is a quadratic form on the space of vectors, and the two definitions of Euclidean spaces are equivalent. Affine properties In Euclidean geometry, the common phrase "affine property" refers to a property that can be proved in affine spaces, that is, it can be proved without using the quadratic form and its associated inner product. In other words, an affine property is a property that does not involve lengths and angles. Typical examples are parallelism, and the definition of a tangent. A non-example is the definition of a normal. Equivalently, an affine property is a property that is invariant under affine transformations of the Euclidean space. Affine combinations and barycenter Let be a collection of points in an affine space, and be elements of the ground field. Suppose that . For any two points and one has Thus, this sum is independent of the choice of the origin, and the resulting vector may be denoted When , one retrieves the definition of the subtraction of points. Now suppose instead that the field elements satisfy . For some choice of an origin , denote by the unique point such that One can show that is independent from the choice of . Therefore, if one may write The point is called the barycenter of the for the weights . One says also that is an affine combination of the with coefficients . Examples When children find the answers to sums such as or by counting right or left on a number line, they are treating the number line as a one-dimensional affine space. Time can be modelled as a one-dimensional affine space. Specific points in time (such as a date on the calendar) are points in the affine space, while durations (such as a number of days) are displacements. The space of energies is an affine space for , since it is often not meaningful to talk about absolute energy, but it is meaningful to talk about energy differences. The vacuum energy when it is defined picks out a canonical origin. Physical space is often modelled as an affine space for in non-relativistic settings and in the relativistic setting. To distinguish them from the vector space these are sometimes called Euclidean spaces and . Any coset of a subspace of a vector space is an affine space over that subspace. 
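The barycenter construction just described can be summarized as follows (same illustrative notation as in the earlier sketches). If the weights satisfy \(\sum_i \lambda_i = 1\), the barycenter \(g\) of the points \(a_i\) is the unique point with

\[
\sum_i \lambda_i\,(a_i - g) = 0, \qquad \text{equivalently} \qquad g = o + \sum_i \lambda_i\,(a_i - o) \quad \text{for any choice of } o,
\]
while if \(\sum_i \lambda_i = 0\) the expression \(\sum_i \lambda_i\,(a_i - o)\) is a vector that does not depend on \(o\), which is the case treated first above.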
In particular, a line in the plane that doesn't pass through the origin is an affine space that is not a vector space relative to the operations it inherits from , although it can be given a canonical vector space structure by picking the point closest to the origin as the zero vector; likewise in higher dimensions and for any normed vector space If is a matrix and lies in its column space, the set of solutions of the equation is an affine space over the subspace of solutions of . The solutions of an inhomogeneous linear differential equation form an affine space over the solutions of the corresponding homogeneous linear equation. Generalizing all of the above, if is a linear map and lies in its image, the set of solutions to the equation is a coset of the kernel of , and is therefore an affine space over . The space of (linear) complementary subspaces of a vector subspace in a vector space is an affine space, over . That is, if is a short exact sequence of vector spaces, then the space of all splittings of the exact sequence naturally carries the structure of an affine space over . The space of connections (viewed from the vector bundle , where is a smooth manifold) is an affine space for the vector space of valued 1-forms. The space of connections (viewed from the principal bundle ) is an affine space for the vector space of -valued 1-forms, where is the associated adjoint bundle. Affine span and bases For any non-empty subset of an affine space , there is a smallest affine subspace that contains it, called the affine span of . It is the intersection of all affine subspaces containing , and its direction is the intersection of the directions of the affine subspaces that contain . The affine span of is the set of all (finite) affine combinations of points of , and its direction is the linear span of the for and in . If one chooses a particular point , the direction of the affine span of is also the linear span of the for in . One says also that the affine span of is generated by and that is a generating set of its affine span. A set of points of an affine space is said to be or, simply, independent, if the affine span of any strict subset of is a strict subset of the affine span of . An or barycentric frame (see , below) of an affine space is a generating set that is also independent (that is a minimal generating set). Recall that the dimension of an affine space is the dimension of its associated vector space. The bases of an affine space of finite dimension are the independent subsets of elements, or, equivalently, the generating subsets of elements. Equivalently, } is an affine basis of an affine space if and only if } is a linear basis of the associated vector space. Coordinates There are two strongly related kinds of coordinate systems that may be defined on affine spaces. Barycentric coordinates Let be an affine space of dimension over a field , and be an affine basis of . The properties of an affine basis imply that for every in there is a unique -tuple of elements of such that and The are called the barycentric coordinates of over the affine basis . If the are viewed as bodies that have weights (or masses) , the point is thus the barycenter of the , and this explains the origin of the term barycentric coordinates. The barycentric coordinates define an affine isomorphism between the affine space and the affine subspace of defined by the equation . For affine spaces of infinite dimension, the same definition applies, using only finite sums. 
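One of the examples above, the solution set of an inhomogeneous linear system, can be verified numerically. The sketch below uses NumPy with an arbitrarily chosen system (all values are illustrative): it builds one particular solution, produces a second solution by adding an element of the kernel, and checks that the difference of the two solutions solves the homogeneous system, as the affine-space description predicts.

import numpy as np

def null_space(A, tol=1e-12):
    """Columns spanning {x : A x = 0}, computed from the SVD."""
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:].T

# An underdetermined system with full row rank, so b lies in the column space.
A = np.array([[1.0, 2.0, 0.0, -1.0],
              [0.0, 1.0, 1.0,  3.0]])
b = np.array([3.0, 1.0])

x_p = np.linalg.lstsq(A, b, rcond=None)[0]     # a particular solution
N = null_space(A)                               # directions of the affine subspace
x_q = x_p + N @ np.array([0.7, -1.2])           # another solution of A x = b

print(np.allclose(A @ x_p, b), np.allclose(A @ x_q, b))   # True True
print(np.allclose(A @ (x_q - x_p), 0.0))                   # True: the difference is in the kernel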
This means that for each point, only a finite number of coordinates are non-zero. Affine coordinates An affine frame is a coordinate frame of an affine space, consisting of a point, called the origin, and a linear basis of the associated vector space. More precisely, for an affine space with associated vector space , the origin belongs to , and the linear basis is a basis of (for simplicity of the notation, we consider only the case of finite dimension, the general case is similar). For each point of , there is a unique sequence of elements of the ground field such that or equivalently The are called the affine coordinates of over the affine frame . Example: In Euclidean geometry, Cartesian coordinates are affine coordinates relative to an orthonormal frame, that is an affine frame such that is an orthonormal basis. Relationship between barycentric and affine coordinates Barycentric coordinates and affine coordinates are strongly related, and may be considered as equivalent. In fact, given a barycentric frame one deduces immediately the affine frame and, if are the barycentric coordinates of a point over the barycentric frame, then the affine coordinates of the same point over the affine frame are Conversely, if is an affine frame, then is a barycentric frame. If are the affine coordinates of a point over the affine frame, then its barycentric coordinates over the barycentric frame are Therefore, barycentric and affine coordinates are almost equivalent. In most applications, affine coordinates are preferred, as involving less coordinates that are independent. However, in the situations where the important points of the studied problem are affinely independent, barycentric coordinates may lead to simpler computation, as in the following example. Example of the triangle The vertices of a non-flat triangle form an affine basis of the Euclidean plane. The barycentric coordinates allows easy characterization of the elements of the triangle that do not involve angles or distances: The vertices are the points of barycentric coordinates , and . The lines supporting the edges are the points that have a zero coordinate. The edges themselves are the points that have one zero coordinate and two nonnegative coordinates. The interior of the triangle are the points whose coordinates are all positive. The medians are the points that have two equal coordinates, and the centroid is the point of coordinates . Change of coordinates Case of barycentric coordinates Barycentric coordinates are readily changed from one basis to another. Let and be affine bases of . For every in there is some tuple for which Similarly, for every from the first basis, we now have in the second basis for some tuple . Now we can rewrite our expression in the first basis as one in the second with giving us coordinates in the second basis as the tuple . Case of affine coordinates Affine coordinates are also readily changed from one basis to another. Let , and , be affine frames of . For each point of , there is a unique sequence of elements of the ground field such that and similarly, for every from the first basis, we now have in the second basis for tuple and tuples . Now we can rewrite our expression in the first basis as one in the second with giving us coordinates in the second basis as the tuple . 
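The conversion between the two coordinate systems described above can be written out explicitly. In the sketch below (conventional notation introduced for illustration), the barycentric frame is \((o_0, o_1, \dots, o_n)\) and the associated affine frame is \((o_0,\ o_1 - o_0, \dots, o_n - o_0)\):

\[
(x_1, \dots, x_n) \ \text{(affine)} \;\longmapsto\; \Bigl(1 - \sum_{i=1}^{n} x_i,\; x_1, \dots, x_n\Bigr) \ \text{(barycentric)},
\]
\[
(\lambda_0, \lambda_1, \dots, \lambda_n) \ \text{(barycentric, with } \textstyle\sum_i \lambda_i = 1\text{)} \;\longmapsto\; (\lambda_1, \dots, \lambda_n) \ \text{(affine)}.
\]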
Properties of affine homomorphisms Matrix representation An affine transformation is executed on a projective space of , by a 4 by 4 matrix with a special fourth column: The transformation is affine instead of linear due to the inclusion of point , the transformed output of which reveals the affine shift. Image and fibers Let be an affine homomorphism, with its associated linear map. The image of is the affine subspace of , which has as associated vector space. As an affine space does not have a zero element, an affine homomorphism does not have a kernel. However, the linear map does, and if we denote by its kernel, then for any point of , the inverse image of is an affine subspace of whose direction is . This affine subspace is called the fiber of . Projection An important example is the projection parallel to some direction onto an affine subspace. The importance of this example lies in the fact that Euclidean spaces are affine spaces, and that these kinds of projections are fundamental in Euclidean geometry. More precisely, given an affine space with associated vector space , let be an affine subspace of direction , and be a complementary subspace of in (this means that every vector of may be decomposed in a unique way as the sum of an element of and an element of ). For every point of , its projection to parallel to is the unique point in such that This is an affine homomorphism whose associated linear map is defined by for and in . The image of this projection is , and its fibers are the subspaces of direction . Quotient space Although kernels are not defined for affine spaces, quotient spaces are defined. This results from the fact that "belonging to the same fiber of an affine homomorphism" is an equivalence relation. Let be an affine space, and be a linear subspace of the associated vector space . The quotient of by is the quotient of by the equivalence relation such that and are equivalent if This quotient is an affine space, which has as associated vector space. For every affine homomorphism , the image is isomorphic to the quotient of by the kernel of the associated linear map. This is the first isomorphism theorem for affine spaces. Axioms Affine spaces are usually studied by analytic geometry using coordinates, or equivalently vector spaces. They can also be studied as synthetic geometry by writing down axioms, though this approach is much less common. There are several different systems of axioms for affine space. axiomatizes the special case of affine geometry over the reals as ordered geometry together with an affine form of Desargues's theorem and an axiom stating that in a plane there is at most one line through a given point not meeting a given line. Affine planes satisfy the following axioms : (in which two lines are called parallel if they are equal or disjoint): Any two distinct points lie on a unique line. Given a point and line there is a unique line that contains the point and is parallel to the line There exist three non-collinear points. As well as affine planes over fields (or division rings), there are also many non-Desarguesian planes satisfying these axioms. gives axioms for higher-dimensional affine spaces. Purely axiomatic affine geometry is more general than affine spaces and is treated in a separate article. Relation to projective spaces Affine spaces are contained in projective spaces. 
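The "special fourth column" mentioned above is the usual homogeneous-coordinate representation. Written out for the three-dimensional case (a sketch in conventional notation, with \(A\) the matrix of the associated linear map and \(b\) the translation vector), an affine map \(x \mapsto Ax + b\) becomes a single matrix product:

\[
\begin{pmatrix} y \\ 1 \end{pmatrix}
=
\begin{pmatrix} A & b \\ 0 & 1 \end{pmatrix}
\begin{pmatrix} x \\ 1 \end{pmatrix},
\qquad y = Ax + b,
\]
where the bottom row is \((0\ 0\ 0\ 1)\); composing affine maps then reduces to multiplying such \(4 \times 4\) matrices.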
For example, an affine plane can be obtained from any projective plane by removing one line and all the points on it, and conversely any affine plane can be used to construct a projective plane as a closure by adding a line at infinity whose points correspond to equivalence classes of parallel lines. Similar constructions hold in higher dimensions. Further, transformations of projective space that preserve affine space (equivalently, that leave the hyperplane at infinity invariant as a set) yield transformations of affine space. Conversely, any affine linear transformation extends uniquely to a projective linear transformation, so the affine group is a subgroup of the projective group. For instance, Möbius transformations (transformations of the complex projective line, or Riemann sphere) are affine (transformations of the complex plane) if and only if they fix the point at infinity. Affine algebraic geometry In algebraic geometry, an affine variety (or, more generally, an affine algebraic set) is defined as the subset of an affine space that is the set of the common zeros of a set of so-called polynomial functions over the affine space. For defining a polynomial function over the affine space, one has to choose an affine frame. Then, a polynomial function is a function such that the image of any point is the value of some multivariate polynomial function of the coordinates of the point. As a change of affine coordinates may be expressed by linear functions (more precisely affine functions) of the coordinates, this definition is independent of a particular choice of coordinates. The choice of a system of affine coordinates for an affine space of dimension over a field induces an affine isomorphism between and the affine coordinate space . This explains why, for simplification, many textbooks write , and introduce affine algebraic varieties as the common zeros of polynomial functions over . As the whole affine space is the set of the common zeros of the zero polynomial, affine spaces are affine algebraic varieties. Ring of polynomial functions By the definition above, the choice of an affine frame of an affine space allows one to identify the polynomial functions on with polynomials in variables, the ith variable representing the function that maps a point to its th coordinate. It follows that the set of polynomial functions over is a -algebra, denoted , which is isomorphic to the polynomial ring . When one changes coordinates, the isomorphism between and changes accordingly, and this induces an automorphism of , which maps each indeterminate to a polynomial of degree one. It follows that the total degree defines a filtration of , which is independent from the choice of coordinates. The total degree defines also a graduation, but it depends on the choice of coordinates, as a change of affine coordinates may map indeterminates on non-homogeneous polynomials. Zariski topology Affine spaces over topological fields, such as the real or the complex numbers, have a natural topology. The Zariski topology, which is defined for affine spaces over any field, allows use of topological methods in any case. Zariski topology is the unique topology on an affine space whose closed sets are affine algebraic sets (that is sets of the common zeros of polynomial functions over the affine set). As, over a topological field, polynomial functions are continuous, every Zariski closed set is closed for the usual topology, if any. 
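As a small illustration of the definition just given (coordinates and symbols introduced here for the example), an affine algebraic set is the common zero locus of finitely many polynomial functions expressed in affine coordinates:

\[
V(f_1, \dots, f_m) \;=\; \{\, x \in k^n \;:\; f_1(x) = \dots = f_m(x) = 0 \,\},
\]
for instance the circle \(V(x_1^2 + x_2^2 - 1) \subset k^2\); taking the zero polynomial gives the whole space, matching the remark that an affine space is itself an affine algebraic variety.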
In other words, over a topological field, Zariski topology is coarser than the natural topology. There is a natural injective function from an affine space into the set of prime ideals (that is the spectrum) of its ring of polynomial functions. When affine coordinates have been chosen, this function maps the point of coordinates to the maximal ideal . This function is a homeomorphism (for the Zariski topology of the affine space and of the spectrum of the ring of polynomial functions) of the affine space onto the image of the function. The case of an algebraically closed ground field is especially important in algebraic geometry, because, in this case, the homeomorphism above is a map between the affine space and the set of all maximal ideals of the ring of functions (this is Hilbert's Nullstellensatz). This is the starting idea of scheme theory of Grothendieck, which consists, for studying algebraic varieties, of considering as "points", not only the points of the affine space, but also all the prime ideals of the spectrum. This allows gluing together algebraic varieties in a similar way as, for manifolds, charts are glued together for building a manifold. Cohomology Like all affine varieties, local data on an affine space can always be patched together globally: the cohomology of affine space is trivial. More precisely, for all coherent sheaves F, and integers . This property is also enjoyed by all other affine varieties. But also all of the étale cohomology groups on affine space are trivial. In particular, every line bundle is trivial. More generally, the Quillen–Suslin theorem implies that every algebraic vector bundle over an affine space is trivial.
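The vanishing statement above, with the index condition made explicit (a conventional rendering of the standard result; the notation \(\mathbb{A}^n_k\) for affine n-space over a field \(k\) is introduced here):

\[
H^i\bigl(\mathbb{A}^n_k, \mathcal{F}\bigr) = 0 \qquad \text{for every coherent sheaf } \mathcal{F} \text{ and every integer } i > 0 .
\]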
Mathematics
Other algebra topics
null
298931
https://en.wikipedia.org/wiki/Jujube
Jujube
Jujube (UK ; US or ), sometimes jujuba, scientific name Ziziphus jujuba, and also called red date, Chinese date, and Chinese jujube, is a species in the genus Ziziphus in the buckthorn family Rhamnaceae. It is often confused with the closely related Indian jujube, Z. mauritiana. The Chinese jujube enjoys a diverse range of climates from temperate to tropical, whereas the Indian jujube is restricted to warmer subtropical and tropical climates. Description It is a small deciduous tree or shrub reaching a height of , usually with thorny branches. The leaves are shiny-green, ovate-acute, long and wide, with three conspicuous veins at the base, and a finely toothed margin. The flowers are small, wide, with five inconspicuous yellowish-green petals. The fruit is an edible oval drupe deep; when immature it is smooth-green, with the consistency and taste of an apple with lower acidity, maturing brown to purplish-black, and eventually wrinkled, looking like a small date. There is a single hard kernel, similar to an olive pit, containing two seeds. Chemistry The leaves contain saponin and ziziphin, which suppresses the ability to perceive sweet taste. Flavonoids found in the fruit include kaempferol 3-O-rutinoside, quercetin 3-O-robinobioside, and quercetin 3-O-rutinoside. Terpenoids such as colubrinic acid and alphitolic acid have also been found in the fruit. Taxonomy The ultimate source of the name is Ancient Greek zízyphon. This was borrowed into Classical Latin as (used for the fruit) and (the tree). A descendant of the Latin word into a Romance language, which may have been French or medieval Latin , in turn gave rise to the common English jujube. This name is not related to jojoba, which is a loan from Spanish , itself borrowed from hohohwi, the name of that plant in the Oʼodham language. The binomial name has a curious nomenclatural history, due to a combination of botanical naming regulations and variations in spelling. It was first named in the binomial system by Carl Linnaeus as Rhamnus zizyphus, in Species Plantarum (1753). Philip Miller, in his Gardener's Dictionary, considered that the jujube and its relatives were sufficiently distinct from Rhamnus to be placed in a separate genus (as it had already been by the pre-Linnaean author Tournefort in 1700), and in the 1768 edition he gave it the name Ziziphus jujuba (using Tournefort's spelling for the genus name). For the species name he used a different epithet, as tautonyms (repetition of exactly the same name for the genus and species) are not permitted in botanical naming. However, because of Miller's slightly different spelling, the combination of the earlier species name (from Linnaeus) with the new genus, Ziziphus zizyphus, is not a tautonym, and was therefore permitted as a botanical name. This combination was made by Hermann Karsten in 1882. In 2006, a proposal was made to suppress the name Ziziphus zizyphus in favor of Ziziphus jujuba, and this proposal was accepted in 2011. Ziziphus jujuba is thus the correct scientific name for this species. Distribution and habitat Its precise natural distribution is uncertain due to extensive cultivation. However, its origin is thought to be in southwest Asia, between Lebanon, northern India, and southern and central China, and possibly also southeastern Europe, though more likely introduced there. It grows wild but is also a garden shrub, kept for its fruit. The tree tolerates a wide range of temperatures and rainfall, though it requires hot summers and sufficient water for acceptable fruiting.
Unlike most of the other species in the genus, it tolerates fairly cold winters, surviving temperatures down to about , and the tree is, for instance, commonly cultivated in Beijing. This wide tolerance enables the jujube to grow in mountain or desert habitats, provided there is access to underground water throughout the summer. The jujube (Z. jujuba) grows in cooler regions of Asia. Five or more other species of Ziziphus are widely distributed in milder climates to hot deserts of Asia and Africa. This plant has been introduced in Madagascar and grows as an invasive species in the western part of the island, threatening mostly protected areas. It is cultivated in parts of southern California. Ecology Witch's broom, prevalent in China and Korea, is the main disease affecting jujubes, though plantings in North America currently are not affected by any pests or diseases. In Europe, the last several years have seen some 80%–90% of the jujube crop eaten by insect larvae (see picture), including those of the false codling moth, Thaumatotibia (Cryptophlebia) leucotreta. In Madagascar, it is widely eaten by free-ranging zebus, and its seeds grow easily in zebu feces. Cultivation Jujube was domesticated in South Asia by 9000 BC. Over 400 cultivars have been selected. The fruit, when the plant is kept as a garden shrub, is picked in the autumn. Varieties Chico (also called GI 7-62) developed by the United States Department of Agriculture (USDA) in the 1950s Li, major commercial variety in the US Shanxi li, very large fruit Lang, major commercial variety in the US Sherwood Silverhill (also known as Yu and Tigertooth) can be grown in areas with high humidity So Shui Men GA 866 Honey jar, small juicy fruit Sugar cane Winter delight, major commercial variety in China Uses Culinary The freshly harvested, as well as the candied dried fruit, are often eaten as a snack, or with coffee. Smoked jujubes are consumed in Vietnam and are referred to as black jujubes. A drink can be made by crushing the pulp in water. Both China and Korea produce a sweetened tea syrup containing jujube fruit in glass jars, and canned jujube tea or jujube tea in the form of teabags. To a lesser extent, jujube fruit is made into juice and jujube vinegar (called 枣 醋 or 红枣 醋 in Chinese). They are used for making pickles (কুলের আচার) in west Bengal and Bangladesh. In Assam it is known as "Bogori" and the pickle, Bogori aachar (বগৰি আচাৰ), is famous. In China, a wine made from jujube fruit is called hong zao jiu (红枣酒). Sometimes pieces of jujube fruit are preserved by storing them in a jar filled with baijiu (Chinese liquor), which allows them to be kept fresh for a long time, especially through the winter. Such jujubes are called zui zao (醉枣; literally "drunk jujube"). The fruit is also a significant ingredient in a wide variety of Chinese delicacies (e.g. 甑糕 jing gao, a steamed rice cake). In Vietnam and Taiwan, fully mature, nearly ripe fruit is harvested and sold on the local markets and also exported to Southeast Asian countries. The dried fruit is used in desserts in China and Vietnam, such as ching bo leung, a cold beverage that includes the dried jujube, longan, fresh seaweed, barley, and lotus seeds. In Korea, jujubes are called daechu (대추) and are used in daechucha and samgyetang. On his visit to Medina, the 19th-century English explorer, Sir Richard Burton, observed that the local variety of jujube fruit was widely eaten. He describes its taste as like "a bad plum, an unripe cherry, and an insipid apple". 
He gives the local names for three varieties as "Hindi (Indian), Baladi (native), Tamri (date-like)." A hundred years ago, a close variety was common in the Jordan valley and around Jerusalem. The bedouin valued the fruit, calling it nabk. It could be dried and kept for winter or made into a paste which was used as bread. In Persian cuisine, the dried drupes are known as annab, while in neighboring Armenia, it is commonly eaten as a snack, and is known as unab. Confusion in the common name apparently is widespread. The unab is Z. jujuba. Rather, ber is used for three other cultivated or wild species, e.g., Z. spina-christi, Z. mauritiana and Z. nummularia in parts of India and is eaten both fresh and dried. The Arabic name sidr is used for Ziziphus species other than Z. jujuba. Traditionally in India, the fruits are dried in the sun and the hard seeds removed, after which the dried flesh is pounded with tamarind, red chillies, salt, and jaggery. In some parts of the Indian state of Tamil Nadu, fresh whole ripe fruit is crushed with the above ingredients and sun-dried to make cakes called ilanthai vadai or regi vadiyalu (Telugu). It is also commonly consumed as a snack. In Northern and Northeastern India the fruit is eaten fresh with salt and chilli flakes and also preserved as candy, jam or pickle with oil and spices. In Madagascar, jujube fruit is eaten fresh or dried. People also use it to make jam. A jujube honey is produced in the Atlas Mountains of Morocco. Italy has an alcoholic syrup called brodo di giuggiole. In Croatia, especially Dalmatia, jujubes are used in marmalades, juices, and rakija (fruit brandy). In Senegal and The Gambia, Jujube is called Sii dem or Ceedem, and the fruit is used as snack and also turned into a dried paste favoured as a sweetmeat by schoolchildren. More recently it has been processed and sold in Dakar by women. In Australia jujube beer is made. The commercial jujube candy popular in movie theaters originally contained jujube juice but now uses other flavorings. In Laoling, China, jujube juice and wine are made. Traditional Chinese medicine The fruit and its seeds are used in Traditional Chinese Medicine and Kampo for many purposes. Some investigational research indicates possibilities related to their traditional use to alleviate stress and for sedation. In these systems, it is also believed to have uses as an antiseptic/antifungal agent, anti-inflammatory, contraceptive, and muscle relaxer. It is also thought to help in regulation of blood pressure, stimulate the immune system, prevent ulcers and aid in wound healing. Jujube fruit is also combined with other herbs to treat colds and influenza. It is used to protect and heal the kidneys, heart, and spleen. Jujube is also one of the ingredients used in Chinese medicine to modulate the effects of other herbs, preventing overpowering effects or clashing properties. The fruit contains many different healthy properties like vitamins and amino acids. Other uses In Japan, the natsume has given its name to a style of tea caddy used in the Japanese tea ceremony, due to the similar shape. Its hard, oily wood was, along with pear, used for woodcuts to print books starting in the 8th century and continuing through the 19th in China and neighboring countries. As many as 2000 copies could be produced from one jujube woodcut. The timber is sometimes used for small items, such as tuning pegs for instruments. 
Select-grade jujube timber is often used in traditional Asian instruments for fingerboards, pegs, rests and soundposts, ribs and necks, and other parts. It has a medium to hard density similar to luthier-grade European maple and has excellent tonal qualities. Jujube wood can be found in local folk instruments from Ceylon and India through to China and Korea; it is also commonly used in China in violin and cello making for overseas export, though usually stained black to imitate the look of ebony. Luthier-grade jujube wood planes and carves beautifully. Culture In Arabic-speaking regions the jujube, and alternatively the species Z. lotus, are closely related to the lote-trees (sing. سدرة sidrah, pl. سدر sidr), which are mentioned in the Quran, while in Palestine the species Z. spina-christi is called sidr. An ancient jujube tree in the city of Al-Qurnah, Iraq, is claimed by locals to be the Tree of Knowledge mentioned in the Bible. Local tradition holds that the place where the city was built was the original site of the Garden of Eden (a passage in the Book of Genesis creation narrative says that a river flowed from the garden and split into the Tigris and Euphrates rivers, near where the city now stands). The tree is a tourist spot in the town. The jujube tree is also important in Hinduism: Vishnu is worshipped at a major temple in Badrinath, whose name comes from the Sanskrit compound Badarīnātha, consisting of the terms badarī (jujube tree) and nātha (lord), an epithet of Vishnu. The site is also known as Badarikashrama.
Biology and health sciences
Stone fruits
Plants
299014
https://en.wikipedia.org/wiki/Intensive%20care%20medicine
Intensive care medicine
Intensive care medicine, usually called critical care medicine, is a medical specialty that deals with seriously or critically ill patients who have, are at risk of, or are recovering from conditions that may be life-threatening. It includes providing life support, invasive monitoring techniques, resuscitation, and end-of-life care. Doctors in this specialty are often called intensive care physicians, critical care physicians, or intensivists. Intensive care relies on multidisciplinary teams composed of many different health professionals. Such teams often include doctors, nurses, physical therapists, respiratory therapists, and pharmacists, among others. They usually work together in intensive care units (ICUs) within a hospital. Scope Patients are admitted to the intensive care unit if their medical needs are greater than what the general hospital ward can provide. Indications for the ICU include blood pressure support for cardiovascular instability (hypertension/hypotension), sepsis, post-cardiac arrest syndrome or certain cardiac arrhythmias. Other ICU needs include airway or ventilator support due to respiratory compromise. The cumulative effects of multiple organ failure, more commonly referred to as multiple organ dysfunction syndrome, also requires advanced care. Patients may also be admitted to the ICU for close monitoring or intensive needs following a major surgery. There are two common ICU structures: closed and open. In a closed unit, the intensivist takes on the primary role for all patients in the unit. In an open ICU, the primary physician, who may or may not be an intensivist, can differ for each patient. There is increasingly strong evidence that closed units provide better patient outcomes. Patient management in intensive care differs between countries. Open units are the most common structure in the United States, but closed units are often found at large academic centers. Intermediate structures that fall between open and closed units also exist. Types of intensive care units Intensive care is usually provided in a specialized unit of a hospital called the intensive care unit (ICU) or critical care unit (CCU). Many hospitals also have designated intensive care areas for certain specialities of medicine. The naming is not rigidly standardized, and types of units are dictated by the needs and available resources of each hospital. These include: coronary intensive care unit (CCU or sometimes CICU) for heart disease medical intensive care unit (MICU) surgical intensive care unit (SICU) pediatric intensive care unit (PICU) pediatric cardiac intensive care unit (PCICU) neuroscience critical care unit (NCCU) overnight intensive-recovery (OIR) shock/trauma intensive-care unit (STICU) neonatal intensive care unit (NICU) ICU in the emergency department (E-ICU) Medical studies suggest a positive correlation between ICU volume and quality of care for mechanically ventilated patients. After adjustment for severity of illness, demographic variables, and characteristics of the ICUs (including staffing by intensivists), higher ICU volume was significantly associated with lower ICU and hospital mortality rates. For example, adjusted ICU mortality (for a patient at average predicted risk for ICU death) was 21.2% in hospitals with 87 to 150 mechanically ventilated patients annually, and 14.5% in hospitals with 401 to 617 mechanically ventilated patients annually. Hospitals with intermediate numbers of patients had outcomes between these extremes. 
ICU delirium, formerly and inaccurately referred to as ICU psychosis, is a syndrome common in intensive care and cardiac units where patients who are in unfamiliar, monotonous surroundings develop symptoms of delirium (Maxmen & Ward, 1995). This may include interpreting machine noises as human voices, seeing walls quiver, or hallucinating that someone is tapping them on the shoulder. Systematic reviews have found that interventions to promote sleep in the ICU have a beneficial impact on the overall health of ICU patients. History The English nurse Florence Nightingale pioneered efforts to use a separate hospital area for critically injured patients. During the Crimean War in the 1850s, she introduced the practice of moving the sickest patients to the beds directly opposite the nursing station on each ward so that they could be monitored more closely. In 1923, the American neurosurgeon Walter Dandy created a three-bed unit at the Johns Hopkins Hospital. In this unit, specially trained nurses cared for critically ill postoperative neurosurgical patients. The Danish anaesthesiologist Bjørn Aage Ibsen became involved in the 1952 poliomyelitis epidemic in Copenhagen, where 2,722 patients developed the illness in a six-month period, with 316 of those developing some form of respiratory or airway paralysis. Some of these patients had been treated using the few available negative pressure ventilators, but these devices (while helpful) were limited in number and did not protect the patient's lungs from aspiration of secretions. Ibsen changed the management directly by instituting long-term positive pressure ventilation using tracheal intubation, and he enlisted 200 medical students to manually pump oxygen and air into the patients' lungs around the clock. At this time, Carl-Gunnar Engström had developed one of the first artificial positive-pressure volume-controlled ventilators, which eventually replaced the medical students. With the change in care, mortality during the epidemic declined from 90% to around 25%. Patients were managed in three special 35-bed areas, which aided the charting of medications and other management. In 1953, Ibsen set up what became the world's first intensive care unit in a converted student nurse classroom in Copenhagen Municipal Hospital. He provided one of the first accounts of the management of tetanus using neuromuscular-blocking drugs and controlled ventilation. The following year, Ibsen was elected head of the department of anaesthesiology at that institution. He jointly authored the first known account of intensive care management principles in the journal Nordisk Medicin, with Tone Dahl Kvittingen from Norway. For a time in the early 1960s, it was not clear that specialized intensive care units were needed, so intensive care resources were brought to the room of the patient who needed the additional monitoring, care, and resources. It rapidly became evident, however, that a fixed location where intensive care resources and dedicated personnel were available provided better care than ad hoc provision of intensive care services spread throughout a hospital. In 1962, the first critical care residency in the United States was established at the University of Pittsburgh. In 1970, the Society of Critical Care Medicine was formed. Monitoring Monitoring refers to various tools and technologies used to obtain information about a patient's condition.
These can include tests to evaluate blood flow and gas exchange in the body, or to assess the function of organs such as the heart and lungs. Broadly, there are two common types of monitoring in the ICU: noninvasive and invasive. Noninvasive monitoring Noninvasive monitoring does not require puncturing the skin and usually does not cause pain. These tools are less expensive, easier to perform, and return results faster. Vital signs, which include heart rate, blood pressure, breathing rate, and body temperature Capnography to confirm correct position of an endotracheal tube and estimate adequacy of ventilation in mechanically ventilated patients Echocardiogram to evaluate the function and structure of the heart Electroencephalography (EEG) to assess electrical activity of the brain Electrocardiogram to detect abnormal heart rhythms, electrolyte disturbances, and coronary blood flow Pulse oximetry for monitoring oxygen levels in the blood Thoracic electric bioimpedance (TEB) cardiography to monitor fluid status and heart function Ultrasound to evaluate internal structures including the heart, lungs, gallbladder, liver, kidneys, bladder, and blood vessels Invasive monitoring Invasive monitoring generally provides more accurate measurements, but these tests may require blood draws or puncturing the skin, and can be painful or uncomfortable. Arterial line to directly monitor blood pressure and obtain arterial blood gas measurements Blood draws or venipuncture to monitor various blood components as well as administer therapeutic treatments Intracranial pressure monitoring to assess pressures inside the skull and on the brain Intravesicular manometry (bladder pressure) measurements to assess intra-abdominal pressure Central line and peripherally inserted central catheter (PICC) lines for drug infusions, fluids or total parenteral nutrition Bronchoscopy to examine the lungs and airways and sample fluid within the lungs Pulmonary artery catheter to monitor the function of the heart, blood volume, and tissue oxygenation Procedures and treatments Intensive care usually takes a system-by-system approach to treatment. In alphabetical order, the key systems considered in the intensive care setting are: airway management and anaesthesia, cardiovascular system, central nervous system, endocrine system, gastro-intestinal tract (and nutritional condition), hematology, integumentary system, microbiology (including sepsis status), renal (and metabolic), and respiratory system. Airway management and anaesthesia Bag valve mask ventilation and laryngoscopy Induction and maintenance of anaesthesia and sedation, including rapid sequence induction for endotracheal intubation to facilitate mechanical ventilation.
Cardiovascular Point-of-care echocardiography Central venous and arterial catheterisation Temporary cardiac pacing catheters for atrial, ventricular, or dual-chamber pacing Intra-aortic balloon pumping to stabilize patients with cardiogenic shock Ventricular assist device to aid in the function of the left ventricle, commonly in patients with advanced heart failure Extracorporeal membrane oxygenation Gastro-intestinal tract Feeding tube for artificial nutrition Nasogastric intubation can be used to deliver artificial nutrition, but can also be used to remove stomach and intestinal contents Peritoneal aspiration and lavage to sample fluid in the abdominal cavity Renal Hemofiltration for acute kidney injury Respiratory Mechanical ventilation to assist breathing and oxygenation through an endotracheal tube or tracheotomy (invasive), or a mask or helmet (non-invasive). Thoracentesis or tube thoracostomy to remove fluid or air in the pleural cavity Percutaneous dilatational tracheostomy insertion and ongoing management. Bronchoscopy including lavage. Drugs A wide array of drugs, including but not limited to: inotropes such as norepinephrine, sedatives such as propofol, analgesics such as fentanyl, and neuromuscular blocking agents such as rocuronium and cisatracurium, as well as broad-spectrum antibiotics. Physiotherapy and mobilization Interventions such as early mobilization or exercises to improve muscle strength are sometimes suggested. Common complications in the ICU Intensive care units are associated with an increased risk of various complications that may lengthen a patient's hospitalization. Common complications in the ICU include: Acute renal failure Catheter-associated bloodstream infection Catheter-associated urinary tract infection Delirium Gastrointestinal bleeding Pressure ulcer Venous thromboembolism Ventilator-associated pneumonia Ventilator-induced barotrauma Death Training ICU care requires a high degree of specialization; this need has led to the use of multidisciplinary teams to provide care for patients. Staffing varies between intensive care units by country, hospital, unit, or institution. Medicine Critical care medicine is an increasingly important medical specialty. Physicians with training in critical care medicine are referred to as intensivists. Most medical research has demonstrated that ICU care provided by intensivists produces better outcomes and more cost-effective care. This has led the Leapfrog Group to make a primary recommendation that all ICU patients be managed or co-managed by a dedicated intensivist who is exclusively responsible for patients in one ICU. In Australia In Australia, training in intensive care medicine is through the College of Intensive Care Medicine. In Germany In Germany, the German Society of Anaesthesiology and Intensive Care Medicine is a medical association of professionals in the anesthetics and intensive care fields. It was established in 1955 by members of the German Society of Surgery. In the United Kingdom In the UK, doctors can only enter intensive care medicine training after completing two foundation years and core training in either emergency medicine, anaesthetics, acute medicine or core medicine. Most trainees dual train with one of these specialties; however, it has recently become possible to train purely in intensive care medicine. It is also possible to train in sub-specialties of intensive care medicine, including pre-hospital emergency medicine.
In the United States In the United States, the specialty requires additional fellowship training for physicians having completed their primary residency training in internal medicine, pediatrics, anesthesiology, surgery or emergency medicine. US board certification in critical care medicine is available through all five specialty boards. Intensivists with a primary training in internal medicine sometimes pursue combined fellowship training in another subspecialty such as pulmonary medicine, cardiology, infectious disease, or nephrology. The American Society of Critical Care Medicine is a well-established multi professional society for practitioners working in the ICU including nurses, respiratory therapists, and physicians. Intensive care physicians have some of the highest percentages of physician burnout among all medical specialties, at 48 percent. In South Africa Intensive care training is provided as a fellowship and is awarded as a Sub-Specialty certificate of Critical Care (Cert. Critical Care) which is awarded by the Colleges of Medicine of South Africa. Candidates are eligible to enter sub specialty training after completing specialty training in Anaesthetics, Surgery, Internal Medicine, Obstetrics and Gynaecology, Paediatrics, Cardiothoracic surgery or Neurosurgery. Training usually takes place over 2 years during which time candidates rotate through different ICU's (Medical, Surgical, Paediatric etc.) In India Intensive care medicine (ICM) in India is a rapidly evolving field, responding to the increasing demand for specialized care in critical settings. Training in ICM is offered through various recognized programs that equip healthcare professionals with the necessary skills to manage critically ill patients. Training Programs DM (Doctor of Medicine): A three-year postgraduate degree focusing on critical care management, typically pursued by candidates from internal medicine, anesthesia, or pediatrics. DrNB (Doctorate of National Board): A three-year program recognized by the National Board of Examinations (NBE) that provides specialized training in critical care. The DrNB has replaced the FNB as the primary pathway for intensivist training in India. FNB (Fellowship of National Board): Previously a one- to two-year fellowship aimed at those who had completed a postgraduate degree in related fields. It offered advanced training in critical care, focusing on protocols, advanced life support, and practical experience in critical care units. The FNB has been phased out following the introduction of the DrNB program. IDCCM (Indian Diploma in Critical Care Medicine): A one-year diploma program designed for doctors seeking foundational knowledge in critical care. It is accessible to a broader audience, including those from emergency medicine. IFCCM (Indian Fellowship in Critical Care Medicine): An advanced one-year fellowship for residency graduates, focusing on comprehensive critical care practices. CTCCM (Certificate Course in Critical Care Medicine): A shorter certificate program providing essential training in critical care concepts, suitable for professionals looking to enhance their expertise. Feeder Specialties The feeder specialties for intensive care medicine in India include: Anesthesia: Provides expertise in airway management, sedation, and perioperative care. Pulmonology: Offers specialized knowledge in respiratory management and ventilatory support. Internal Medicine: Contributes a broad understanding of systemic diseases and comprehensive patient management. 
Emergency Medicine: Focuses on acute care and stabilization of critically ill patients, essential for ICM. Nursing Nurses who work in the critical care setting are typically registered nurses. Nurses may pursue additional education and training in critical care medicine, leading to certification as a CCRN by the American Association of Critical-Care Nurses, a standard begun in 1975. In 1997, the American Association of Critical-Care Nurses made these certifications more specific to the patient population, adding pediatric, neonatal, and adult specializations. Nurse practitioners and physician assistants Nurse practitioners and physician assistants are other types of non-physician providers who care for patients in ICUs. These providers have fewer years of in-school training, typically receive further clinical education on the job, and work as part of the team under the supervision of physicians. Pharmacists Critical care pharmacists work with the medical team in many aspects of care, including monitoring serum concentrations of medications, past medication use, current medication use, and medication allergies. They typically round with the team, though this may differ by institution. Some pharmacists, after attaining their doctorate of pharmacy, may pursue additional training in a postgraduate residency and become certified as critical care pharmacists. Pharmacists help manage all aspects of drug therapy and may pursue additional credentialing in critical care medicine as a BCCCP through the Board of Pharmacy Specialties. Many critical care pharmacists are part of the multi-professional Society of Critical Care Medicine. The inclusion of pharmacists decreases adverse drug reactions and poor outcomes for patients. Registered dietitians Nutrition in intensive care units presents unique challenges due to changes in patient metabolism and physiology while critically ill. Critical care nutrition is rapidly becoming a subspecialty for dietitians, who can pursue additional training and achieve certification in enteral and parenteral nutrition through the American Society for Parenteral and Enteral Nutrition (ASPEN). Respiratory therapists Respiratory therapists often work in intensive care units to monitor how well a patient is breathing. Respiratory therapists may pursue additional education and training leading to credentialing in adult critical care (ACCS) and neonatal and pediatric (NPS) specialties. Respiratory therapists are trained to monitor a patient's breathing, provide treatments to help their breathing, evaluate for respiratory improvement, and manage mechanical ventilation parameters. They may be involved in emergency care such as inserting and managing an airway, humidification of oxygen, administering diagnostic lung mechanics tests, invasive or non-invasive mechanical ventilation management, weaning the ventilator, aerosol therapy (including pulmonary vasodilatory medications), inhaled nitric oxide therapy, arterial blood gas analysis, and providing physiotherapy. Additionally, respiratory therapists are commonly involved in ECMO management, and many pursue certification in such therapies due to the intimate relationship of the heart and lungs. Ongoing critical care management of an ECMO patient commonly requires strict ventilator management in relation to the type of ECMO support used. Ethical and medicolegal issues Economics In general, critical care is the most expensive, technologically advanced and resource-intensive area of medical care.
In the United States, estimates of the 2000 expenditure for critical care medicine ranged from US$19–55 billion. During that year, critical care medicine accounted for 0.56% of GDP, 4.2% of national health expenditure and about 13% of hospital costs. In 2011, hospital stays with ICU services accounted for just over one-quarter of all discharges (29.9%) but nearly one-half of aggregate total hospital charges (47.5%) in the United States. The mean hospital charge was 2.5 times higher for discharges with ICU services than for those without.
Biology and health sciences
Fields of medicine
Health
299462
https://en.wikipedia.org/wiki/Remotely%20operated%20underwater%20vehicle
Remotely operated underwater vehicle
A remotely operated underwater vehicle (ROUV) or remotely operated vehicle (ROV) is a free-swimming submersible craft used to perform underwater observation, inspection and physical tasks such as valve operations, hydraulic functions and other general tasks within the subsea oil and gas industry, military, scientific and other applications. ROVs can also carry tooling packages for undertaking specific tasks such as pull-in and connection of flexible flowlines and umbilicals, and component replacement. They are often used for research purposes to visit wrecks at great depths beyond the capacities of submersibles, such as the Titanic, amongst others. Description ROVs differ from remote-control vehicles operating on land or in the air because they are designed specifically to function in underwater environments, where conditions such as high pressure, limited visibility, and the effects of buoyancy and water currents pose unique challenges. While land and aerial vehicles use wireless communication for control, ROVs typically rely on a physical connection, such as a tether or umbilical cable, to transmit power, video, and data signals, ensuring reliable operation even at great depths. The tether also provides a stable means of communication, which is crucial in underwater conditions where radio waves are absorbed quickly by water, making wireless signals ineffective for long-range underwater use. ROVs are unoccupied, usually highly maneuverable, and operated by a crew either aboard a vessel/floating platform or on proximate land. They are common in deepwater industries such as offshore hydrocarbon extraction. They are generally, but not necessarily, linked to a host ship by a neutrally buoyant tether or, often when working in rough conditions or in deeper water, a load-carrying umbilical cable is used along with a tether management system (TMS). The TMS is either a garage-like device which contains the ROV during lowering through the splash zone or, on larger work-class ROVs, a separate assembly mounted on top of the ROV. The purpose of the TMS is to lengthen and shorten the tether so the effect of cable drag where there are underwater currents is minimized. The umbilical cable is an armored cable that contains a group of electrical conductors and fiber optics that carry electric power, video, and data signals between the operator and the TMS. Where used, the TMS then relays the signals and power for the ROV down the tether cable. Once at the ROV, the electric power is distributed between the components of the ROV. However, in high-power applications, most of the electric power drives a high-power electric motor which drives a hydraulic pump. The pump is then used for propulsion and to power equipment such as torque tools and manipulator arms where electric motors would be too difficult to implement subsea. Most ROVs are equipped with at least a video camera and lights. Additional equipment is commonly added to expand the vehicle's capabilities. These may include sonars, magnetometers, a still camera, a manipulator or cutting arm, water samplers, and instruments that measure water clarity, water temperature, water density, sound velocity, and light penetration. Terminology In the professional diving and marine contracting industry, the term remotely operated vehicle (ROV) is used. Classification Submersible ROVs are normally classified into categories based on their size, weight, ability or power.
Some common ratings are: Micro - typically Micro-class ROVs are very small in size and weight. Today's Micro-Class ROVs can weigh less than 3 kg. These ROVs are used as an alternative to a diver, specifically in places where a diver might not be able to physically enter such as a sewer, pipeline or small cavity. Mini - typically Mini-Class ROVs weigh in around 15 kg. Mini-Class ROVs are also used as a diver alternative. One person may be able to transport the complete ROV system out with them on a small boat, deploy it and complete the job without outside help. Some Micro and Mini classes are referred to as "eyeball"-class to differentiate them from ROVs that may be able to perform intervention tasks. General - typically less than 5 HP (propulsion); occasionally small three finger manipulators grippers have been installed, such as on the very early RCV 225. These ROVs may be able to carry a sonar unit and are usually used on light survey applications. Typically the maximum working depth is less than 1,000 metres though one has been developed to go as deep as 7,000 m. Inspection Class - these are typically rugged commercial or industrial use observation and data gathering ROVs - typically equipped with live-feed video, still photography, sonar, and other data collection sensors. Inspection Class ROVs can also have manipulator arms for light work and object manipulation. Light Workclass - typically less than 50 hp (propulsion). These ROVs may be able to carry some manipulators. Their chassis may be made from polymers such as polyethylene rather than the conventional stainless steel or aluminium alloys. They typically have a maximum working depth less than 2000 m. Heavy Workclass - typically less than 220 hp (propulsion) with an ability to carry at least two manipulators. They have a working depth up to 3500 m. Trenching & Burial - typically more than 200 hp (propulsion) and not usually greater than 500 hp (while some do exceed that) with an ability to carry a cable laying sled and work at depths up to 6000 m in some cases. Submersible ROVs may be "free swimming" where they operate neutrally buoyant on a tether from the launch ship or platform, or they may be "garaged" where they operate from a submersible "garage" or "tophat" on a tether attached to the heavy garage that is lowered from the ship or platform. Both techniques have their pros and cons; however very deep work is normally done with a garage. History In the 1970s and '80s the Royal Navy used "Cutlet", a remotely operated submersible, to recover practice torpedoes and mines. RCA (Noise) maintained the "Cutlet 02" System based at BUTEC ranges, whilst the "03" system was based at the submarine base on the Clyde and was operated and maintained by RN personnel. The U.S. Navy funded most of the early ROV technology development in the 1960s into what was then named a "Cable-Controlled Underwater Recovery Vehicle" (CURV). This created the capability to perform deep-sea rescue operation and recover objects from the ocean floor, such as a nuclear bomb lost in the Mediterranean Sea after the 1966 Palomares B-52 crash. Building on this technology base; the offshore oil and gas industry created the work-class ROVs to assist in the development of offshore oil fields. More than a decade after they were first introduced, ROVs became essential in the 1980s when much of the new offshore development exceeded the reach of human divers. 
During the mid-1980s the marine ROV industry suffered from serious stagnation in technological development caused in part by a drop in the price of oil and a global economic recession. Since then, technological development in the ROV industry has accelerated and today ROVs perform numerous tasks in many fields. Their tasks range from simple inspection of subsea structures, pipelines, and platforms, to connecting pipelines and placing underwater manifolds. They are used extensively both in the initial construction of a sub-sea development and the subsequent repair and maintenance. The oil and gas industry has expanded beyond the use of work class ROVs to mini ROVs, which can be more useful in shallower environments. They are smaller in size, oftentimes allowing for lower costs and faster deployment times. Submersible ROVs have been used to identify many historic shipwrecks, including the RMS Titanic, the Bismarck, , the SM U-111, and SS Central America. In some cases, such as the Titanic and the SS Central America, ROVs have been used to recover material from the sea floor and bring it to the surface, the most recent being in July 2024 during a Titanic expedition in recovering artefacts for the first time through a magnetometer. While the oil and gas industry uses the majority of ROVs, other applications include science, military, and salvage. The military uses ROV for tasks such as mine clearing and inspection. Science usage is discussed below. Construction Work-class ROVs are built with a large flotation pack on top of an aluminium chassis to provide the necessary buoyancy to perform a variety of tasks. The sophistication of construction of the aluminum frame varies depending on the manufacturer's design. Syntactic foam is often used for the flotation material. A tooling skid may be fitted at the bottom of the system to accommodate a variety of sensors or tooling packages. By placing the light components on the top and the heavy components on the bottom, the overall system has a large separation between the center of buoyancy and the center of gravity: this provides stability and the stiffness to do work underwater. Thrusters are placed between center of buoyancy and center of gravity to maintain the attitude stability of the robot in maneuvers. Various thruster configurations and control algorithms can be used to give appropriate positional and attitude control during the operations, particularly in high current waters. Thrusters are usually in a balanced vector configuration to provide the most precise control possible. Electrical components can be in oil-filled water tight compartments or one-atmosphere compartments to protect them from corrosion in seawater and being crushed by the extreme pressure exerted on the ROV while working deep. The ROV will be fitted with thrusters, cameras, lights, tether, a frame, and pilot controls to perform basic work. Additional sensors, such as manipulators and sonar, can be fitted as needed for specific tasks. It is common to find ROVs with two robotic arms; each manipulator may have a different gripping jaw. The cameras may also be guarded for protection against collisions. The majority of the work-class ROVs are built as described above; however, this is not the only style in ROV building method. Smaller ROVs can have very different designs, each appropriate to its intended task. Larger ROVs are commonly deployed and operated from vessels, so the ROV may have landing skids for retrieval to the deck. 
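As a rough, editorial illustration of the stability principle described in the construction section above (placing flotation high and heavy components low to separate the centre of buoyancy from the centre of gravity), here is a minimal sketch assuming a fully submerged, neutrally buoyant body; the simple moment formula and all numbers (displaced volume, separations, roll angle) are hypothetical assumptions and not taken from any real ROV design.

```python
import math

# Illustrative sketch (assumptions, not from the article): for a fully
# submerged, neutrally buoyant body, the restoring (righting) moment when
# the vehicle is rolled by an angle theta is approximately
#     M = B * BG * sin(theta)
# where B is the buoyancy force and BG is the vertical separation between
# the centre of buoyancy (flotation pack on top) and the centre of gravity
# (heavy components on the bottom).

RHO_SEAWATER = 1025.0  # kg/m^3, typical seawater density
G = 9.81               # m/s^2

def righting_moment(displaced_volume_m3: float,
                    cob_cog_separation_m: float,
                    roll_angle_deg: float) -> float:
    """Approximate restoring moment (N*m) for a submerged, neutrally buoyant body."""
    buoyancy_force = RHO_SEAWATER * G * displaced_volume_m3  # newtons
    return buoyancy_force * cob_cog_separation_m * math.sin(math.radians(roll_angle_deg))

# Hypothetical vehicle displacing 3 m^3 of seawater, rolled 10 degrees:
for separation in (0.1, 0.3, 0.6):  # metres between CoB and CoG
    moment = righting_moment(3.0, separation, 10.0)
    print(f"BG = {separation:.1f} m -> righting moment ~ {moment:,.0f} N*m")
```

In this sketch, doubling the separation doubles the restoring moment, which is the stiffness the text attributes to mounting flotation on top and heavy gear on the bottom.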
Configurations Remotely operated vehicles have three basic configurations. Each of these brings specific limitations. Open or box frame ROVs - this is the most familiar of the ROV configurations - consisting of an open frame where all the operational sensors, thrusters, and mechanical components are housed. These are useful for free-swimming in light currents (less than 4 knots based upon manufacturer specifications). These are not suitable for towed applications due to their very poor hydrodynamic design. Most Work-Class and Heavy Work-Class ROVs are based upon this configuration. Torpedo-shaped ROVs - this is a common configuration for data gathering or inspection class ROVs. The torpedo shape offers low hydrodynamic resistance, but comes with significant control limitations. The torpedo shape requires high speed (which is why this shape is used for military munitions) to remain positionally and attitudinally stable, but this type is highly vulnerable at high speed. At slow speeds (0–4 knots) it suffers from numerous instabilities, such as tether-induced roll and pitch, and current-induced roll, pitch, and yaw. It has limited control surfaces at the tail or stern, which easily cause overcompensation instabilities. These are frequently referred to as "Tow Fish", since they are more often used as towed ROVs. Tether management ROVs require a tether, or an umbilical (unlike an AUV), in order to transmit power and data between the vehicle and the surface. The size and weight of the tether should be considered: too large a tether will adversely affect the drag of the vehicle, and too small a tether may not be robust enough for lifting requirements during launch and recovery. The tether is typically spooled onto a tether management system (TMS), which helps manage the tether so that it does not become tangled or knotted. In some situations it can be used as a winch to lower or recover the vehicle. Applications Survey Survey or inspection ROVs are generally smaller than work class ROVs and are often sub-classified as either Class I: Observation Only or Class II: Observation with payload. They are used to assist with hydrographic survey, i.e. the location and positioning of subsea structures, and also for inspection work, for example pipeline surveys, jacket inspections and marine hull inspection of vessels. Survey ROVs (also known as "eyeballs"), although smaller than workclass, often have comparable performance with regard to the ability to hold position in currents, and often carry similar tools and equipment - lighting, cameras, sonar, ultra-short baseline (USBL) beacon, Raman spectrometer, and strobe flasher - depending on the payload capability of the vehicle and the needs of the user. Support of diving operations ROV operations in conjunction with simultaneous diving operations are under the overall supervision of the diving supervisor for safety reasons. The International Marine Contractors Association (IMCA) published guidelines for the offshore operation of ROVs in combined operations with divers in the document Remotely Operated Vehicle Intervention During Diving Operations (IMCA D 054, IMCA R 020), intended for use by both contractors and clients. ROVs might be used during submarine rescue operations. Military ROVs have been used by several navies for decades, primarily for minehunting and minebreaking. In October 2008 the U.S.
Navy began to improve its locally piloted rescue systems, based on the Mystic DSRV and support craft, with a modular system, the SRDRS, based on a tethered, occupied ROV called a pressurized rescue module (PRM). This followed years of tests and exercises with submarines from the fleets of several nations. It also uses the uncrewed Sibitzky ROV for disabled submarine surveying and preparation of the submarine for the PRM. The US Navy also uses an ROV called the AN/SLQ-48 Mine Neutralization Vehicle (MNV) for mine warfare. It can operate at a distance from the ship via a connecting cable and can reach considerable depths. The mission packages available for the MNV are known as MP1, MP2, and MP3. The MP1 is a cable cutter to surface a moored mine for recovery, exploitation, or explosive ordnance disposal (EOD). The MP2 is a bomblet of PBXN-103 polymer-bonded high explosive for neutralizing bottom/ground mines. The MP3 is a moored mine cable gripper and a float combined with the MP2 bomblet to neutralize moored mines underwater. The charges are detonated by an acoustic signal from the ship. The AN/BLQ-11 autonomous unmanned undersea vehicle (UUV) is designed for covert mine countermeasure capability and can be launched from certain submarines. The U.S. Navy's ROVs are only on Avenger-class mine countermeasures ships. After the grounding of USS Guardian (MCM-5) and the decommissioning of USS Avenger (MCM-1) and USS Defender (MCM-2), only 11 US minesweepers remain operating in the coastal waters of Bahrain (USS Sentry (MCM-3), USS Devastator (MCM-6), USS Gladiator (MCM-11) and USS Dextrous (MCM-13)), Japan (USS Patriot (MCM-7), USS Pioneer (MCM-9), USS Warrior (MCM-10) and USS Chief (MCM-14)), and California (USS Champion (MCM-4), USS Scout (MCM-8), and USS Ardent (MCM-12)). On August 19, 2011, a Boeing-made robotic submarine dubbed Echo Ranger was being tested for possible use by the U.S. military to stalk enemy waters, patrol local harbors for national security threats, and scour ocean floors to detect environmental hazards. The Norwegian Navy inspected the ship Helge Ingstad using the Norwegian Blueye Pioneer underwater drone. As their abilities grow, smaller ROVs are also increasingly being adopted by navies, coast guards, and port authorities around the globe, including the U.S. Coast Guard and U.S. Navy, the Royal Netherlands Navy, the Norwegian Navy, the Royal Navy and the Saudi Border Guard. They have also been widely adopted by police departments and search and recovery teams. They are useful for a variety of underwater inspection tasks such as explosive ordnance disposal (EOD), meteorology, port security, mine countermeasures (MCM), and maritime intelligence, surveillance, and reconnaissance (ISR). Science ROVs are also used extensively by the scientific community to study the ocean. A number of deep-sea animals and plants have been discovered or studied in their natural environment through the use of ROVs; examples include the jellyfish Stellamedusa ventana and the eel-like halosaurs. In the US, cutting-edge work is done at several public and private oceanographic institutions, including the Monterey Bay Aquarium Research Institute (MBARI), the Woods Hole Oceanographic Institution (WHOI) (with Nereus), and the University of Rhode Island / Institute for Exploration (URI/IFE). In Europe, the Alfred Wegener Institute uses ROVs for Arctic and Antarctic surveys of sea ice, including measuring ice draft, light transmittance, sediments, oxygen, nitrate, seawater temperature, and salinity.
For these purposes, it is equipped with a single- and multibeam sonar, spectroradiometer, manipulator, fluorometer, conductivity/ temperature/depth (salinity measurement) (CTD), optode, and UV-spectrometer. Science ROVs take many shapes and sizes. Since good video footage is a core component of most deep-sea scientific research, research ROVs tend to be outfitted with high-output lighting systems and broadcast quality cameras. Depending on the research being conducted, a science ROV will be equipped with various sampling devices and sensors. Many of these devices are one-of-a-kind, state-of-the-art experimental components that have been configured to work in the extreme environment of the deep ocean. Science ROVs also incorporate a good deal of technology that has been developed for the commercial ROV sector, such as hydraulic manipulators and highly accurate subsea navigation systems. They are also used for underwater archaeology projects such as the Mardi Gras Shipwreck Project in the Gulf of Mexico and the CoMAS project in the Mediterranean Sea. There are several larger high-end systems that are notable for their capabilities and applications. MBARI's Tiburon vehicle cost over $6 million US dollars to develop and is used primarily for midwater and hydrothermal research on the West Coast of the US. WHOI's Jason system has made many significant contributions to deep-sea oceanographic research and continues to work all over the globe. URI/IFE's Hercules ROV is one of the first science ROVs to fully incorporate a hydraulic propulsion system and is uniquely outfitted to survey and excavate ancient and modern shipwrecks. The Canadian Scientific Submersible Facility ROPOS system is continually used by several leading ocean sciences institutions and universities for challenging tasks such as deep-sea vents recovery and exploration to the maintenance and deployment of ocean observatories. Educational outreach The SeaPerch Remotely Operated Underwater Vehicle (ROV) educational program is an educational tool and kit that allows elementary, middle, and high-school students to construct a simple, remotely operated underwater vehicle, from polyvinyl chloride (PVC) pipe and other readily made materials. The SeaPerch program teaches students basic skills in ship and submarine design and encourages students to explore naval architecture and marine and ocean engineering concepts. SeaPerch is sponsored by the Office of Naval Research, as part of the National Naval Responsibility for Naval Engineering (NNRNE), and the program is managed by the Society of Naval Architects and Marine Engineers. Another innovative use of ROV technology was during the Mardi Gras Shipwreck Project. The "Mardi Gras Shipwreck" sank some 200 years ago about 35 miles off the coast of Louisiana in the Gulf of Mexico in of water. The shipwreck, whose real identity remains a mystery, lay forgotten at the bottom of the sea until it was discovered in 2002 by an oilfield inspection crew working for the Okeanos Gas Gathering Company (OGGC). In May 2007, an expedition, led by Texas A&M University and funded by OGGC under an agreement with the Minerals Management Service (now BOEM), was launched to undertake the deepest scientific archaeological excavation ever attempted at that time to study the site on the seafloor and recover artifacts for eventual public display in the Louisiana State Museum. 
As part of the educational outreach Nautilus Productions in partnership with BOEM, Texas A&M University, the Florida Public Archaeology Network and Veolia Environmental produced a one-hour HD documentary about the project, short videos for public viewing and provided video updates during the expedition. Video footage from the ROV was an integral part of this outreach and used extensively in the Mystery Mardi Gras Shipwreck documentary. The Marine Advanced Technology Education (MATE) Center uses ROVs to teach middle school, high school, community college, and university students about ocean-related careers and help them improve their science, technology, engineering, and math skills. MATE's annual student ROV competition challenges student teams from all over the world to compete with ROVs that they design and build. The competition uses realistic ROV-based missions that simulate a high-performance workplace environment, focusing on a different theme that exposes students to many different aspects of marine-related technical skills and occupations. The ROV competition is organized by MATE and the Marine Technology Society's ROV Committee and funded by organizations such as the National Aeronautics and Space Administration (NASA), National Oceanic and Atmospheric Administration (NOAA), and Oceaneering, and many other organizations that recognize the value of highly trained students with technology skills such as ROV designing, engineering, and piloting. MATE was established with funding from the National Science Foundation and is headquartered at Monterey Peninsula College in Monterey, California. List of scientific ROVs Media As cameras and sensors have evolved and vehicles have become more agile and simple to pilot, ROVs have become popular particularly with documentary filmmakers due to their ability to access deep, dangerous, and confined areas unattainable by divers. There is no limit to how long an ROV can be submerged and capturing footage, which allows for previously unseen perspectives to be gained. ROVs have been used in the filming of several documentaries, including Nat Geo's Shark Men and The Dark Secrets of the Lusitania and the BBC Wildlife Special Spy in the Huddle. Due to their extensive use by military, law enforcement, and coastguard services, ROVs have also featured in crime dramas such as the popular CBS series CSI. Hobby With an increased interest in the ocean by many people, both young and old, and the increased availability of once expensive and non-commercially available equipment, ROVs have become a popular hobby amongst many. This hobby involves the construction of small ROVs that generally are made out of PVC piping and often can dive to depths between 50 and 100 feet but some have managed to get to 300 feet. STEM education This new interest in ROVs has led to the formation of many competitions, including MATE (Marine Advanced Technology Education), NURC (National Underwater Robotics Challenge), and RoboSub. These are competitions in which competitors, most commonly schools and other organizations, compete against each other in a series of tasks using ROVs that they have built. Most hobby ROVs are tested in swimming pools and lakes where the water is calm, however some have tested their own personal ROVs in the sea. Doing so, however, creates many difficulties due to waves and currents that can cause the ROV to stray off course or struggle to push through the surf due to the small size of engines that are fitted to most hobby ROVs.
Technology
Naval transport
null
299641
https://en.wikipedia.org/wiki/Epithelium
Epithelium
Epithelium or epithelial tissue is a thin, continuous, protective layer of cells with little extracellular matrix. An example is the epidermis, the outermost layer of the skin. Epithelial (mesothelial) tissues line the outer surfaces of many internal organs, the corresponding inner surfaces of body cavities, and the inner surfaces of blood vessels. Epithelial tissue is one of the four basic types of animal tissue, along with connective tissue, muscle tissue and nervous tissue. Epithelial tissue lacks a blood and lymph supply, but is supplied by nerves. There are three principal shapes of epithelial cell: squamous (scaly), columnar, and cuboidal. These can be arranged in a single layer of cells as simple epithelium, either simple squamous, simple columnar, or simple cuboidal, or in layers of two or more cells deep as stratified (layered), or compound, either squamous, columnar or cuboidal. In some tissues, a layer of columnar cells may appear to be stratified due to the placement of the nuclei. This sort of tissue is called pseudostratified. All glands are made up of epithelial cells. Functions of epithelial cells include diffusion, filtration, secretion, selective absorption, germination, and transcellular transport. Compound epithelium has protective functions. Epithelial layers contain no blood vessels (avascular), so they must receive nourishment via diffusion of substances from the underlying connective tissue, through the basement membrane. Cell junctions are especially abundant in epithelial tissues. Classification Simple epithelium Simple epithelium is a single layer of cells with every cell in direct contact with the basement membrane that separates it from the underlying connective tissue. In general, it is found where absorption and filtration occur. The thinness of the epithelial barrier facilitates these processes. In general, epithelial tissues are classified by the number of their layers and by the shape and function of the cells. The basic cell types are squamous, cuboidal, and columnar, classed by their shape. By layer, epithelium is classed as either simple epithelium, only one cell thick (unilayered), or stratified epithelium having two or more cells in thickness, or multi-layered – as stratified squamous epithelium, stratified cuboidal epithelium, and stratified columnar epithelium, and both types of layering can be made up of any of the cell shapes. However, when taller simple columnar epithelial cells are viewed in cross section showing several nuclei appearing at different heights, they can be confused with stratified epithelia. This kind of epithelium is therefore described as pseudostratified columnar epithelium. Transitional epithelium has cells that can change from squamous to cuboidal, depending on the amount of tension on the epithelium. Stratified epithelium Stratified or compound epithelium differs from simple epithelium in that it is multilayered. It is therefore found where body linings have to withstand mechanical or chemical insult such that layers can be abraded and lost without exposing subepithelial layers. Cells flatten as the layers become more apical, though in their most basal layers, the cells can be squamous, cuboidal, or columnar. Stratified epithelia (of columnar, cuboidal, or squamous type) can have further specializations. Structure Epithelial tissue cells can adopt shapes of varying complexity from polyhedral to scutoidal to punakoidal. They are tightly packed and form a continuous sheet with almost no intercellular spaces. 
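The layer-and-shape naming scheme described above is essentially a small lookup, as the short sketch below illustrates. This sketch is not part of the source article: the function name and structure are invented for the example, and special cases such as pseudostratified and transitional epithelium are deliberately left out.

```python
# Illustrative sketch of the classification terminology described above.
# The function name is invented for this example; special cases such as
# pseudostratified and transitional epithelium are omitted.

CELL_SHAPES = {"squamous", "cuboidal", "columnar"}

def name_epithelium(layer_count: int, cell_shape: str) -> str:
    """Combine layer count and cell shape into the conventional tissue name."""
    if cell_shape not in CELL_SHAPES:
        raise ValueError(f"unknown cell shape: {cell_shape}")
    if layer_count < 1:
        raise ValueError("an epithelium has at least one layer of cells")
    prefix = "simple" if layer_count == 1 else "stratified"
    return f"{prefix} {cell_shape} epithelium"

print(name_epithelium(1, "squamous"))   # simple squamous epithelium
print(name_epithelium(3, "cuboidal"))   # stratified cuboidal epithelium
```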
All epithelia are usually separated from underlying tissues by an extracellular fibrous basement membrane. The linings of the mouth, lung alveoli and kidney tubules are all made of epithelial tissue. The linings of the blood and lymphatic vessels are of a specialised form of epithelium called endothelium. Location Epithelium lines both the outside (skin) and the inside cavities and lumina of bodies. The outermost layer of human skin is composed of dead stratified squamous, keratinized epithelial cells. Tissues that line the inside of the mouth, the esophagus, the vagina, and part of the rectum are composed of nonkeratinized stratified squamous epithelium. Other surfaces that separate body cavities from the outside environment are lined by simple squamous, columnar, or pseudostratified epithelial cells. Other epithelial cells line the insides of the lungs, the gastrointestinal tract, the reproductive and urinary tracts, and make up the exocrine and endocrine glands. The outer surface of the cornea is covered with fast-growing, easily regenerated epithelial cells. A specialised form of epithelium, endothelium, forms the inner lining of blood vessels and the heart, and is known as vascular endothelium, and lining lymphatic vessels as lymphatic endothelium. Another type, mesothelium, forms the walls of the pericardium, pleurae, and peritoneum. In arthropods, the integument, or external "skin", consists of a single layer of epithelial ectoderm from which arises the cuticle, an outer covering of chitin, the rigidity of which varies as per its chemical composition. Basement membrane The basal surface of epithelial tissue rests on a basement membrane and the free/apical surface faces body fluid or outside. The basement membrane acts as a scaffolding on which epithelium can grow and regenerate after injuries. Epithelial tissue has a nerve supply, but no blood supply and must be nourished by substances diffusing from the blood vessels in the underlying tissue. The basement membrane acts as a selectively permeable membrane that determines which substances will be able to enter the epithelium. The basal lamina is made up of laminin (glycoproteins) secreted by epithelial cells. The reticular lamina beneath the basal lamina is made up of collagen proteins secreted by connective tissue. Cell junctions Cell junctions are especially abundant in epithelial tissues. They consist of protein complexes and provide contact between neighbouring cells, between a cell and the extracellular matrix, or they build up the paracellular barrier of epithelia and control the paracellular transport. Cell junctions are the contact points between the plasma membranes of neighbouring tissue cells. There are five main types of cell junctions: tight junctions, adherens junctions, desmosomes, hemidesmosomes, and gap junctions. Tight junctions are pairs of transmembrane proteins fused on the outer plasma membrane. Adherens junctions are a plaque (a protein layer on the inside of the plasma membrane) which attaches the microfilaments of both cells. Desmosomes attach to the intermediate filaments of the cytoskeleton, which are made up of keratin protein. Hemidesmosomes resemble desmosomes in section. They are made up of integrin (a transmembrane protein) instead of cadherin. They attach the epithelial cell to the basement membrane. Gap junctions connect the cytoplasm of two cells and are made up of proteins called connexins (six of which come together to make a connexon). 
Development Epithelial tissues are derived from all of the embryological germ layers: from ectoderm (e.g., the epidermis); from endoderm (e.g., the lining of the gastrointestinal tract); from mesoderm (e.g., the inner linings of body cavities). However, pathologists do not consider endothelium and mesothelium (both derived from mesoderm) to be true epithelium. This is because such tissues present very different pathology. For that reason, pathologists label cancers in endothelium and mesothelium sarcomas, whereas true epithelial cancers are called carcinomas. Additionally, the filaments that support these mesoderm-derived tissues are very distinct. Outside of the field of pathology, it is generally accepted that the epithelium arises from all three germ layers. Cell turnover Epithelia turn over at some of the fastest rates in the body. For epithelial layers to maintain constant cell numbers essential to their functions, the number of cells that divide must match those that die. They do this mechanically. If there are too few cells, the stretch that they experience rapidly activates cell division. Alternatively, when too many cells accumulate, crowding triggers their death by activating epithelial cell extrusion. Here, cells fated for elimination are seamlessly squeezed out by contracting a band of actin and myosin around and below the cell, preventing any gaps from forming that could disrupt their barriers. Failure of this process can result in aggressive tumors, whose cells invade through aberrant basal cell extrusion. Functions Epithelial tissues have as their primary functions: to protect the tissues that lie beneath from radiation, desiccation, toxins, invasion by pathogens, and physical trauma; to regulate and exchange chemicals between the underlying tissues and a body cavity; to secrete hormones into the circulatory system, as well as sweat, mucus, enzymes, and other products that are delivered by ducts; to provide sensation; and to absorb water and digested food in the lining of the digestive canal. Glandular tissue Glandular tissue is the type of epithelium that forms the glands from the infolding of epithelium and subsequent growth in the underlying connective tissue. They may be specialized columnar or cuboidal tissues containing goblet cells, which secrete mucus. There are two major classifications of glands, endocrine glands and exocrine glands. Endocrine glands secrete their product into the extracellular space where it is rapidly taken up by the circulatory system. Exocrine glands secrete their products into a duct that then delivers the product to the lumen of an organ or onto the free surface of the epithelium. Their secretions include tears, saliva, oil (sebum), enzymes, digestive juices, sweat, etc. Sensing the extracellular environment Some epithelial cells are ciliated, especially in respiratory epithelium, and they commonly exist as a sheet of polarised cells forming a tube or tubule with cilia projecting into the lumen. Primary cilia on epithelial cells provide chemosensation, thermoception, and mechanosensation of the extracellular environment by playing "a sensory role mediating specific signalling cues, including soluble factors in the external cell environment, a secretory role in which a soluble protein is released to have an effect downstream of the fluid flow, and mediation of fluid flow if the cilia are motile." 
Host immune response Epithelial cells express many genes that encode immune mediators and proteins involved in cell-cell communication with hematopoietic immune cells. The resulting immune functions of these non-hematopoietic, structural cells contribute to the mammalian immune system ("structural immunity"). Relevant aspects of the epithelial cell response to infections are encoded in the epigenome of these cells, which enables a rapid response to immunological challenges. Clinical significance Epithelial cells can be infected by intracellular pathogens such as Chlamydia pneumoniae, which forms inclusion bodies within the infected cell. Epithelium grown in culture can be identified by examining its morphological characteristics. Epithelial cells tend to cluster together, and have a "characteristic tight pavement-like appearance". But this is not always the case, such as when the cells are derived from a tumor. In these cases, it is often necessary to use certain biochemical markers to make a positive identification. The intermediate filament proteins in the cytokeratin group are almost exclusively found in epithelial cells, so they are often used for this purpose. Cancers originating from the epithelium are classified as carcinomas. In contrast, sarcomas develop in connective tissue. When epithelial cells or tissues are damaged from cystic fibrosis, sweat glands are also damaged, causing a frosty coating of the skin. Etymology and pronunciation The word epithelium uses the Greek roots ἐπί (epi), "on" or "upon", and θηλή (thēlē), "nipple". Epithelium is so called because the name was originally used to describe the translucent covering of small "nipples" of tissue on the lip. The word has both mass and count senses; the plural form is epithelia.
Biology and health sciences
Tissues
null
2703136
https://en.wikipedia.org/wiki/Trans-Mongolian%20Railway
Trans-Mongolian Railway
The Trans-Mongolian Railway connects Ulan-Ude on the Trans-Siberian Railway in Buryatia, Russia, with Ulanqab in Inner Mongolia, China, via Ulaanbaatar, the capital of Mongolia. It was completed in 1956, and runs from northwest to southeast with major stations at Naushki/Sükhbaatar on the Russian border, Darkhan, Züünkharaa, Choir, Sainshand, and Zamyn-Üüd/Erenhot on the Chinese border, where the railway changes from single-track to double-track and its gauge changes from 1,520 mm Russian gauge to 1,435 mm standard gauge. The railway also has important branch lines to Erdenet and Baganuur. History Railway development came late to Mongolia. In 1937, a line was built from Ulan-Ude in the Soviet Union to Naushki on the border with Mongolia. In 1939, a paved road was extended to Ulaanbaatar, the country's capital. Construction of a rail line from Naushki to Ulaanbaatar was delayed by World War II, and completed in November 1949. The Soviet Union, Mongolia, and the People's Republic of China agreed to extend the line from Ulaanbaatar to the Chinese border. In Mongolia, the railway was built by the Soviet 505th Penal Unit, made up of soldiers mainly imprisoned for surrendering during the war. The railway was opened by Inner Mongolian leader Ulanhu on 1 January 1956. In 1958, the railway switched to diesel engines and automated switching. Branches were built to the coal mines at Sharyngol in 1963 and at Baganuur in 1982, the copper mine at Erdenet in 1975, the fluorspar mine at Bor-Öndör in 1987, and the oil refinery at Züünbayan. Modernization in the 1990s replaced some old Soviet-made locomotives with more powerful American models, and installed fiber-optic trackside cables for communications and signaling. In 2022, lines opened linking the branch at Züünbayan with Khangi on the Chinese border, and the coal mines at Tavan Tolgoi with Gashuun Sukhait on the border. A new line linking Züünbayan with Tavan Tolgoi is under construction. Operation The Mongolian section of the railway (as of 2017) is managed by UBTZ (the Ulaanbaatar Railway Company), a 50/50 Russian–Mongolian joint-stock company. Rail transport in Mongolia, which also includes the unconnected Choibalsan–Borzya line built in 1938–39, in 1998 carried 96 percent of the country's freight transportation and 55 percent of passenger traffic. In Mongolia it is mostly single-tracked, with some 60 stations and double-tracked passing sidings. At Erenhot station in Inner Mongolia, the railway's Russian gauge track meets with China's standard gauge. There are trans-shipping facilities and rolling-stock equipment for bogie exchange. As of 2000, the railway had nine container terminals, the largest at Zamyn-Üüd, and UBTZ operated 60 locomotives, 300 passenger cars, and 2,400 freight wagons, including 140 container wagons. The primary international service on the railway is the China Railway K3/4 train, which began service in 1959 and connects Beijing with Moscow. 
Proposed lines A 2010 Mongolian government plan proposed new track, primarily to connect Dalanzadgad and Choibalsan, to be built in three stages: the first stage, linking Dalanzadgad–Tavan Tolgoi mine–Tsagaan Suvarga mine–Züünbayan, Sainshand–Baruun-Urt, Baruun-Urt–Khööt mine, and Khööt–Choibalsan; the second stage, connecting the first stage with the Chinese border by linking Nariin Sukhait mine–Shivee Khüren, Tavan Tolgoi–Gashuun Sukhait, Khööt–Tamsagbulag–Nömrög, and Khööt–Bichigt; and the third stage, not described in detail, but including a link with Tsagaannuur on the Russian border and a line from Ulaanbaatar to Kharkhorin. In 2012, a line connecting Erdenet–Mörön–Ovoot mine–Arts Suuri on the Russian border was approved, but never built. In 2014, it was announced that the planned Tavan Tolgoi–Gashuun Sukhait and Khööt–Bichigt lines were to be of Chinese gauge, while the Dalanzadgad–Choibalsan, Khööt–Nömrög, and Erdenet–Artssuuri lines were to be of Russian gauge. In 2016, a line linking Züünbayan to Khangi on the Chinese border was approved; it was completed in 2023. A 2017 government plan, greatly reduced in scope from the 2010 one, proposed linking Khööt–Choibalsan, Nariin Sukhait–Shivee Khüren, Khööt–Bichigt, and Züünbayan–Khangi.
Technology
Railway lines
null
4962816
https://en.wikipedia.org/wiki/Picul
Picul
A picul, dan, or tam is a traditional Asian unit of weight, defined as "as much as a man can carry on a shoulder-pole". Historically, it was defined as equivalent to 100 or 120 catties, depending on time and region. The picul is most commonly used in southern China and Maritime Southeast Asia. History The unit originated in China during the Qin dynasty (221–206 BC), where it was known as the shi (石 "stone"). During the Han dynasty, one stone was equal to 120 catties. Government officials were paid in grain, counted in stones, with top-ranked ministers being paid 2000 stones. As a unit of measurement, the word shi (石) can also be pronounced dan. To avoid confusion, the character is sometimes changed to 擔 (dàn), meaning "burden" or "load". Likewise, in Cantonese the word is pronounced sek (石) or daam (擔), and in Hakka it is pronounced tam (擔). The word picul appeared as early as the mid-9th century in Javanese. In modern Malay, pikul is also a verb meaning 'to carry on the shoulder'. In the early days of Hong Kong as a British colony, the stone (石, with a Cantonese pronunciation given as shik) was used as a measurement of weight equal to 120 catties, alongside the picul of 100 catties. It was made obsolete by subsequent overriding legislation in 1885, which included the picul but not the stone, to avoid confusion with European-origin measures that are similarly called stone. Following Spanish, Portuguese, British and most especially the Dutch colonial maritime trade, the term picul was both a convenient unit and a lingua franca unit that was widely understood and employed by Austronesian peoples (in modern Malaysia and the Philippines) in their centuries-old trading relations with Indians, Chinese and Arabs. It remained a convenient reference unit for many commercial trade journals in the 19th century. One example is Hunt's Merchant Magazine of 1859, which gives detailed tables of expected prices of various commodities, such as coffee; for instance, one picul of Javanese coffee could be expected to be bought for 8 to 8.50 Spanish dollars in Batavia and Singapore. Definitions As for any traditional measurement unit, the exact definition of the picul varied historically and regionally. In imperial China and later, the unit was used for a measure equivalent to 100 catties. In 1831, the Dutch East Indies authorities acknowledged local variances in the definition of the pikul. In Hong Kong, one picul was defined in avoirdupois pounds by Ordinance No. 22 of 1844. The modern definition is exactly 60.478982 kilograms. The measure was and remains used on occasion in Taiwan, where it is defined as 60 kg. The last, a measure of rice, was 20 picul, or 1,200 kg.
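As a worked illustration of the definitions above, the short sketch below converts piculs to kilograms using the Hong Kong and Taiwan values quoted in the text. It is only an example: the lookup table and function name are invented for it, and other regional definitions would need their own entries.

```python
# Illustrative conversion sketch using the figures quoted above; the mapping
# and function name are invented for this example.

PICUL_IN_KG = {
    "hong_kong": 60.478982,  # modern exact definition (100 catties)
    "taiwan": 60.0,          # definition used in Taiwan
}

def piculs_to_kg(piculs: float, region: str = "hong_kong") -> float:
    """Convert a weight in piculs to kilograms under a regional definition."""
    return piculs * PICUL_IN_KG[region]

print(piculs_to_kg(1))             # 60.478982 -> one Hong Kong picul in kg
print(piculs_to_kg(20, "taiwan"))  # 1200.0 -> a "last" of rice (20 piculs)
```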
Physical sciences
Chinese
Basics and measurement
4963211
https://en.wikipedia.org/wiki/Silver%20carp
Silver carp
The silver carp or silverfin (Hypophthalmichthys molitrix) is a species of freshwater cyprinid fish, a variety of Asian carp native to China and eastern Siberia, from the Amur River drainage in the north to the Xi Jiang River drainage in the south. Although a threatened species in its natural habitat, it has long been cultivated in China as one of the "Four Famous Domestic Fish" (四大家鱼) together with bighead carp, black carp and grass carp. By weight, more silver carp are produced worldwide in aquaculture than any other species of fish except for the grass carp. Silver carp are usually farmed in polyculture with other Asian carp, or sometimes with catla or other species. The species has also been introduced, or spread by connected water, to numerous countries around the world. Importation was generally for use in aquaculture, but enhancement of wild fisheries and water quality control have also been intended on occasion. In some of these places, the species is considered invasive. Diet The silver carp is a filter feeder, and possesses a specialized feeding apparatus capable of filtering particles as small as 4 μm. The gill rakers are fused into a sponge-like filter, and an epibranchial organ secretes mucus, which assists in trapping small particles. A strong buccal pump forces water through this filter. Silver carp, like all Hypophthalmichthys species, have no stomachs; they are thought to feed more or less constantly, largely on phytoplankton, and also consume zooplankton and detritus. In places where this plankton-feeding species has been introduced, it is thought to compete with native planktivorous fishes, which in North America include paddlefish (Polyodon spathula), bigmouth buffalo (Ictiobus cyprinellus), gizzard shad (Dorosoma cepedianum), and young fish of almost all species. Because they feed on plankton, they are sometimes successfully used for controlling water quality, especially in the control of noxious blue-green algae (cyanobacteria). Certain species of blue-green algae, notably the often toxic Microcystis, can pass through the gut of silver carp unharmed, picking up nutrients in the process. Thus, in some cases, blue-green algae blooms have been exacerbated by silver carp, and Microcystis has also been shown to produce more toxins in the presence of silver carp. These carp, which have natural defenses to their toxins, sometimes can contain enough algal toxins in their systems to become hazardous to eat. Ecology and conservation Within its natural range, the silver carp migrates upstream to spawn; eggs and larvae then drift downstream, and young fish hatch in the floodplain zone. Larvae and small juveniles feed on zooplankton, switching to phytoplankton once a certain size is reached. The species is somewhat sensitive to low oxygen conditions. The species is currently classified as near threatened in its original range, as its habitat and reproductive behavior are impacted by construction of dams, pollution, and overfishing. Population declines appear to have been particularly significant in the Chinese parts of its range. Sport fishing Silver carp are filter feeders, thus are difficult to catch on typical hook-and-line gear. Special methods have been developed for these fish, the most important being the "suspension method", usually consisting of a large dough ball that disintegrates slowly, surrounded by a nest of tiny hooks embedded in the bait. 
The entire apparatus is suspended below a large bobber. The fish feed on the small particles released from the dough ball and bump against the dough ball, with the intention of breaking off more small particles that can be filtered from the water, eventually becoming hooked on the tiny hooks. In some areas, using "snagging gear", in which large weighted treble hooks are jerked through the water, is legal to snag the fish. In the United States, silver carp are also popular targets for bowfishing; they are shot both in the water and in the air. In the latter case, powerboats are used to scare the fish and entice them to jump out of the water, and the fish are shot when they are airborne. Related species Two other species are in the genus Hypophthalmichthys, the bighead carp (H. nobilis) and the largescale silver carp (H. harmandi). The genus name Aristichthys has also sometimes been used for bighead carp, but is deprecated. The bighead carp differs from the silver carp in its behavior (it does not leap from the water when startled) and also in its diet. Bighead carp are also filter feeders, but they filter larger particles than silver carp, and in general consume a greater proportion of zooplankton in their diets than silver carp, which consume more phytoplankton. In at least some parts of the United States, bighead and silver carp hybridize in the wild and produce fertile offspring. The largescale silver carp is closely related to the silver carp, but its native range is to the south of that of the silver carp, mostly within Vietnam. Unlike bighead and silver carp, largescale silver carp have not been widely introduced around the world for use in aquaculture, although at least one introduction was made to some waters of the Soviet Union, where they hybridized with the introduced silver carp. In North America Silver carp were imported to North America in the 1970s to control algal growth in aquaculture and municipal wastewater treatment facilities, but escaped from captivity soon after their importation, and are now considered a highly invasive species. Silver carp, with the closely related bighead carp, often reach extremely high population densities, and are known to have undesirable effects on the local environments and native species, including the bigmouth buffalo. They have spread into the Mississippi, Illinois, Ohio, Missouri, Tennessee, Wabash Rivers, and many of their tributaries in the United States, and are abundant in the Mississippi catchment from Louisiana to South Dakota and Illinois. Dams seem to have slowed their advance up the Mississippi River, and until late November 2008, silver carp had not been captured north of central Iowa on the Mississippi. Dams that do not have navigation locks are complete barriers to natural upstream movement of silver carp, unless fishermen unintentionally assist this movement by the use of silver carp as bait. In 2020, Alabama Department of Conservation and Natural Resources found silver carp in Alabama's Pickwick and Wheeler reservoirs on the Tennessee River, but the species has not expanded its range in Alabama’s waterways. The Tennessee Valley Authority (TVA) has considered several methods to control the spread of Asian carp, including fish barriers at 10 locks controlled by the TVA. One is a bioacoustics fish fence, which uses a combination of sound, light and air bubbles. These barriers are installed at Barkley Lock and Dam in Kentucky, and are currently being studied for their effectiveness in deterring Asian carp. 
Other types of barriers used for Asian carp include carbon dioxide and electricity. The TVA has conducted environmental impact studies to minimize the impact of the barriers on native species. The TVA has also considered adjusting flow rates during Asian carp spawning periods, which are usually during high-water events, as Asian carp eggs are only semibuoyant and will sink to the bottom and die with low river flow. The silver carp is sometimes called the "flying" carp for its tendency to leap from the water when startled; it can leap high into the air. Boaters traveling in uncovered high-speed watercraft have been reported to be injured by running into airborne fish while at speed. A leaping silver carp broke the jaw of a teenager being pulled on an inner tube, and water skiing in areas where silver carp are present is extremely dangerous. Peculiarly, the extreme jumping behavior appears to be unique to silver carp of North America; those in their native Asian range and introduced to other parts of the world are much less prone to jumping. Although theories have been proposed (for example, the high densities the species reaches in parts of North America, or that the introduced North American population may have been based on a small number of particularly "jumpy" individuals), the reason for these geographic differences is not known for certain.
Biology and health sciences
Cypriniformes
Animals
25480850
https://en.wikipedia.org/wiki/Dog%20flea
Dog flea
The dog flea (Ctenocephalides canis) is a species of flea that lives as an ectoparasite on a wide variety of mammals, particularly the domestic dog and cat. It closely resembles the cat flea, Ctenocephalides felis, which can live on a wider range of animals and is generally more prevalent worldwide. The dog flea is troublesome because it can spread Dipylidium caninum. Although they feed on the blood of dogs and cats, they sometimes bite humans. They can live without food for several months, but females must have a blood meal before they can produce eggs. A female can lay about 4,000 eggs on the host's fur. The flea then goes through four lifecycle stages: embryo, larva, pupa, and imago (adult). This whole life cycle from egg to adult takes from two to three weeks, although this depends on the temperature. It may take longer in cool conditions. Anatomy The dog flea's mouthparts are adapted for piercing skin and sucking blood. Dog fleas are external parasites, living by hematophagy off the blood of dogs. The dog often experiences severe itching in all areas where the fleas may reside. Fleas do not have wings and their hard bodies are compressed laterally and have hairs and spines, which makes it easy for them to travel through hair. They have relatively long hind legs for jumping. The dog flea can be distinguished from the very similar cat flea by its head, which is anteriorly rounded rather than elongate, and the tibiae of its hind legs, which exhibit eight setae-bearing notches rather than six. Signs and symptoms Flea infestations can be not only annoying for dogs, cats, and humans, but also dangerous. Problems caused by fleas may range from mild to severe itching and discomfort to skin problems and infections. Anemia may also result from flea bites in extreme circumstances. Furthermore, fleas can transmit tapeworms and diseases to pets. When fleas bite humans, the person may develop an itching rash with small bumps that may bleed. This rash is usually located on the armpit or fold of a joint such as the elbow, knee, or ankle. When the area is pressed, it turns white. When dogs are troubled by fleas, they scratch and bite themselves, especially in areas such as the head, neck, and around the tail. Fleas normally concentrate in such areas. This incessant scratching and biting may cause the dog's skin to become red and inflamed. This is easily noticeable when the fur has been parted and the dog's skin is exposed. Flea allergy dermatitis develops in dogs allergic to flea saliva. In this case, the symptoms previously mentioned are more pronounced. Because of compulsive scratching and biting, the dog may lose hair, get bald spots, exhibit hot spots due to extreme irritation, and develop infections that result in smelly skin. Treatment and prevention Preventing and controlling flea infestations is a multi-step process. Prevention in the case of flea infestations can sometimes be difficult, but is the most effective way to ensure the dog will not get reinfected. Controlling flea infestations implies not only that the pet has been cured and the fleas living on it killed, but also that the environment in which the pet lives is free of these parasites. Of all these, removing the fleas from the pet may be the easiest and simplest step given the many products especially designed to kill fleas available on the market. Every female flea on the pet is likely to have laid eggs in the environment in which the pet lives. 
Therefore, effective prevention and control of flea infestations involves the removal of the fleas from both indoor and outdoor environments, from all pets, and not allowing immature forms of fleas to develop. Removing fleas from indoor environments begins with removing them mechanically. This can be done by a thorough vacuuming, especially in places where fleas are more likely to be found, such as below drapes, the place where the pet sleeps, and under furniture edges. Vacuuming can remove an estimated 50% of flea eggs. After vacuuming, using a specially designed product is recommended to kill the remaining fleas and to stop the development of eggs and larvae. The products available on the market may include carpet powders, sprays or foggers, which contain insecticides that kill adult fleas and insect growth regulators. Special attention should be paid to the dog's bedding. This should be washed every week; the bed and surrounding areas should also be treated with adult insecticides and insect growth regulators. Cleaning should be done at the same time in the cars, garage, pet carrier, basement, or any other place where the dog is known to spend time. Preventing flea infestations must include eliminating the parasites from the yard or kennel areas, the two places where fleas are most likely to live. Dog houses, patios or porches are some of the outdoor areas in which fleas are more likely to be found, and these should be thoroughly cleaned. Fleas can also be carried by wild animals, such as opossums, chipmunks and raccoons. Owners are advised to discourage these wild animals from their property and pets by never feeding them. Flea-control products are available as once-a-month topicals, dog collars, sprays, dips, powders, shampoos, and injectable and oral products. Many of these products contain an insecticide as an active ingredient, which kills adult fleas on contact. Fleas absorb the insecticide, which either paralyzes them or kills them. Other products do not target adult fleas at all, but instead prevent the flea eggs from hatching, thus breaking the life cycle. A very important part of flea prevention is to persist with the same control measures for as long as possible. Though the initial cleaning process may be thorough, fleas in incipient stages likely still exist around the house or on the pet. The life cycle of fleas can take up to one year, so maintaining the prevention measures for as long as half a year is recommended.
Biology and health sciences
Insects: General
Animals
18835541
https://en.wikipedia.org/wiki/Narcolepsy
Narcolepsy
Narcolepsy is a chronic neurological disorder that impairs the ability to regulate sleep–wake cycles, and specifically impacts REM (rapid eye movement) sleep. The pentad symptoms of narcolepsy include excessive daytime sleepiness (EDS), sleep-related hallucinations, sleep paralysis, disturbed nocturnal sleep (DNS), and cataplexy. People with narcolepsy tend to sleep about the same number of hours per day as people without it, but the quality of sleep is typically compromised. There are two recognized forms of narcolepsy, narcolepsy type 1 and type 2. Narcolepsy type 1 (NT1) can be clinically characterized by symptoms of EDS and cataplexy, and/or will have cerebrospinal fluid (CSF) orexin levels of less than 110 pg/ml. Cataplexy refers to transient episodes of aberrant muscle tone, most typically loss of tone, that can be associated with strong emotion. In pediatric-onset narcolepsy, active motor phenomena are not uncommon. Cataplexy may be mistaken for syncope, tics, or seizures. Narcolepsy type 2 (NT2) does not have features of cataplexy and CSF orexin levels are normal. Sleep-related hallucinations, also known as hypnagogic (when going to sleep) and hypnopompic (on awakening) hallucinations, are vivid hallucinations that can be auditory, visual, or tactile and may occur independently of or in combination with an inability to move (sleep paralysis). Narcolepsy is a clinical syndrome of hypothalamic disorder, but the exact cause of narcolepsy is unknown, with potentially several causes. A leading consideration for the cause of narcolepsy type 1 is that it is an autoimmune disorder. The proposed autoimmune pathophysiology suggests antigen presentation by DQ0602 to specific CD4+ T cells, resulting in CD8+ T-cell activation and consequent injury to orexin-producing neurons. Familial trends of narcolepsy are suggested to be higher than previously appreciated. Familial risk of narcolepsy among first-degree relatives is high. Relative risk for narcolepsy in a first-degree relative has been reported to be 361.8. However, this study found a spectrum of presentations, ranging from asymptomatic individuals with abnormal sleep test findings to significantly symptomatic individuals. The autoimmune process is thought to be triggered in genetically susceptible individuals by an immune-provoking experience, such as infection with H1N1 influenza. Secondary narcolepsy can occur as a consequence of another neurological disorder. Secondary narcolepsy can be seen in some individuals with traumatic brain injury, tumors, Prader–Willi syndrome or other diseases affecting the parts of the brain that regulate wakefulness or REM sleep. Diagnosis is typically based on the symptoms and sleep studies, after excluding alternative causes of EDS. EDS can also be caused by other conditions, such as insufficient sleep syndrome, sleep apnea, major depressive disorder, anemia, heart failure, and alcohol use. While there is no cure, behavioral strategies, lifestyle changes, social support and medications may help. Lifestyle and behavioral strategies can include identifying and avoiding or desensitizing emotional triggers for cataplexy, dietary strategies that may reduce sleep-inducing foods and drinks, scheduled or strategic naps, and maintaining a regular sleep–wake schedule. Social support, social networks, and social integration are resources that may be found in communities of people living with narcolepsy. Medications used to treat narcolepsy primarily target EDS and/or cataplexy. 
These medications include alerting agents (e.g., modafinil, armodafinil, pitolisant, solriamfetol), oxybate medications (e.g., twice nightly sodium oxybate, twice nightly mixed oxybate salts, and once nightly extended-release sodium oxybate), and other stimulants (e.g., methylphenidate, amphetamine). Antidepressants such as tricyclic antidepressants, selective serotonin reuptake inhibitors (SSRIs), and serotonin–norepinephrine reuptake inhibitors (SNRIs) are also used for the treatment of cataplexy. Estimates of frequency range from 0.2 to 600 per 100,000 people in various countries. The condition often begins in childhood, with males and females being affected equally. Untreated narcolepsy increases the risk of motor vehicle collisions and falls. Narcolepsy generally occurs anytime between early childhood and 50 years of age, and most commonly between 15 and 36 years of age. However, it may also rarely appear at any time outside of this range. Signs and symptoms There are two main characteristics of narcolepsy: excessive daytime sleepiness and abnormal REM sleep. Excessive daytime sleepiness occurs even after adequate night time sleep. A person with narcolepsy is likely to become drowsy or fall asleep, often at inappropriate or undesired times and places, or just be very tired throughout the day. Narcoleptics may not be able to experience the amount of restorative deep sleep that healthy people experience due to abnormal REM regulation – they are not "over-sleeping." Narcoleptics typically have higher REM sleep density than non-narcoleptics, but also experience more REM sleep without atonia. Many narcoleptics have sufficient REM sleep, but do not feel refreshed or alert throughout the day. This can feel like living their entire lives in a constant state of sleep deprivation. Excessive sleepiness can vary in severity, and it appears most commonly during monotonous situations that do not require much interaction. Daytime naps may occur with little warning and may be physically irresistible. These naps can occur several times a day. They are typically refreshing, but only for a few hours or less. Vivid dreams may be experienced on a regular basis, even during very brief naps. Drowsiness may persist for prolonged periods or remain constant. In addition, night-time sleep may be fragmented, with frequent awakenings. A second prominent symptom of narcolepsy is abnormal REM sleep. Narcoleptics are unique in that they enter into the REM phase of sleep at the beginning of sleep, even when sleeping during the day. The classic symptoms of the disorder, often referred to as the "tetrad of narcolepsy", are cataplexy, sleep paralysis, hypnagogic hallucinations, and excessive daytime sleepiness. Other symptoms may include automatic behaviors and night-time wakefulness. These symptoms may not occur in all people with narcolepsy. Cataplexy is an episodic loss of muscle function, ranging from slight weakness such as limpness at the neck or knees, sagging facial muscles, weakness at the knees often referred to as "knee buckling", or inability to speak clearly, to a complete body collapse. Episodes may be triggered by sudden emotional reactions such as laughter, anger, surprise, or fear. The person remains conscious throughout the episode. In some cases, cataplexy may resemble epileptic seizures. Usually speech is slurred and vision is impaired (double vision, inability to focus), but hearing and awareness remain normal. 
Cataplexy also has a severe emotional impact on narcoleptics, as it can cause extreme anxiety, fear, and avoidance of people or situations that might elicit an attack. Cataplexy is generally considered to be unique to narcolepsy and is analogous to sleep paralysis in that the usually protective paralysis mechanism occurring during sleep is inappropriately activated. The opposite of this situation (failure to activate this protective paralysis) occurs in rapid eye movement behavior disorder. Other symptoms include periods of wakefulness at night; sleep paralysis, the temporary inability to talk or move when waking (or, less often, when falling asleep), which may last a few seconds to minutes and, while often frightening, is not dangerous; and hypnagogic hallucinations, vivid, often frightening, dreamlike experiences that occur while dozing or falling asleep (hypnopompic hallucinations refer to the same sensations while awakening from sleep). These hallucinations may manifest in the form of visual or auditory sensations. In most cases, the first symptom of narcolepsy to appear is excessive and overwhelming daytime sleepiness. The other symptoms may begin alone or in combination months or years after the onset of the daytime naps. There are wide variations in the development, severity, and order of appearance of cataplexy, sleep paralysis, and hypnagogic hallucinations in individuals. Only about 20 to 25 percent of people with narcolepsy experience all four symptoms. The excessive daytime sleepiness generally persists throughout life, but sleep paralysis and hypnagogic hallucinations may not. Many people with narcolepsy also have insomnia for extended periods of time. The excessive daytime sleepiness and cataplexy often become severe enough to cause serious problems in a person's social, personal, and professional life. Normally, when an individual is awake, brain waves show a regular rhythm. When a person first falls asleep, the brain waves become slower and less regular, which is called non-rapid eye movement (NREM) sleep. After about an hour and a half of NREM sleep, the brain waves begin to show a more active pattern again, called REM sleep (rapid eye movement sleep), when most remembered dreaming occurs. Associated with the EEG-observed waves during REM sleep, muscle atonia, called REM atonia, is present. In narcolepsy, the order and length of NREM and REM sleep periods are disturbed, with REM sleep occurring at sleep onset instead of after a period of NREM sleep. Also, some aspects of REM sleep that normally occur only during sleep, like lack of muscular control, sleep paralysis, and vivid dreams, occur at other times in people with narcolepsy. For example, the lack of muscular control can occur during wakefulness in a cataplexy episode; it is said that there is an intrusion of REM atonia during wakefulness. Sleep paralysis and vivid dreams can occur while falling asleep or waking up. Simply put, the brain does not pass through the normal stages of dozing and deep sleep but goes directly into (and out of) rapid eye movement (REM) sleep. As a consequence, night time sleep does not include as much deep sleep, so the brain tries to "catch up" during the day, hence excessive daytime sleepiness. People with narcolepsy may visibly fall asleep at unpredicted moments (such motions as head bobbing are common). People with narcolepsy fall quickly into what appears to be very deep sleep, and they wake up suddenly and can be disoriented when they do (dizziness is a common occurrence). 
They have very vivid dreams, which they often remember in great detail. People with narcolepsy may dream even when they only fall asleep for a few seconds. Along with vivid dreaming, people with narcolepsy are known to have auditory or visual hallucinations prior to falling asleep or before waking up. Narcoleptics can gain excess weight; children may gain weight rapidly when they first develop narcolepsy, and in adults the body-mass index is about 15% above average. Causes The exact cause of narcolepsy is unknown, and it may be caused by several distinct factors. The mechanism involves the loss of orexin-releasing neurons within the lateral hypothalamus (about 70,000 neurons). Some research indicates that people with type 1 narcolepsy (narcolepsy with cataplexy) have a lower level of orexin (hypocretin), which is a chemical contributing to the regulation of wakefulness and REM sleep. It also acts as a neurotransmitter to enable nerve cells to communicate. In up to 10% of cases there is a family history of the disorder. Family history is more common in narcolepsy with cataplexy. There is a strong link with certain genetic variants, which may make T-cells susceptible to react to the orexin-releasing neurons (autoimmunity) after being stimulated by infection with H1N1 influenza. In addition to genetic factors, low levels of orexin peptides have been correlated with a history of infection, diet, contact with toxins such as pesticides, and brain injuries due to head trauma, brain tumors or strokes. Genetics The primary genetic factor that has been strongly implicated in the development of narcolepsy involves an area of chromosome 6 known as the human leukocyte antigen (HLA) complex. Specific variations in HLA genes are strongly correlated with the presence of narcolepsy (HLA DQB1*06:02, frequently in combination with HLA DRB1*15:01); however, these variations are not required for the condition to occur and sometimes occur in individuals without narcolepsy. These genetic variations in the HLA complex are thought to increase the risk of an auto-immune response to orexin-releasing neurons in the lateral hypothalamus. The allele HLA-DQB1*06:02 of the human gene HLA-DQB1 was reported in more than 90% of people with narcolepsy, and alleles of other HLA genes such as HLA-DQA1*01:02 have been linked. A 2009 study found a strong association with polymorphisms in the TRAC gene locus (dbSNP IDs rs1154155, rs12587781, and rs1263646). A 2013 review article reported additional but weaker links to the loci of the genes TNFSF4 (rs7553711), Cathepsin H (rs34593439), and P2RY11-DNMT1 (rs2305795). Another gene locus that has been associated with narcolepsy is EIF3G (rs3826784). H1N1 influenza Type 1 narcolepsy is caused by hypocretin/orexin neuronal loss. T-cells have been demonstrated to be cross-reactive to both a particular piece of the hemagglutinin flu protein of the pandemic 2009 H1N1 and the amidated terminal ends of the secreted hypocretin peptides. Genes associated with narcolepsy mark the particular HLA heterodimer (DQ0602) involved in presentation of these antigens and modulate expression of the specific T cell receptor segments (TRAJ24 and TRBV4-2) involved in T cell receptor recognition of these antigens, suggesting causality. A link between GlaxoSmithKline's H1N1 flu vaccine Pandemrix and narcolepsy has been found in both children and adults. In 2010, Finland's National Institute of Health and Welfare recommended that Pandemrix vaccinations be suspended pending further investigation into narcolepsy. 
In 2018, it was demonstrated that T-cells stimulated by Pandemrix were cross-reactive by molecular mimicry with part of the hypocretin peptide, the loss of which is associated with type I narcolepsy. Pathophysiology Loss of neurons Orexin, otherwise known as hypocretin, is a neuropeptide that acts within the brain to regulate appetite and wakefulness as well as a number of other cognitive and physiological processes. Loss of these orexin-producing neurons causes narcolepsy and most individuals with narcolepsy have a reduced number of these neurons in their brains. Selective destruction of the HCRT/OX neurons with preservation of proximate structures suggests a highly specific autoimmune pathophysiology. Cerebrospinal fluid HCRT-1/OX-A is undetectable in up to 95% of patients with type 1 narcolepsy. The system which regulates sleep, arousal, and transitions between these states in humans is composed of three interconnected subsystems: the orexin projections from the lateral hypothalamus, the reticular activating system, and the ventrolateral preoptic nucleus. In narcoleptic individuals, these systems are all associated with impairments due to a greatly reduced number of hypothalamic orexin projection neurons and significantly fewer orexin neuropeptides in cerebrospinal fluid and neural tissue, compared to non-narcoleptic individuals. Those with narcolepsy generally experience the REM stage of sleep within five minutes of falling asleep, while people who do not have narcolepsy (unless they are significantly sleep deprived) do not experience REM until after a period of slow-wave sleep, which lasts for about the first hour or so of a sleep cycle. Disturbed sleep states The neural control of normal sleep states and the relationship to narcolepsy are only partially understood. In humans, narcoleptic sleep is characterized by a tendency to go abruptly from a waking state to REM sleep with little or no intervening non-REM sleep. The changes in the motor and proprioceptive systems during REM sleep have been studied in both human and animal models. During normal REM sleep, spinal and brainstem alpha motor neuron hyperpolarization produces almost complete atonia of skeletal muscles via an inhibitory descending reticulospinal pathway. Acetylcholine may be one of the neurotransmitters involved in this pathway. In narcolepsy, the reflex inhibition of the motor system seen in cataplexy has features normally seen only in normal REM sleep. Diagnosis The third edition of the International Classification of Sleep Disorders (ICSD-3) differentiates between narcolepsy with cataplexy (type 1) and narcolepsy without cataplexy (type 2), while the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) uses the diagnosis of narcolepsy to refer to type 1 narcolepsy only. The DSM-5 refers to narcolepsy without cataplexy as hypersomnolence disorder. The most recent edition of the International Classification of Diseases, ICD-11, currently identifies three types of narcolepsy: type 1 narcolepsy, type 2 narcolepsy, and unspecified narcolepsy. ICSD-3 diagnostic criteria posits that the individual must experience "daily periods of irrepressible need to sleep or daytime lapses into sleep" for both subtypes of narcolepsy. This symptom must last for at least three months. 
For a diagnosis of type 1 narcolepsy, the person must present either with cataplexy together with a mean sleep latency of less than 8 minutes and two or more sleep-onset REM periods (SOREMPs), or with a hypocretin-1 concentration of less than 110 pg/mL. A diagnosis of type 2 narcolepsy requires a mean sleep latency of less than 8 minutes, two or more SOREMPs, and a hypocretin-1 concentration of more than 110 pg/mL. In addition, the hypersomnolence and sleep latency findings cannot be better explained by other causes. DSM-5 narcolepsy criteria require that the person display recurrent periods of "an irrepressible need to sleep, lapsing into sleep, or napping" at least three times a week over a period of three months. The individual must also display one of the following: cataplexy, a hypocretin-1 concentration of less than 110 pg/mL, REM sleep latency of less than 15 minutes, or a multiple sleep latency test (MSLT) showing sleep latency of less than 8 minutes and two or more SOREMPs. For a diagnosis of hypersomnolence disorder, the individual must present with excessive sleepiness despite at least 7 hours of sleep as well as either recurrent lapses into daytime sleep, nonrestorative sleep episodes of 9 or more hours, or difficulty staying awake after awakening. In addition, the hypersomnolence must occur at least three times a week for a period of three months, and must be accompanied by significant distress or impairment. It also cannot be explained by another sleep disorder, coexisting mental or medical disorders, or medication. Tests Diagnosis is relatively easy when all the symptoms of narcolepsy are present, but if the sleep attacks are isolated and cataplexy is mild or absent, diagnosis is more difficult. Three tests that are commonly used in diagnosing narcolepsy are polysomnography (PSG), the multiple sleep latency test (MSLT), and the Epworth Sleepiness Scale (ESS). These tests are usually performed by a sleep specialist. Polysomnography involves the continuous recording of sleep brain waves and a number of nerve and muscle functions during night time sleep. When tested, people with narcolepsy fall asleep rapidly, enter REM sleep early, and may often awaken during the night. The polysomnogram also helps to detect other possible sleep disorders that could cause daytime sleepiness. The Epworth Sleepiness Scale is a brief questionnaire that is administered to determine the likelihood of the presence of a sleep disorder, including narcolepsy. The multiple sleep latency test is performed after the person undergoes an overnight sleep study. The person will be asked to sleep once every 2 hours, and the time it takes for them to do so is recorded. Most individuals with narcolepsy will fall asleep within 5 to 8 minutes, and display REM sleep faster than non-narcoleptic people. Measuring orexin levels in a person's cerebrospinal fluid sampled in a spinal tap may help in diagnosing narcolepsy, with abnormally low levels serving as an indicator of the disorder. This test can be useful when MSLT results are inconclusive or difficult to interpret. 
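The numeric thresholds in the ICSD-3 criteria above (mean sleep latency, SOREMP count, and the CSF hypocretin-1 cut-off) can be summarised in a short sketch. This is purely illustrative and not diagnostic software: the function and parameter names are invented for the example, exclusion of other causes is ignored, and a missing hypocretin measurement is simply treated as "not low".

```python
# Illustrative sketch of the ICSD-3 numeric thresholds quoted above; not a
# clinical tool. Function and parameter names are invented for this example.
from typing import Optional

def icsd3_narcolepsy_type(cataplexy: bool,
                          mean_sleep_latency_min: float,
                          soremps: int,
                          csf_hypocretin1_pg_ml: Optional[float] = None) -> str:
    positive_mslt = mean_sleep_latency_min < 8 and soremps >= 2
    low_hypocretin = (csf_hypocretin1_pg_ml is not None
                      and csf_hypocretin1_pg_ml < 110)
    if low_hypocretin or (cataplexy and positive_mslt):
        return "consistent with narcolepsy type 1"
    if positive_mslt and not cataplexy:
        return "consistent with narcolepsy type 2"
    return "criteria not met"

print(icsd3_narcolepsy_type(cataplexy=True, mean_sleep_latency_min=4, soremps=3))
print(icsd3_narcolepsy_type(cataplexy=False, mean_sleep_latency_min=6, soremps=2,
                            csf_hypocretin1_pg_ml=250))
```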
Treatment Orexin replacement People with narcolepsy can be substantially helped, but not cured currently. Early experimental work, however, points toward a cure: in mouse models, delivering the prepro-orexin transgene via gene editing has restored normal function by making other neurons produce orexin after the original set has been destroyed, and replacing the missing orexinergic neurons through hypocretin stem cell transplantation is another step in that direction; if applied in humans, such approaches could in principle correct the underlying biology permanently. Non-gene-editing, drug-based approaches involve hypocretin treatments such as future hypocretin agonists (for example, danavorexton) or hypocretin replacement, in the form of hypocretin-1 given intravenously (injected into the veins), intracisternally (injected directly into the brain), or intranasally (sprayed through the nose); the intranasal route has low efficacy at the low doses used in current experiments, but may be effective at very high doses in the future. Behavioral General strategies like patient and family education, sleep hygiene and medication compliance, and discussion of safety issues, for example regarding a driving licence, can be useful. Potential side effects of medication can also be addressed. Regular follow-up is useful to be able to monitor the response to treatment, to assess the presence of other sleep disorders like obstructive sleep apnea, and to discuss psychosocial issues. In many cases, planned regular short naps can reduce the need for pharmacological treatment of the EDS, but only improve symptoms for a short duration. A 120-minute nap provided benefit for 3 hours in the person's alertness whereas a 15-minute nap provided no benefit. Daytime naps are not a replacement for night time sleep. Ongoing communication between the health care provider, person, and their family members is important for optimal management of narcolepsy. Medications As described above, medications used to treat narcolepsy primarily target EDS and/or cataplexy. Internationally, there are differences in the availability of medications as well as guidelines for treatment. The alerting agents are medications typically used to improve wakefulness and include modafinil, armodafinil, pitolisant and solriamfetol. In late 2007, an alert for severe adverse skin reactions to modafinil was issued by the FDA. Solriamfetol is a new molecule indicated for narcolepsy types 1 and 2. Solriamfetol works by inhibiting the reuptake of monoamines via interaction with both the dopamine transporter and the norepinephrine transporter. This mechanism differs from that of the wake-promoting agents modafinil and armodafinil. These are thought to bind primarily at the dopamine transporter to inhibit the reuptake of dopamine. Solriamfetol also differs from amphetamines as it does not promote the release of norepinephrine in the brain. Uniquely, pitolisant has a novel mechanism of action as an H3 receptor antagonist, which promotes the release of the wakefulness-promoting amine histamine. It was initially available in France and then in the United Kingdom through the NHS, after being given marketing authorisation by the European Commission on the advice of the European Medicines Agency, and later in the United States following approval by the Food and Drug Administration (FDA). Pemoline was previously used but was withdrawn due to toxicity. Traditional stimulants, such as methylphenidate, amphetamine and dextroamphetamine, can be used, but are commonly considered second- or third-line therapy. 
Sodium oxybate, also known as sodium gamma-hydroxybutyrate (GHB), can be used for cataplexy associated with narcolepsy and excessive daytime sleepiness associated with narcolepsy. There are now three formulations of oxybate medications (twice-nightly sodium oxybate, twice nightly mixed salts oxybate, and once-nightly extended-release sodium oxybate). This class of medication is taken once or twice during the night, as opposed to other medications for EDS and cataplexy that are typically taken during the day. Other medications that suppress REM sleep may also be used for the treatment of cataplexy as well as potentially other REM dissociative symptoms. Tricyclic antidepressants (clomipramine, imipramine, or protriptyline), selective serotonin reuptake inhibitors (SSRIs), and selective norepinephrine reuptake inhibitors (SNRIs) (Venlafaxine) are used for the treatment of cataplexy. Atomoxetine, a non-stimulant and a norepinephrine reuptake inhibitor (NRI), which has no addiction liability or recreational effects, has been used with variable benefit. Other NRIs like viloxazine and reboxetine have also been used in the treatment of narcolepsy. Additional related medications include mazindol and selegiline. Children Common behavioral treatments for childhood narcolepsy include improved sleep hygiene, scheduled naps, and physical exercise. Many medications are used in treating adults and may be used to treat children. These medications include central nervous system stimulants such as methylphenidate, modafinil, amphetamine, and dextroamphetamine. Other medications, such as sodium oxybate or atomoxetine, may also be used to counteract sleepiness. Medications such as sodium oxybate, venlafaxine, fluoxetine, and clomipramine may be prescribed if the child presents with cataplexy. Epidemiology Estimates of frequency range from 0.2 per 100,000 in Israel to 600 per 100,000 in Japan. These differences may be due to how the studies were conducted or the populations themselves. In the United States, narcolepsy is estimated to affect as many as 200,000 Americans, but fewer than 50,000 are diagnosed. The prevalence of narcolepsy is about 1 per 2,000 persons. Narcolepsy is often mistaken for depression, epilepsy, the side effects of medications, poor sleeping habits or recreational drug use, making misdiagnosis likely. While narcolepsy symptoms are often confused with depression, there is a link between the two disorders. Research studies have mixed results on co-occurrence of depression in people with narcolepsy, as the numbers quoted by different studies are anywhere between 6% and 50%. Narcolepsy can occur in both men and women at any age, although typical symptom onset occurs in adolescence and young adulthood. There is about a ten-year delay in diagnosing narcolepsy in adults. Cognitive, educational, occupational, and psychosocial problems associated with the excessive daytime sleepiness of narcolepsy have been documented. For these to occur in the crucial teen years when education, development of self-image, and development of occupational choice are taking place is especially devastating. While cognitive impairment does occur, it may only be a reflection of the excessive daytime somnolence. Society and culture In 2015, it was reported that the British Department of Health was paying for sodium oxybate medication at a cost of £12,000 a year for 80 people who are taking legal action over problems linked to the use of the Pandemrix swine flu vaccine. 
Sodium oxybate is not available to people with narcolepsy through the National Health Service. Name The term "narcolepsy" is from the French narcolepsie. The French term was first used in 1880 by Jean-Baptiste-Édouard Gélineau, who used the Greek νάρκη (narkē), meaning "numbness", and λῆψις (lepsis) meaning "attack". Research GABA-directed medications Given the possible role of hyper-active GABAA receptors in the primary hypersomnias (narcolepsy and idiopathic hypersomnia), medications that could counteract this activity are being studied to test their potential to improve sleepiness. These currently include clarithromycin and flumazenil. Flumazenil Flumazenil is the only GABAA receptor antagonist on the market as of January 2013, and it is currently manufactured only as an intravenous formulation. Given its pharmacology, researchers consider it to be a promising medication in the treatment of primary hypersomnias. Results of a small, double-blind, randomized, controlled clinical trial were published in November 2012. This research showed that flumazenil provides relief for most people whose CSF contains the unknown "somnogen" that enhances the function of GABAA receptors, making them more susceptible to the sleep-inducing effect of GABA. For one person, daily administration of flumazenil by sublingual lozenge and topical cream has proven effective for several years. A 2014 case report also showed improvement in primary hypersomnia symptoms after treatment with a continuous subcutaneous flumazenil infusion. The supply of generic flumazenil was initially thought to be too low to meet the potential demand for treatment of primary hypersomnias. However, this scarcity has eased, and dozens of people are now being treated with flumazenil off-label. Clarithromycin In a test tube model, clarithromycin (an antibiotic approved by the FDA for the treatment of infections) was found to return the function of the GABA system to normal in people with primary hypersomnias. Investigators therefore treated a few people with narcolepsy with off-label clarithromycin, and most felt their symptoms improved with this treatment. In order to help further determine whether clarithromycin is truly beneficial for the treatment of narcolepsy and idiopathic hypersomnia, a small, double-blind, randomized, controlled clinical trial was completed in 2012. "In this pilot study, clarithromycin improved subjective sleepiness in GABA-related hypersomnia. Larger trials of longer duration are warranted." In 2013, a retrospective review evaluating longer-term clarithromycin use showed efficacy in a large percentage of people with GABA-related hypersomnia. "It is important to note that the positive effect of clarithromycin is secondary to a benzodiazepine antagonist-like effect, not its antibiotic effects, and treatment must be maintained." Orexin receptor agonists Orexin-A ( hypocretin-1) has been shown to be strongly wake-promoting in animal models, but it does not cross the blood–brain barrier. The first line treatment for narcolepsy, modafinil, has been found to interact indirectly with the orexin system. It is also likely that an orexin receptor agonist will be found and developed for the treatment of hypersomnia. One such agent which is currently in clinical trials is danavorexton. L-carnitine Abnormally low levels of acylcarnitine have been observed in people with narcolepsy. These same low levels have been associated with primary hypersomnia in general in mouse studies. 
"Mice with systemic carnitine deficiency exhibit a higher frequency of fragmented wakefulness and rapid eye movement (REM) sleep, and reduced locomotor activity." Administration of acetyl-L-carnitine was shown to improve these symptoms in mice. A subsequent human trial found that people with narcolepsy given L-carnitine spent less total time in daytime sleep than people who were given a placebo. Animal models Animal studies try to mimic the disorder in humans by either modifying the Hypocretin/Orexin receptors or by eliminating this peptide. An orexin deficit caused by the degeneration of hypothalamic neurons is suggested to be one of the causes of narcolepsy. More recent clinical studies on both animals and humans have also revealed that hypocretin is involved in other functions beside regulation of wakefulness and sleep. These functions include autonomic regulation, emotional processing, reward learning behaviour or energy homeostasis. In studies where the concentration of the hypocretin was measured under different circumstances, it was observed that the hypocretin levels increased with the positive emotion, anger or social interaction but stayed low during sleep or during pain experience. The most reliable and valid animal models developed are the canine (narcoleptic dogs) and the rodent (orexin-deficient mice) ones which helped investigating the narcolepsy and set the focus on the role of orexin in this disorder. Dog models Dogs, as well as other species like cats or horses, can also exhibit spontaneous narcolepsy with similar symptoms as the ones reported in humans. The attacks of cataplexy in dogs can involve partial or full collapse. Narcolepsy with cataplexy was identified in a few breeds like Labrador retrievers or Doberman pinschers where it was investigated the possibility to inherit this disorder in the autosomal recessive mode. According to a reliable canine model for narcolepsy would be the one in which the narcoleptic symptoms are the result of a mutation in the gene HCRT 2. The animals affected exhibited excessive daytime sleepiness with a reduced state of vigilance and severe cataplexy resulted after palatable food and interactions with the owners or with other animals. Rodent models Mice that are genetically engineered to lack orexin genes demonstrate many similarities to human narcolepsy. During nocturnal hours, when mice are normally present, those lacking orexin demonstrated murine cataplexy and displayed brain and muscle electrical activity similar to the activity present during REM and NREM sleep. This cataplexy is able to be triggered through social interaction, wheel running, and ultrasonic vocalizations. Upon awakening, the mice also display behavior consistent with excessive daytime sleepiness. Mouse models have also been used to test whether the lack of orexin neurons is correlated with narcolepsy. Mice whose orexin neurons have been ablated have shown sleep fragmentation, SOREMPs, and obesity. Rat models have been used to demonstrate the association between orexin deficiency and narcoleptic symptoms. Rats who lost the majority of their orexinergic neurons exhibited multiple SOREMPs as well as less wakefulness during nocturnal hours, shortened REM latency, and brief periods of cataplexy.
Biology and health sciences
Mental disorders
Health
6520040
https://en.wikipedia.org/wiki/Trapping
Trapping
Animal trapping, or simply trapping or ginning, is the use of a device to remotely catch and often kill an animal. Animals may be trapped for a variety of purposes, including for meat, fur/feathers, sport hunting, pest control, and wildlife management. History Neolithic hunters, including the members of the Cucuteni-Trypillian culture of Romania and Ukraine, used traps to capture their prey. An early mention in written form is a passage from the self-titled book by Taoist philosopher Zhuangzi which describes Chinese methods used for trapping animals during the 4th century BCE. The Zhuangzi reads: "The sleek-furred fox and the elegantly spotted leopard ... can't seem to escape the disaster of nets and traps." "Modern" steel jaw-traps were first described in western sources as early as the late 16th century. The first mention comes from Leonard Mascall's book on animal trapping. It reads: "a griping trappe made all of yrne, the lowest barre, and the ring or hoope with two clickets". The mousetrap, with a strong spring device mounted on a wooden base, was first patented by William C. Hooker of Abingdon, Illinois, in 1894. Reasons Trapping is carried out for a variety of reasons. Originally, it was for food, fur, and other animal products. Trapping has since been expanded to encompass pest control, wildlife management, the pet trade, and zoological specimens. Fur clothing In the early days of the colonial settlement of North America, the trading of furs was common between the Dutch, French, or English and the indigenous populations inhabiting their respective colonized territories. Many locations where trading took place were referred to as trading posts. Much trading occurred along the Hudson River area in the early 1600s. In some locations in the US and in many parts of southern and western Europe, trapping generates much controversy because it is a contributing factor to declining populations in some species, such as the Canada lynx. In the 1970s and 1980s, the threat to lynx from trapping reached a new height when the price for hides rose to as much as $600 each. By the early 1990s, the Canada lynx was a clear candidate for Endangered Species Act (ESA) protection. In response to the lynx's plight, more than a dozen environmental groups petitioned the Fish and Wildlife Service (FWS) in 1991 to list the lynx in the lower 48 states. FWS regional offices and field biologists supported the petition, but FWS officials in the Washington, D.C. headquarters turned it down. In March 2000, the FWS listed the lynx as threatened in the lower 48. The prices of fur pelts have significantly declined. Some trappers have considered forgoing trapping because the cost of trapping exceeds the return on the furs sold at the end of the season. Perfume Beaver castor sacs yield castoreum, a sticky substance used in many perfumes. Trappers in Ontario are paid by the provincial government to harvest beaver castor sacs, receiving from 10 to 40 dollars per dry pound when the sacs are sold to the Northern Ontario Fur Trappers Association. In the early 1900s, muskrat glands were used in making perfume, or women simply crushed the glands and rubbed them onto their bodies. Pest control Trapping is regularly used for pest control of beaver, coyote, raccoon, cougar, bobcat, Virginia opossum, fox, squirrel, rat, mouse and mole in order to limit damage to households, food supplies, farming, ranching, and property. Traps are used as a method of pest control as an alternative to pesticides. 
Commonly, spring traps that hold the animal are used—mousetraps for mice, or larger rat traps for larger rodents such as rats and squirrels. Specific traps are designed for invertebrates such as cockroaches and spiders. Some mousetraps can also double as insect or universal traps, such as glue traps, which catch any small animal that walks onto them. Although it is common to state that trapping is an effective means of pest control, a counter-example is found in the work of Jon Way, a biologist in Massachusetts. Way reported that the death or disappearance of a territorial male coyote can lead to double litters, and postulates a possible resultant increase in coyote density. Coexistence programs that take this scientific research into account are being pursued by groups such as the Association for the Protection of Fur-Bearing Animals. Wildlife management Animals are frequently trapped in many parts of the world to prevent damage to personal property, including the killing of livestock by predatory animals. Many wildlife biologists support the use of regulated trapping for the sustained harvest of some species of furbearers. Research shows that trapping can be an effective method of managing or studying furbearers, controlling damage caused by furbearers, and at times reducing the spread of harmful diseases. The research shows that regulated trapping is a safe, efficient, and practical means of capturing individual animals without impairing the survival of furbearer populations or damaging the environment. Wildlife biologists also support regulatory and educational programs, research to evaluate trap performance, and the implementation of improvements in trapping technology in order to improve animal welfare. Trapping is also useful for controlling overpopulation of certain species. Trapping is also used for research and relocation of wildlife. Federal authorities in the United States use trapping as the primary means to control predators that prey on endangered species such as the San Joaquin kit fox (Vulpes macrotis mutica), California least tern (Sterna antillarum browni) and desert tortoise (Gopherus agassizii). Other reasons Animals may be trapped for public display, for natural history displays, or for such purposes as obtaining elements used in the practice of traditional medicine. Trapping may also be done for hobby and conservation purposes. Types Most of the traps used for mammals can be divided into six types: foothold traps, body gripping traps, snares, deadfalls, cages, and glue traps. Some of the traditional kinds have changed little since the Stone Age. Foothold traps Foothold traps were invented in the 17th century for use against humans (see mantrap), to keep poachers out of European estates. The device uses a pressure plate between two metal arms, or "jaws", lined with spiked protrusions, or "teeth". Once the plate has been stepped on, the arms close on the ensnared person or animal's foot. Blacksmiths made traps of iron in the early 1700s for trappers. By the 1800s, companies began to manufacture steel foothold traps. Traps are designed in different sizes for different-sized animals. In recent decades, the use of foothold traps in trapping and hunting has become controversial. Anti-fur campaigns have protested foothold traps as inhumane, with some claiming that an animal caught in a foothold trap will frequently chew off its leg to escape the trap. The practice has been banned in 101 countries as well as 10 states in the United States. 
Modern variations of the foothold trap have been designed to reduce instances of the animal fighting the trap, possibly injuring itself or getting loose in the process. These include traps with offset jaws and lamination, which decrease pressure on the animals' legs, and padded jaws with rubber inserts, which reduce animal injuries. Traps designed to work only on raccoons are referred to as dog-proof. These traps are small, and rely on the raccoon's grasping nature to trigger the trap. Body gripping/conibear traps Body-gripping traps are designed to kill animals quickly. They are often called "Conibear" traps after Canadian inventor Frank Conibear, who began their manufacture in the late 1950s as the Victor-Conibear trap. Many trappers consider these traps to be one of the best trapping innovations of the 20th century; when they work as intended, animals that are caught squarely on the neck are killed quickly, and are therefore not left to suffer or given a chance to escape. The general category of body-gripping traps may include snap-type mouse and rat traps, but the term is more often used to refer to the larger, all-steel traps that are used to catch fur-bearing animals. These larger traps are made from bent round steel bars. These traps come in several sizes, including models #110 and #120 for muskrat and mink, model #220 for raccoon and possum, and model #330 for beaver and otter. An animal may be lured into a body-gripping trap with bait, or the trap may be placed on an animal path to catch the animal as it passes. In any case, it is important that the animal is guided into the correct position before the trap is triggered. The standard trigger is a pair of wires that extend between the jaws of the set trap. The wires may be bent into various shapes, depending on the size and behavior of the target animal. Modified triggers include pans and bait sticks. The trap is designed to close on the neck and/or torso of an animal. When it closes on the neck, it closes the trachea and the blood vessels to the brain, and often fractures the spinal column; the animal loses consciousness within a few seconds and dies soon thereafter. If it closes on the foot, leg, snout, or other part of an animal, the results are less predictable. Trapping ethics call for precautions to avoid the accidental killing of non-target species (including domestic animals and people) by body-gripping traps. Note on terminology: the term "body-gripping trap" (and its variations including "body gripping", "body-grip", "body grip", etc.) is often used by animal-protection advocates to describe any trap that restrains an animal by holding onto any part of its body. In this sense, the term is defined to include foothold/leghold traps, Conibear-type traps, snares, and cable restraints; it does not include cage traps or box traps that restrain animals solely by containing them inside the cages or boxes without exerting pressure on the animals; it generally does not include suitcase-type traps that restrain animals by containing them inside the cages under pressure. Deadfall traps A deadfall is a heavy rock or log that is tilted at an angle and held up with sections of branches, with one of them serving as a trigger. When the animal moves the trigger, which may have bait on or near it, the rock or log falls, crushing the animal. 
The figure-four deadfall is a popular and simple trap constructed from materials found in the bush (three sticks with notches cut into them, plus a heavy rock or other heavy object). Also popular, and easier to set, is the Paiute deadfall, consisting of three long sticks, plus a much shorter stick, along with a cord or fiber material taken from the bush to interconnect the much shorter stick (sometimes called catch stick or trigger stick) with one of the longer sticks, plus a rock or other heavy object. Snares Snares are anchored cable or wire nooses set to catch wild animals such as squirrels and rabbits. In the US, they are most commonly used for capture and control of surplus furbearers and especially for food collection. They are also widely used by subsistence and commercial hunters for bushmeat consumption and trade in African forest regions and in Cambodia. Snares are one of the simplest traps and are very effective. They are cheap to produce and easy to set in large numbers. A snare traps an animal around the neck or body; it consists of a noose usually made of wire or strong string. Snares are widely criticised by animal welfare groups for their cruelty. UK users of snares accept that over 40% of animals caught in some environments will be non-target animals, although non-target captures range from 21% to 69% depending on the environment. In the US, non-target catches have also been reported by users of snares in Michigan. Snares are regulated in many jurisdictions, but are illegal in other jurisdictions, such as in much of Europe. Different regulations apply to snares in those areas where they are legal. In Iowa, snares must have a "deer stop", which prevents the snare from closing all the way. In the United Kingdom, snares must be "free-running" so that they can relax once an animal stops pulling, thereby allowing the trapper to decide whether to kill the animal or release it. Following a consultation on options to ban or regulate the use of snares, the Scottish Executive announced a series of measures on the use of snares, such as the compulsory fitting of safety stops and ID tags, and the marking of areas where snaring takes place with signs. In some jurisdictions, swivels on snares are required, and dragging (non-fixed) anchors are prohibited. Trapping pit Trapping pits are deep pits dug into the ground, or built from stone, in order to trap animals. Like cage traps, they are usually employed for catching animals without harming them. Cage traps (live traps) Cage traps are designed to catch live animals in a cage. They are usually baited, sometimes with food bait and sometimes with a live "lure" animal. Common baits include cat food and fish. Cage traps usually have a trigger located in the back of the cage that causes a door to shut; some traps with two doors have a trigger in the middle of the cage that causes both doors to shut. In either type of cage, a lock mechanism falls as the door closes, locking the door(s) shut and preventing the animal from escaping. Cage-trap for squirrels With two doors open, the squirrel can see through the opening on the opposite end. Peanut butter is placed in the trap as bait to attract the squirrel. In some locations, the traps can be placed in alignment with a building, wall, or fence, or nearly under one edge of a bush. The wall does not present a threat to the squirrel, and the bush reduces the exposure and view of the squirrel. 
A blind area (made using natural or cardboard materials) surrounding the end of the trap presents a darker, safer hiding space near the trigger and bait of the trap. Where two-door traps are not available, a piece of cardboard held in place with a brick can be put behind the rear of the trap. Glue traps In cold climates, cockroaches may move indoors, seeking warmer environments and food. Cockroaches may enter houses via wastewater plumbing, underneath doors, or via air ducts or other openings in the walls, windows or foundation. Cockroach populations may be controlled through the use of glue board traps or insecticides. Glue board traps (also called adhesive or sticky traps) are made using adhesive applied to cardboard or similar material. Bait can be placed in the center or a scent may be added to the adhesive. Inexpensive glue board traps are normally placed in warm indoor locations readily accessible to insects but not likely to be encountered by people: underneath refrigerators or freezers, behind trash cans, etc. It is also beneficial to cover any cracks or crevices through which cockroaches may enter, to seal food inside insect-proof containers, and to quickly clean up any spills or messes. Another way to prevent an infestation is to thoroughly check any materials brought inside: cockroaches and their egg cases (ootheca) can be hidden inside or on furniture, or inside boxes, suitcases, grocery bags, etc. An egg case can be picked up with a napkin and forcefully crushed; the resulting fluid leakage indicates that the eggs inside have been destroyed, and the napkin and crushed egg case can then be discarded as garbage. Domestic animals accidentally captured in glue traps can be released by carefully applying cooking oil or baby oil to the contact areas and gently working until the animal is free. Many animal rights groups, such as the Humane Society of the United States and In Defense of Animals, oppose the use of glue traps for their cruelty to animals. Glue traps were made illegal in Wales in October 2023, marking the first such ban in the United Kingdom. A ban on the sale and use of rodent glue traps is due to come into force in West Hollywood, California, in January 2024, making it the first such city ban in the United States. Glue traps are also banned in the Australian Capital Territory, Tasmania, and Victoria in Australia. Types of sets The most productive set for foothold traps is a dirt hole, a hole dug in the ground with a trap positioned in front. An attractant is placed inside the hole. The hole for the set is usually made in front of some type of object of the kind that medium-sized animals such as coyotes, foxes, or bobcats would use to cache food. This object could be a tuft of taller grass, a stone, a stump, or some other natural object. The dirt from the hole is sifted over the trap and a lure applied around the hole. A flat set is another common use of the foothold trap. It is very similar to the dirt hole set, simply with no hole to dig. The attractant is placed on the object near the trap and a urine scent is sprayed onto the object. The cubby set simulates a den in which a small animal would live, but could be adapted for larger game. It could be made from various materials such as rocks, logs or bark, but the back must be closed to control the animal's approach. The bait and/or lure is placed in the back of the cubby. 
The water set is usually described as a body-gripping trap or snare set so that the trap jaws or snare loop are partially submerged. The conibear is a type of trap used in water trapping that can also be used on land; it is heavily regulated, and the regulations vary from jurisdiction to jurisdiction. It is normally used without bait and has a wire trigger in the middle of its square-shaped, heavy-gauge wire jaws. It is placed in locations frequented by furbearing animals. Unwanted catches Trappers can employ a variety of devices and strategies to avoid unwanted catches. Ideally, if a non-target animal (such as a domestic cat or dog) is caught in a non-lethal trap, it can be released without harm. A careful choice of set and lure may help to catch target animals while avoiding non-target animals. Although trappers cannot always guarantee that unwanted animals will not be caught, they can take precautions to avoid unwanted catches or release them unharmed. The snaring of non-target animals can be minimized using methods that exclude animals larger or smaller than the target animal. For example, deer stops are designed to avoid the snaring of deer or cattle by the leg; they are required in some parts of the US. Other precautions include setting snares at specific heights, diameters, and locations. In a study of foxes in the UK, researchers were unintentionally snaring brown hares about as frequently as the intended foxes until they improved their methods, using larger wire with rabbit stops to eliminate the unwanted catch of the brown hares. Controversy Any type of trap—whether it be a foothold/leghold, conibear, or snare/cable restraint—can get an unwanted catch, including endangered species and pets. Wildlife Services, a branch of the U.S. Department of Agriculture, estimated that between 2003 and 2013 hundreds of pets were killed by body-gripping traps, and that the agency itself has killed thousands of non-target animals in several states, from pet dogs to endangered species. The number of non-target animals killed has caused national and regional animal-protection organizations such as the Humane Society of the United States, American Society for the Prevention of Cruelty to Animals, Massachusetts Society for the Prevention of Cruelty to Animals, and others to continue to lobby for stricter controls over the use of these traps in the United States. Trapping might lead to stress, pain, or death for the animal, depending on the type of trap. Traps that work by catching limbs can cause limb injuries, especially if used improperly or if the animal is left unattended until the trapper returns (depending on state law, this can be 24 to 72 hours). The animal might die from the injury, starvation, or attacks by other animals. Many states require that a trap be checked at least every 36 hours to minimize risks to the animals. Trapping requires time, hard work, and money, but it can be efficient. It has become expensive for the trapper and, in modern times, controversial. In part to address these concerns, in 1996, the Association of Fish and Wildlife Agencies, an organization made up of U.S. state and federal fish and wildlife agency professionals, began testing traps and compiling recommendations "to improve and modernize the technology of trapping through scientific research" known as Best Management Practices. 
As of February 2013, twenty best management practice recommendations have been published, covering nineteen species of common furbearers across North America. Trapping in Manitoba, Canada The average 2019–2020 pelt value in Manitoba was CA$0.54 for a red squirrel and CA$153.41 for a black bear.
Technology
Hunting and fishing
null
860921
https://en.wikipedia.org/wiki/Ankylosauria
Ankylosauria
Ankylosauria is a group of herbivorous dinosaurs of the clade Ornithischia. It includes the great majority of dinosaurs with armor in the form of bony osteoderms, similar to turtles. Ankylosaurs were bulky quadrupeds, with short, powerful limbs. They are known to have first appeared in North Africa during the Middle Jurassic, and persisted until the end of the Late Cretaceous. The two main families of ankylosaurians, Nodosauridae and Ankylosauridae, are primarily known from the Northern Hemisphere (North America, Europe and Asia), but the more basal Parankylosauria are known from southern Gondwana (South America, Australia and Antarctica) during the Cretaceous. Ankylosauria was first named by Henry Fairfield Osborn in 1923. In the Linnaean classification system, the group is usually considered either a suborder or an infraorder. It is contained within the group Thyreophora, which also includes the stegosaurs, armored dinosaurs known for their combination of plates and spikes. Etymology The name of this group of dinosaurs is associated with a number of anatomical features in which small and large bony shields fused together, completely covering their back and sides. On the skull these shields fused with the underlying bones, and the dorsal ribs fit snugly to the vertebrae. The Latin name Ankylosauria is derived from the Greek ἀγκύλος [ankylos] — "curved", "bent" with the anatomical meaning "hard" or "fused" and σαῦρος [sauros] — "lizard". In the 1908 description of the genus Ankylosaurus, Barnum Brown described the family Ankylosauridae as a group of representatives with a "rigid spine", but noted the wide, curved shape of the ribs, suggesting a "strongly curved" back (an error based on the alleged similarity to stegosaurs and glyptodonts, as ankylosaurs have flat backs). Therefore, "rigid lizard" and "curved lizard" could be additional meanings applied to the name of ankylosaurs. Classification Ankylosauria and Stegosauria together form the two major subgroups of Thyreophora, a group of armoured dinosaurs distinct from ornithopods and marginocephalians. Historically, the name was applied to armoured forms lacking large vertical plates; Kenneth Carpenter proposed the first informal definition of the group in 1997, as all ornithischians closer to Ankylosaurus than to Stegosaurus. This definition was further refined by Paul Sereno in 2005 to specify Ankylosaurus magniventris and Stegosaurus stenops, the type species of both genera, a definition that was followed by Madzia and colleagues in 2021 when the group's name and definition were formalized following the PhyloCode. Phylogenetic and morphological studies have differed on the inclusion of certain early taxa into Ankylosauria, especially the armoured Early Jurassic form Scelidosaurus. As some analyses, such as those of Carpenter in 2001 and David B. Norman in 2021, find Scelidosaurus and possibly other early forms like Emausaurus and Scutellosaurus to fall closer to Ankylosaurus than to Stegosaurus, Carpenter and later Norman suggested redefining Ankylosauria to limit it to the two subclades Nodosauridae and Ankylosauridae, creating the new clade Ankylosauromorpha for all taxa closer to Ankylosaurus than to Stegosaurus. However, as historically even these primitive forms were considered ankylosaurs if they were more derived than Stegosaurus, Madzia and colleagues considered a redefinition of Ankylosauria to be undesirable, instead preferring to abandon Ankylosauromorpha as a same-definition junior synonym of Ankylosauria. 
The clade Euankylosauria was named by Soto-Acuña and colleagues in 2021 in their description of the unusual basal ankylosaur Stegouros, to represent the grouping uniting ankylosaurids and nodosaurids to the exclusion of their newly discovered clade Parankylosauria. This clade is formally defined in the PhyloCode as "the largest clade containing Ankylosaurus magniventris and Nodosaurus textilis, but not Stegouros elengassen". A 2023 review of Thyreophora rejects the traditional Ankylosauridae-Nodosauridae split, instead finding "nodosaurids" to be referable to three separate families: Panoplosauridae, Polacanthidae, and Struthiosauridae. Evolution The origin of ankylosaurs is poorly understood, and only a few specimens from the Middle Jurassic are known. The ancestry of ankylosaurs has long been sought among stegosaurs, their closest relatives among the dinosaurs. Ankylosaurs are currently regarded as a sister group of the stegosaurs within the clade Eurypoda. The two groups are united by the presence of osteoderms in the skin; the narrow triangular skull of stegosaurs is similar to that of nodosaurids, and some similarity is also found in the structure of the palate. Since stegosaurs are known from the Middle Jurassic, ankylosaurs are probably of the same age. The two lineages may have diverged during the Aalenian age, more than 170 million years ago, but ankylosaurs were definitely present in Africa by the Bathonian, given the presence of Spicomellus in Morocco. There are no well-preserved remains of ankylosaurs of that age. An incomplete radius and ulna from the Isle of Skye in Scotland are known, the exact affiliation of which to ankylosaurs or stegosaurs is not established. Most likely, ankylosaurs followed a different evolutionary path from stegosaurs, although it is unknown when and how they split off. In stegosaurs, the osteoderms became raised and the lateral protection disappeared. Ankylosaurs evolved towards the development of osteoderms on the surface of the skull, increased armor, and further consolidation of the carapace, which suggests that the ancestral carapace consisted of separate, non-fused osteoderms. Paleobiology Possible neonate-sized ankylosaur fossils have been documented in the scientific literature. Armor All ankylosaurians had armor over much of their bodies, mostly scutes and nodules, with large spines in some cases. The scutes, or plates, are rectangular to oval objects organized in transverse (side to side) rows, often with keels on the upper surface. Smaller nodules and plates filled in the open spaces between large plates. In all three groups, the first two rows of plates tend to form a sort of half-ring around the neck; in nodosaurids, this comes from adjacent plates fusing with each other (and there is a third row as well), while ankylosaurids usually have the plates fused to the top of another band of bone. The skull has armor plastered onto it, including a distinctive piece on the outside-rear of the lower jaw. Diet and feeding Ankylosaurs were built low to the ground, with the body typically held about one foot off the ground surface. They had small, triangular teeth that were loosely packed, similar to stegosaurs. The large hyoid bones left in skeletons indicate that they had long, flexible tongues. They also had a large secondary palate. This means that they could breathe while chewing, unlike crocodiles. Their expanded gut region suggests the use of fermentation to digest their food, using symbiotic bacteria and gut flora. Their diet likely consisted of ferns, cycads, and angiosperms. Mallon et al. 
(2013) examined herbivore coexistence on the island continent of Laramidia during the Late Cretaceous. They concluded that ankylosaurs were generally restricted to feeding on vegetation at or below a height of 1 meter. Vocalization In February 2023, scientists reported that ankylosaurs may have produced bird-like vocalizations, based on the discovery of a fossilized larynx from the ankylosaur Pinacosaurus grangeri.
Biology and health sciences
Ornithischians
Animals
861683
https://en.wikipedia.org/wiki/Nautiloid
Nautiloid
Nautiloids are a group of marine cephalopods (Mollusca) which originated in the Late Cambrian and are represented today by the living Nautilus and Allonautilus. Fossil nautiloids are diverse and species rich, with over 2,500 recorded species. They flourished during the early Paleozoic era, when they constituted the main predatory animals. Early in their evolution, nautiloids developed an extraordinary diversity of shell shapes, including coiled morphologies and giant straight-shelled forms (orthocones). No orthoconic and only a handful of coiled species, the nautiluses, survive to the present day. In a broad sense, "nautiloid" refers to a major cephalopod subclass or collection of subclasses (Nautiloidea sensu lato). Nautiloids are typically considered one of three main groups of cephalopods, along with the extinct ammonoids (ammonites) and living coleoids (such as squid, octopus, and kin). While ammonoids and coleoids are monophyletic clades with exclusive ancestor-descendant relationships, this is not the case for nautiloids. Instead, nautiloids are a paraphyletic grade of various early-diverging cephalopod lineages, including the ancestors of ammonoids and coleoids. Some authors prefer a narrower definition of Nautiloidea (Nautiloidea sensu stricto), as a singular subclass including only those cephalopods which are closer to living nautiluses than they are to either ammonoids or coleoids. Taxonomic relationships Nautiloids are among the group of animals known as cephalopods, an advanced class of mollusks which also includes ammonoids, belemnites and modern coleoids such as octopus and squid. Other mollusks include gastropods, scaphopods and bivalves. Traditionally, the most common classification of the cephalopods has been a four-fold division (by Bather, 1888), into the orthoceratoids, nautiloids, ammonoids, and coleoids. This article is about nautiloids in that broad sense, sometimes called Nautiloidea sensu lato. Cladistically speaking, nautiloids are a paraphyletic assemblage united by shared primitive (plesiomorphic) features not found in derived cephalopods. In other words, they are a grade group that is thought to have given rise to orthoceratoids, ammonoids and coleoids, and are defined by the exclusion of those descendent groups. Both ammonoids and coleoids have traditionally been assumed to have descended from bactritids, which in turn arose from straight-shelled orthoceratoids. The ammonoids appeared early in the Devonian period (some 400 million years ago) and became abundant in the Mesozoic era, before their extinction at the end of the Cretaceous. Some workers apply the name Nautiloidea to a more exclusive group, called Nautiloidea sensu stricto. This taxon consists only of those orders that are clearly related to the modern nautilus to the exclusion of other modern cephalopods. In this restricted definition, membership is somewhat variable between authors, but it usually includes Tarphycerida, Oncocerida, and Nautilida. Shell All nautiloids have a large external shell, divided into a narrowing chambered region (the phragmocone) and a broad, open body chamber occupied by the animal in life. The outer wall of the shell, also known as the conch, defines its overall shape and texture. The chambers (camerae) of the phragmocone are separated from each other by thin curved walls (septa), which formed during growth spurts of the animal. During a growth spurt, the rear of the mantle secretes a new septum, adding another chamber to the series of shell chambers. 
At the same time, shell material is added around the shell opening (aperture), enlarging the body chamber and providing more room for the growing animal. Sutures (or suture lines) appear where each septum contacts the wall of the outer shell. In life, they are visible as a series of narrow wavy lines on the outer surface of the shell. Like their underlying septa, the sutures of the nautiloids are simple in shape, being either straight or slightly curved. This is different from the "zigzag" sutures of the goniatites and the highly complex sutures of the ammonites. The septa are perforated by the siphuncle, a fleshy tube which runs through each of the internal chambers of the shell. Surrounding the fleshy tube of the siphuncle are structures made of aragonite (a polymorph of calcium carbonate which, during fossilisation, is often recrystallized to calcite, a more stable form of calcium carbonate [CaCO3]): septal necks and connecting rings. Some of the earlier nautiloids deposited calcium carbonate in the empty chambers (called cameral deposits) or within the siphuncle (endosiphuncular deposits), a process which may have been connected with controlling buoyancy. The nature of the siphuncle and its position within the shell are important in classifying nautiloids and can help distinguish them from ammonoids. The siphuncle is on the shell periphery in most ammonoids whereas it runs through the center of the chambers in some nautiloids, including living nautiluses. The subclass Nautiloidea, in its broader definition, is distinguished from other cephalopods by two main characteristics: the septa are smoothly concave in the forward direction, producing external sutures which are generally simple and smooth; and the siphuncle is supported by septal necks which point to the rear (i.e. retrosiphonate) throughout the ontogeny of the animal. Modern nautiluses have deeply coiled shells which are involute, meaning that the larger and more recent whorls overlap and obscure older whorls. The shells of fossil nautiloids may be either straight (i.e., orthoconic as in Orthoceras and Rayonnoceras), curved (as in Cyrtoceras), coiled (as in Cenoceras), or rarely a helical coil (as in Lorieroceras). Some species' shells—especially in the late Paleozoic and early Mesozoic—are ornamented with spines and ribs, but most have a smooth shell. The shells are formed of aragonite, although the cameral deposits may consist of primary calcite. The coloration of the shell of the modern nautilus is quite prominent, and, although somewhat rarely, the shell coloration has been known to be preserved in fossil nautiloids. They often show color patterns only on the dorsal side, suggesting that the living animals swam horizontally. Modern nautiloids Much of what is known about the extinct nautiloids is based on what we know about modern nautiluses, such as the chambered nautilus, which is found in the southwest Pacific Ocean from Samoa to the Philippines, and in the Indian Ocean off the coast of Australia. It is not usually found in shallow waters and may be found at considerable depths. Nautili are free-swimming animals that possess a head with two simple lens-free eyes and arms (or tentacles). They have a smooth shell over a large body chamber, which is divided into subchambers filled with an inert gas (similar to the composition of atmospheric air, but with more nitrogen and less oxygen), making the animal neutrally buoyant in the water. As many as 90 tentacles are arranged in two circles around the mouth. 
The animal is predatory, and has jaws which are horny and beak-like, allowing it to feed on crustaceans. Empty nautilus shells may drift a considerable distance and have been reported from Japan, India and Africa. Undoubtedly the same applies to the shells of fossil nautiloids, the gas inside the shell keeping it buoyant for some time after the animal's death, allowing the empty shell to be carried some distance from where the animal lived before finally sinking to the seafloor. Nautili propel themselves by jet propulsion, expelling water from an elongated funnel called the hyponome, which can be pointed in different directions to control their movement. Unlike the belemnites and other cephalopods, modern nautili do not have an ink sac, and there is no evidence to suggest that the extinct forms possessed one either. Furthermore, unlike the extinct ammonoids, the modern nautilus lacks an aptychus, a biomineralized plate which is proposed to act as an operculum which closes the shell to protect the body. However, aptychus-like plates are known from some extinct nautiloids, and they may be homologous to the fleshy hood of a modern nautilus. Fossil record Nautiloids are often found as fossils in early Palaeozoic rocks (less so in more recent strata). The rocks of the Ordovician period along the Baltic coast and in parts of the United States contain a variety of nautiloid fossils, and specimens such as Discitoceras and Rayonnoceras may be found in the limestones of the Carboniferous period in Ireland. The marine rocks of the Jurassic period in Britain often yield specimens of Cenoceras, and nautiloids such as Eutrephoceras are also found in the Pierre Shale formation of the Cretaceous period in the north-central United States. Exceptionally long shells of the Ordovician nautiloid Endoceras have been recorded, and there is a description of one specimen estimated to have been even larger, although that specimen is reported as destroyed. These large nautiloids would have been formidable predators of other marine animals at the time they lived. In some localities, such as Scandinavia and Morocco, the fossils of orthoconic nautiloids accumulated in such large numbers that they form limestones composed of nonspecific assemblages known as cephalopod beds, cephalopod limestones, nautiloid limestones, or Orthoceras limestones in the geological literature. Although the term Orthoceras now only refers to a Baltic coast Ordovician genus, in prior times it was employed as a general name given to all straight-shelled nautiloids that lived from the Ordovician to the Triassic periods (but were most common in the early Paleozoic era). Evolutionary history Nautiloids are first known from the late Cambrian Fengshan Formation of northeastern China, where they seem to have been quite diverse (at the time this was a warm shallow sea rich in marine life). However, although four orders have been proposed from the 131 species named, there is no certainty that all of these are valid, and indeed it is likely that these taxa are seriously oversplit. Most of these early forms died out, but a single family, the Ellesmeroceratidae, survived to the early Ordovician, where it ultimately gave rise to all subsequent cephalopods. In the Early and Middle Ordovician the nautiloids underwent an evolutionary radiation. Some eight new orders appeared at this time, covering a great diversity of shell types and structure, and ecological lifestyles. 
Nautiloids remained at the height of their range of adaptations and variety of forms throughout the Ordovician, Silurian, and Devonian periods, with various straight, curved and coiled shell forms coexisting at the same time. Several of the early orders became extinct over that interval, but others rose to prominence. Nautiloids began to decline in the Devonian, perhaps due to competition with their descendants and relatives, the ammonoids and coleoids, with only the Nautilida holding their own (and indeed increasing in diversity). Their shells became increasingly tightly coiled, while both numbers and variety of non-nautilid species continued to decrease throughout the Carboniferous and Permian. The massive extinctions at the end of the Permian were less damaging to nautiloids than to other taxa, and a few groups survived into the early Mesozoic, including pseudorthocerids, bactritids, nautilids and possibly orthocerids. The last straight-shelled forms were long thought to have disappeared at the end of the Triassic, but a possible orthocerid has been found in Cretaceous rocks. Apart from this exception, only a single nautiloid suborder, the Nautilina, continued throughout the Mesozoic, where they co-existed quite happily with their more specialised ammonoid cousins. Most of these forms differed only slightly from the modern nautilus. They had a brief resurgence in the early Tertiary (perhaps filling the niches vacated by the ammonoids in the end-Cretaceous extinction), and maintained a worldwide distribution up until the middle of the Cenozoic Era. With the global cooling of the Miocene and Pliocene, their geographic distribution shrank and these hardy and long-lived animals declined in diversity again. Today there are only six living species, all belonging to two genera, Nautilus (the pearly nautilus) and Allonautilus. The recent decrease in the once worldwide distribution of nautiloids is now believed to have been caused by the spread of pinnipeds. From the Oligocene onward, the appearance of pinnipeds in the geological record of a region coincides with the disappearance of nautiloids from that region. As a result, nautiloids are now limited to their current distribution in the tropical Indo-Pacific Ocean, where pinnipeds are absent. The genus Aturia seems to have temporarily survived in regions where pinnipeds were present, through adaptations for fast and agile swimming, but it eventually went extinct as well. Predation by short-snouted whales and the development of oxygen minimum zones (OMZs), which prevented nautiloids from retreating into deeper water, are also cited as other potential causes of extinction. Timeline of orders Classification Older classification systems A consensus on nautiloid classification has traditionally been elusive and subject to change, as different workers emphasize different fundamental traits when reconstructing evolutionary events. The largest and most widely cited publication on nautiloid taxonomy is the Treatise on Invertebrate Paleontology Part K by Teichert et al. 1964, though new information has rendered this volume outdated and in need of revision. Treatise Part K was based on previous classification schemes by Flower & Kummel (1950) and the Russian Osnovy Paleontologii Vol. 5 (1962) textbook. Other comprehensive taxonomic schemes have been devised by Wade (1988), Teichert (1988), and Shevyrev (2006). Wade (1988) divided the subclass Nautiloidea (sensu lato) into six superorders, incorporating orders that are phylogenetically related. 
They are: †Plectronoceratoidea = †Plectronocerida, †Protactinocerida, †Yanhecerida, and †Ellesmerocerida. †Endoceratoidea = †Endocerida †Orthoceratoidea = †Orthocerida, †Ascocerida, and †Pseudorthocerida (the Orthoceratoidea of Kröger 2007) Nautilitoidea = †Tarphycerida, †Oncocerida, and Nautilida. †Actinoceratoidea = †Actinocerida †Discosoritoidea = †Discosorida Three of these superorders were established for orders of uncertain placement: Endocerida, Actinocerida, and Discosorida. The other three unite related orders which share a common ancestor and form a branch of the nautiloid taxonomic tree: Plectronoceratoidea, which consists mostly of small Cambrian forms that include the ancestors of subsequent stocks; Orthoceratoidea, which unites different primarily orthoconic orders (including the ancestors of Bactritida and Ammonoidea); and Nautilitoidea, which includes the first coiled cephalopods, Tarphycerida, as well as Nautilida, which includes the recent Nautilus. Another order, Bactritida, which is derived from Orthocerida, is sometimes included with Nautiloidea, sometimes with Ammonoidea, and sometimes placed in a subclass of its own, Bactritoidea. Recently some workers in the field have come to recognize Dissidocerida as a distinct order, along with Pseudorthocerida, both previously included in Orthocerida as subtaxa. Early cladistic efforts Cladistic approaches are rare in nautiloid systematics. Many nautiloid orders (not to mention the group as a whole) are not monophyletic clades, but rather paraphyletic grades. This means that they include some descendant taxa while excluding others. For example, the paraphyletic order Orthocerida includes numerous orthocerids stretching through the Paleozoic, but it excludes coleoids, despite coleoids having a well-established ancestry among the orthocerids. Interpretations by Engeser (1996–1998) suggest that nautiloids, and indeed cephalopods in general, should be split into two main clades: Palcephalopoda (including all the nautiloids except Orthocerida and Ascocerida) and Neocephalopoda (the rest of the cephalopods). Palcephalopoda is meant to correspond to groups which are closer to living nautilus, while Neocephalopoda is meant to correspond to groups closer to living coleoids. One issue with this scheme is the necessity of establishing a firm ancestry for nautilus, to contextualize which cephalopods are closer to which of the two living end members. On the basis of morphological traits, Nautilida is most similar to coiled early nautiloids such as the Tarphycerida and Oncocerida. However, these orders diverged from coleoid ancestors in the early Ordovician at the latest, while genetic divergence estimates suggest that Nautilida diverged in the Silurian or Devonian. A more recent phylogenetic study by Lindgren et al. (2004), which supports the monophyly of cephalopods, does not bear on the Palcephalopod/Neocephalopod question, since the only cephalopods included were Nautilus and coleoids. Recent revisions For an in-process revision of Treatise Part K, King & Evans (2019) reclassified nautiloids sensu lato into five subclasses. Major groups were primarily defined by variation in their muscle attachment types. Other traits referenced during this reclassification include protoconch morphology, connecting ring structure, and the extent of cameral and endosiphuncular deposits. While most previous studies referred to subclasses with the suffix '-oidea', these authors instead opted for the suffix '-ia', to prevent confusion between group levels. 
For example, Nautiloidea sensu stricto was renamed to Nautilia, to differentiate it from the informal broader definition of "nautiloid". In addition, they used the unsimplified names for orders, with the suffix '-atida' rather than the common simplified form, '-ida'. Subclass †Plectronoceratia (formerly Plectronoceratoidea) Order †Plectronoceratida Order †Yanheceratida Order †Protactinoceratida Subclass †Multiceratia (formerly Multiceratoidea) Order †Ellesmeroceratida Order †Cyrtocerinida Order †Bisonoceratida Order †Oncoceratida Order †Discosorida Subclass †Tarphyceratia Order †Tarphyceratida Order †Ascoceratida Subclass Nautilia (formerly Nautiloidea sensu stricto) Order Nautilida Subclass †Orthoceratia (formerly Orthoceratoidea) Order †Rioceratida Order †Dissidoceratida Order †Orthoceratida Order †Pseudorthoceratida Order †Actinoceratida Order †Astroviida (suborders †Lituitina and †Pallioceratina) Order †Endoceratida Traditional nautiloid classification schemes emphasize certain character traits over others, potentially introducing personal bias as to which traits different authors consider worth emphasizing. This issue may be resolved by sampling all morphological traits equally through Bayesian phylogenetic inference. The first cephalopod-focused paper to use this technique was published by Pohle et al. (2022). They recovered several previously hypothesized groups, though many orders were determined to be paraphyletic. The study was focused on early cephalopod diversification in the Late Cambrian and Ordovician, and did not discuss in detail the origin of post-Ordovician groups. Their cladogram presents a simplified picture of early cephalopod relationships to the order level (although various isolated families also originated during this diversification event).
Biology and health sciences
Cephalopods
Animals
862291
https://en.wikipedia.org/wiki/Gastornis
Gastornis
Gastornis is an extinct genus of large, flightless birds that lived during the mid-Paleocene to mid-Eocene epochs of the Paleogene period. Most fossils have been found in Europe, and some species typically referred to the genus are known from North America and Asia. Several genera, including the well-studied genus Diatryma, have historically been considered junior synonyms of Gastornis. However, this interpretation has been challenged recently, and some researchers currently consider Diatryma to be a valid genus. Gastornis species were very large birds that were traditionally thought to have been predators of various smaller mammals, such as ancient, diminutive equids. However, several lines of evidence, including the lack of hooked claws (in known Gastornis footprints), studies of their beak structure and isotopic signatures of their bones, have caused scientists to now consider that these birds were probably herbivorous, feeding on tough plant material and seeds. Gastornis is, generally, agreed to be related to the Galloanserae, the group containing waterfowl and gamebirds. History Gastornis was first described in 1855 from a fragmentary skeleton. It was named after Gaston Planté, described as a "studious young man full of zeal", who had discovered the first fossils in clay () formation deposits at Meudon, near Paris. The discovery was notable, due to the large size of the specimens, and because, at the time, Gastornis represented one of the oldest known birds. Additional bones of the first known species, G. parisiensis, were found in the mid-1860s. Somewhat more-complete specimens, then referred to the new species G. edwardsii (now considered a synonym of G. parisiensis), were found a decade later. These specimens, found in the 1870s, formed the basis for a widely- circulated and reproduced skeletal restoration by Lemoine. The skulls of these original Gastornis fossils were unknown, other than nondescript fragments and several bones used in Lemoine's illustration, which turned out to be those of other animals. Thus, this European specimen was long reconstructed as a sort of gigantic "crane-like" bird. In 1874, the American paleontologist Edward Drinker Cope discovered another fragmentary set of fossils at the Wasatch Formation, New Mexico. Cope considered the fossils to be of a distinct genus and species of giant ground bird; in 1876, he named the remains Diatryma gigantea ( ), from the Ancient Greek διάτρημα (diatrema), meaning "through a hole", in reference to the large foramina (perforations) that penetrated some of the foot bones. In 1894, a single gastornithid toe bone from New Jersey was described by Cope's "rival" Othniel Charles Marsh, and classified as a new genus and species: Barornis regens. In 1911, it was recognized that this, too, could be considered a junior synonym of Diatryma (and therefore, later, Gastornis). Additional, fragmentary specimens were found in Wyoming in 1911, and assigned (in 1913) to the new species Diatryma ajax (also now considered a synonym of G. giganteus). In 1916, an American Museum of Natural History expedition to the Bighorn Basin (Willwood Formation, Wyoming) found the first nearly-complete skull and skeleton, which was described in 1917 and gave scientists their first clear picture of the bird. Matthew, Granger, and Stein (1917) classified this specimen as yet another new species, Diatryma steini. 
After the description of Diatryma, most new European specimens were referred to this genus, instead of Gastornis; however, after the initial discovery of Diatryma, researchers recognized the similarity between the two genera as early as 1884 when Elliott Coues placed Diatryma gigantea under the genus Gastornis as G. giganteus, a synonymy agreed upon by the American Ornithologists' Union in 1886. Further meaningful comparisons between Gastornis and Diatryma were made more difficult by Lemoine's incorrect skeletal illustration, the composite nature of which was not discovered until the early 1980s. Following this, several authors began to recognize a greater degree of similarity between the European and North American birds, often placing both in the same order (†Gastornithiformes) or even family (†Gastornithidae). This newly-realized degree of similarity caused many scientists to, tentatively, accept the animals' synonymy pending a comprehensive review of the anatomy of both genera, in which Gastornis has the taxonomic priority. Some subsequent studies either continued to use the genus Diatryma or argued against the synonymy, since a detailed comparison of type specimens has not been done yet and notable differences can be found in the species originally assigned to Diatryma from the type species of Gastornis. Description Gastornis is known from a large amount of fossil remains, but the clearest picture of the bird comes from a few nearly complete specimens of the species G. giganteus. These were generally very large birds, with huge beaks and massive skulls superficially similar to the carnivorous South American "terror birds" (phorusrhacids). The largest known species, G. giganteus would have reached about in maximum height, and up to in mass. The skull of G. giganteus was huge compared to the body and powerfully built. The beak was extremely tall and compressed (flattened from side to side). Unlike other species of Gastornis, G. giganteus lacked characteristic grooves and pits on the underlying bone. The 'lip' of the beak was straight, without a raptorial hook as found in the predatory phorusrhacids. The nostrils were small and positioned close to the front of the eyes about midway up the skull. The vertebrae were short and massive, even in the neck. The neck was relatively short, consisting of at least 13 massive vertebrae. The torso was relatively short. The wings were vestigial, with the upper wing-bones small and highly reduced, similar in proportion to the wings of the cassowary. A largely complete skull specimen (GMH XVIII-1178-1958) of G. geiselensis was also described in 2024 after its discovery in 1958. The upper beaks of G. geiselensis show possible sexual dimorphism and are wider than those of G. giganteus and proportionally longer than those of G. laurenti. Classification Gastornis and its close relatives are classified together in the family Gastornithidae, and were long considered to be members of the order Gruiformes. However, the traditional concept of Gruiformes has since been shown to be an unnatural grouping. Beginning in the late 1980s with the first phylogenetic analysis of gastornithid relationships, consensus began to grow that they were close relatives of the lineage that includes waterfowl and screamers, the Anseriformes. A 2007 study showed that gastornithids were a very early-branching group of anseriformes, and formed the sister group to all other members of that lineage. 
Recognizing the apparent close relationship between gastornithids and waterfowl, some researchers classify gastornithids within the anseriform group itself. Others restrict the name Anseriformes only to the crown group formed by all modern species, and label the larger group including extinct relatives of anseriformes, like the gastornithids, with the name Anserimorphae. Gastornithids are therefore sometimes placed in their own order, Gastornithiformes. A 2024 study, however, found little support for Gastornithiformes and instead places Gastornis as a member of the Galliformes crown group, as more closely related to Phasianoidea than to megapodes, being sister to the extinct Sylviornithidae, a recently extinct group of medium-sized flightless birds known from subfossil deposits in the Western Pacific. A simplified version of the family tree found by Agnolin et al. in 2007 is reproduced below. As of 2024, at least three species are confidently placed within the genus Gastornis: G. parisiensis (type species), G. russelli and G. laurenti. The type species, Gastornis parisiensis, was named and described by Hébert in two 1855 papers. It is known from fossils found in western and central Europe, dating from the late Paleocene to the early Eocene. Other species previously considered distinct, but which are now considered synonymous with G. parisiensis, include G. edwardsii (Lemoine, 1878) and G. klaasseni (Newton, 1885). Additional European species of Gastornis are G. russelli (Martin, 1992) from the late Paleocene of Berru, France, and G. sarasini (Schaub, 1929) from the early-middle Eocene. The supposed small species G. minor is considered to be a nomen dubium. Named in 2020, G. laurenti is the most recently described species of Gastornis from southwestern France. The holotype (MHNT.PAL.2018.20.1) is a nearly complete mandible which differs from other species within the genus, and the paratypes consist of the maxilla, right quadrate, femur shaft, tibiotarsus (two left and one right) and six cervical vertebrae. A 2024 study attributed more postcranial remains from the same locality to G. laurenti. Possible species and synonyms Gastornis giganteus (Cope, 1876), formerly Diatryma gigantea, dates from the middle Eocene of western North America. Its junior synonyms include Barornis regens (Marsh, 1894) and possibly Omorhamphus storchii (Sinclair, 1928). O. storchii was described based on fossils from lower Eocene rocks of Wyoming. The species was named in honor of T. C. von Storch, who found the fossils remains in Princeton 1927 Expedition. The fossil bones originally described as Omorhamphus storchii are considered to be the remains of a juvenile Gastornis giganteus by Brodkorb (1967), but Louchart et al. (2021) argued that no definitive juvenile specimens of G. giganteus are known and that the two taxa have no known association, so there is no unambiguous evidence to support this synonymy. Specimen YPM PU 13258 from lower Eocene Willwood Formation rocks of Park County, Wyoming also seems to be a juvenile – perhaps also of G. giganteus, in which case it would be an even younger individual. G. geiselensis, from the middle Eocene of Messel, Germany, has been considered a synonym of G. sarasini; however, other researchers have stated that there is currently insufficient evidence to synonymize the two, and that they should be kept separate at least pending a more detailed comparison of all gastornithids. 
In 2024, Gerald Mayr and colleagues argued against the synonymy of Diatryma with Gastornis based on the distinct features of the coracoid and tarsometatarsus of G. giganteus and G. geiselensis, referred to as D. gigantea and D. geiselensis in the paper, when compared to those of G. parisiensis. They further suggested that these two features support the placement of G. sarasini within Diatryma as D. sarasini, and that assigning all species of gastornithiforms to the genus Gastornis would not properly reflect the interrelationships of this taxonomic group. A simplified version of their phylogenetic analysis is reproduced below: A tibiotarsus (upper foot bone) originally described in 1980 as Zhongyuanus xichuanensis from the early Eocene of Henan, China, was suggested to be an Asian species of Gastornis in 2013. However, the 2024 study which argued against the synonymy of Diatryma with Gastornis suggested that this fragmentary Chinese taxon cannot be confidently assigned to either Diatryma or Gastornis, and thus more evaluation is required to clarify its taxonomic affinities. Paleobiology Diet A long-standing debate surrounding Gastornis is the interpretation of its diet. It has often been depicted as a predator of contemporary small mammals, which famously included the early horse Eohippus. However, with the size of Gastornis legs, the bird would have had to have been more agile to catch fast-moving prey than the fossils suggest it to have been. Consequently, Gastornis has been suspected to have been an ambush hunter and/or used pack hunting techniques to pursue or ambush prey; if Gastornis was a predator, it would have certainly needed some other means of hunting prey through the dense forest. Alternatively, it could have used its strong beak for eating large or strong vegetation. The skull of Gastornis is massive in comparison to those of living ratites of similar body size. Biomechanical analysis of the skull suggests that the jaw-closing musculature was enormous. The lower jaw is very deep, resulting in a lengthened moment arm of the jaw muscles. Both features strongly suggest that Gastornis could generate a powerful bite. Some scientists have proposed that the skull of Gastornis was ‘overbuilt’ for a herbivorous diet and support the traditional interpretation of Gastornis as a carnivore that used its powerfully constructed beak to subdue struggling prey and crack open bones to extract marrow. Others have noted the apparent lack of predatory features in the skull, such as a prominently hooked beak, as evidence that Gastornis was a specialized herbivore (or even an omnivore) of some sort, perhaps having used its large beak to crack hard foods like nuts and seeds. Footprints attributed to gastornithids (possibly a species of Gastornis itself), described in 2012, showed that these birds lacked strongly hooked talons on the hind legs, another line of evidence suggesting that they did not have a predatory lifestyle. Recent evidence suggests that Gastornis was likely a true herbivore. Studies of the calcium isotopes in the bones of specimens of Gastornis by Thomas Tutken and colleagues showed no evidence that it had meat in its diet. The geochemical analysis further revealed that its dietary habits were similar to those of both herbivorous dinosaurs and mammals when it was compared to known fossil carnivores, such as Tyrannosaurus rex, leaving phorusrhacids and bathornithids as the only major carnivorous flightless birds. The first in situ preserved gastroliths in a specimen of G. 
geiselensis (or D. geiselensis) also conforms to its herbivorous diet. Eggs In Late Paleocene deposits of Spain and early Eocene deposits of France, shell fragments of huge eggs have turned up, namely in Provence. These were described as the ootaxon Ornitholithus and are presumably from Gastornis. While no direct association exists between Ornitholithus and Gastornis fossils, no other birds of sufficient size are known from that time and place; while the large Diogenornis and Eremopezus are known from the Eocene, the former lived in South America (still separated from North America by the Tethys Ocean then) and the latter is only known from the Late Eocene of North Africa, which also was separated by an (albeit less wide) stretch of the Tethys Ocean from Europe. Some of these fragments were complete enough to reconstruct a size of 24 by 10 cm (about 9.5 by 4 inches) with shells 2.3–2.5 mm (0.09–0.1 in) thick, roughly half again as large as an ostrich egg and very different in shape from the more rounded ratite eggs. If Remiornis is indeed correctly identified as a ratite (which is quite doubtful, however), Gastornis remains as the only known animal that could have laid these eggs. At least one species of Remiornis is known to have been smaller than Gastornis, and was initially described as Gastornis minor by Mlíkovský in 2002. This would nicely match the remains of eggs a bit smaller than those of the living ostrich, which have also been found in Paleogene deposits of Provence, were it not for the fact that these eggshell fossils also date from the Eocene, but no Remiornis bones are yet known from that time. Footprints Several sets of fossil footprints are suspected to belong to Gastornis. One set of footprints was reported from late Eocene gypsum at Montmorency and other locations of the Paris Basin in the 19th century, from 1859 onwards. Described initially by Jules Desnoyers, and later on by Alphonse Milne-Edwards, these trace fossils were celebrated among French geologists of the late 19th century. They were discussed by Charles Lyell in his Elements of Geology as an example of the incompleteness of the fossil record – no bones had been found associated with the footprints. Unfortunately, these fine specimens, which sometimes even preserved details of the skin structure, are now lost. They were brought to the Muséum national d'histoire naturelle when Desnoyers started to work there, and the last documented record of them deals with their presence in the geology exhibition of the MNHN in 1912. The largest of these footprints, although only consisting of a single toe's impression, was 40 cm (16 in) long. The large footprints from the Paris Basin could also be divided into huge and merely large examples, much like the eggshells from southern France, which are 20 million years older. Another footprint record consists of a single imprint that still exists, though it has proven to be even more controversial. It was found in late Eocene Puget Group rocks in the Green River valley near Black Diamond, Washington. After its discovery, it raised considerable interest in the Seattle area in May–July 1992, being subject of at least two longer articles in the Seattle Times. Variously declared a hoax or genuine, this apparent impression of a single bird foot measures about wide by long and lacks a hallux (hind toe); it was described as the ichnotaxon Ornithoformipes controversus. Fourteen years after the initial discovery, the debate about the find's authenticity was still unresolved. 
The specimen is now at Western Washington University.The problem with these early trace fossils is that no fossil of Gastornis has been found to be younger than about 45 million years. In North America, the fossil record of unequivocal gastornithids seems to end even earlier than in Europe. However, in 2009, a landslide near Bellingham, Washington exposed at least 18 tracks on 15 blocks in the Eocene Chuckanut Formation. The anatomy and age (about 53.7 Ma old) of the tracks suggest that the track maker was Gastornis. Although these birds have long been considered to be predators or scavengers, the absence of raptor-like claws supports earlier suggestions that they were herbivores. The Chuckanut tracks are named as the ichnotaxon Rivavipes giganteus, inferred to belong to the extinct family Gastornithidae. At least 10 of the tracks are on display at Western Washington University. Feathers The plumage of Gastornis has generally been depicted in art as a hair-like covering similar to some ratites. This has been based in part on some fibrous strands recovered from a Green River Formation deposit at Roan Creek, Colorado, which were initially believed to represent Gastornis feathers and named Diatryma? filifera. Subsequent examination has shown the fossil material to not actually be feathers, but root fibers and the species renamed as Cyperacites filiferus. A second possible Gastornis feather has since been identified, also from the Green River Formation. Unlike the filamentous plant material, this single isolated feather resembles the body feathers of flighted birds, being broad and vaned. It was tentatively identified as a possible Gastornis feather based on its size; the feather measured long and must have belonged to a gigantic bird. Distribution It has been argued that Gastornis has a Holarctic distribution with fossils found in western Europe, North America (including an indeterminate specimen identified as Gastornis sp. from Arctic Canada), and possibly central China. The earliest (Paleocene) fossils all come from Europe, and it is likely that the genus originated there. Europe in this epoch was an island continent, and Gastornis was the largest terrestrial tetrapod of the landmass. This offers parallels with the Malagasy elephant birds, herbivorous birds that were similarly the largest land animals in the isolated landmass of Madagascar, in spite of otherwise mammalian megafauna. All other fossil remains are from the Eocene, though it is currently unknown how the genus Gastornis dispersed out of Europe and into other continents, and whether such assertion is even true given the potential validity of Diatryma. Given the possible presence of Gastornis fossils in the early Eocene of western China, these birds may have spread east from Europe and crossed into North America via the Bering land bridge. Gastornis also may have spread both east and west, arriving separately in eastern Asia and in North America across the Turgai Strait. Direct landbridges with North America are also known. European Gastornis survived somewhat longer than their North American counterparts, which seems to coincide with a period of increased isolation of the continent. Extinction The reason for the extinction of Gastornis is currently unclear. Competition with mammals has often been cited as a possible factor, but Gastornis did occur in faunas dominated by mammals, and did co-exist with several megafaunal forms like pantodonts. 
Likewise, extreme climatic events like the Paleocene–Eocene Thermal Maximum (PETM) appear to have had little impact. Nonetheless, the extended survival in Europe is thought to coincide with increased isolation of the landmass.
Biology and health sciences
Prehistoric birds
Animals
862361
https://en.wikipedia.org/wiki/H%C3%BCckel%27s%20rule
Hückel's rule
In organic chemistry, Hückel's rule predicts that a planar ring molecule will have aromatic properties if it has 4n + 2 π-electrons, where n is a non-negative integer. The quantum mechanical basis for its formulation was first worked out by physical chemist Erich Hückel in 1931. The succinct expression as the 4n + 2 rule has been attributed to W. v. E. Doering (1951), although several authors were using this form at around the same time. In agreement with the Möbius–Hückel concept, a cyclic ring molecule follows Hückel's rule when the number of its π-electrons equals 4n + 2, although clearcut examples are really only established for values of n = 0 up to about n = 6. Hückel's rule was originally based on calculations using the Hückel method, although it can also be justified by considering a particle in a ring system, by the LCAO method and by the Pariser–Parr–Pople method. Aromatic compounds are more stable than theoretically predicted using hydrogenation data of simple alkenes; the additional stability is due to the delocalized cloud of electrons, called resonance energy. Criteria for simple aromatics are: the molecule must have 4n + 2 (a so-called "Hückel number") π electrons (2, 6, 10, ...) in a conjugated system of p orbitals (usually on sp2-hybridized atoms, but sometimes sp-hybridized); the molecule must be (close to) planar (p orbitals must be roughly parallel and able to interact, implicit in the requirement for conjugation); the molecule must be cyclic (as opposed to linear); the molecule must have a continuous ring of p atomic orbitals (there cannot be any sp3 atoms in the ring, nor do exocyclic p orbitals count). Monocyclic hydrocarbons The rule can be used to understand the stability of completely conjugated monocyclic hydrocarbons (known as annulenes) as well as their cations and anions. The best-known example is benzene (C6H6) with a conjugated system of six π electrons, which equals 4n + 2 for n = 1. The molecule undergoes substitution reactions which preserve the six π electron system rather than addition reactions which would destroy it. The stability of this π electron system is referred to as aromaticity. Still, in most cases, catalysts are necessary for substitution reactions to occur. The cyclopentadienyl anion () with six π electrons is planar and readily generated from the unusually acidic cyclopentadiene (pKa 16), while the corresponding cation with four π electrons is destabilized, being harder to generate than a typical acyclic pentadienyl cations and is thought to be antiaromatic. Similarly, the tropylium cation (), also with six π electrons, is so stable compared to a typical carbocation that its salts can be crystallized from ethanol. On the other hand, in contrast to cyclopentadiene, cycloheptatriene is not particularly acidic (pKa 37) and the anion is considered nonaromatic. The cyclopropenyl cation () and the triboracyclopropenyl dianion () are considered examples of a two π electron system, which are stabilized relative to the open system, despite the angle strain imposed by the 60° bond angles. Planar ring molecules with 4n π electrons do not obey Hückel's rule, and theory predicts that they are less stable and have triplet ground states with two unpaired electrons. In practice, such molecules distort from planar regular polygons. Cyclobutadiene (C4H4) with four π electrons is stable only at temperatures below 35 K and is rectangular rather than square. Cyclooctatetraene (C8H8) with eight π electrons has a nonplanar "tub" structure. 
However, the dianion (cyclooctatetraenide anion), with ten π electrons, obeys the 4n + 2 rule for n = 2 and is planar, while the 1,4-dimethyl derivative of the dication, with six π electrons, is also believed to be planar and aromatic. The cyclononatetraenide anion () is the largest all-cis monocyclic annulene/annulenyl system that is planar and aromatic. Its bond angles (140°) differ significantly from the ideal angle of 120°. Larger rings possess trans bonds to avoid the increased angle strain. However, 10- to 14-membered systems all experience considerable transannular strain. Thus, these systems are either nonaromatic or experience modest aromaticity. This changes when we get to [18]annulene, with (4×4) + 2 = 18 π electrons, which is large enough to accommodate six interior hydrogen atoms in a planar configuration (3 cis double bonds and 6 trans double bonds). Thermodynamic stabilization, NMR chemical shifts, and nearly equal bond lengths all point to considerable aromaticity for [18]annulene. The (4n+2) rule is a consequence of the degeneracy of the π orbitals in cyclic conjugated hydrocarbon molecules. As predicted by Hückel molecular orbital theory, the lowest π orbital in such molecules is non-degenerate and the higher orbitals form degenerate pairs. Benzene's lowest π orbital is non-degenerate and can hold 2 electrons, and its next 2 π orbitals form a degenerate pair which can hold 4 electrons. Its 6 π electrons therefore form a stable closed shell in a regular hexagonal molecule. However, for cyclobutadiene or cyclooctatetraene with regular geometries, the highest occupied molecular orbital pair contains only 2 π electrons, forming a less stable open shell. The molecules therefore stabilize by geometrical distortions which separate the degenerate orbital energies so that the last two electrons pair in the same orbital, although the distorted molecule remains less stable than an aromatic system. Heteroatoms Hückel's rule can also be applied to molecules containing other atoms such as nitrogen or oxygen. For example, pyridine (C5H5N) has a ring structure similar to benzene, except that one -CH- group is replaced by a nitrogen atom with no hydrogen. There are still six π electrons and the pyridine molecule is also aromatic and known for its stability. Polycyclic hydrocarbons Hückel's rule is not valid for many compounds containing more than one ring. For example, pyrene and trans-bicalicene contain 16 conjugated electrons (8 bonds), and coronene contains 24 conjugated electrons (12 bonds). All of these polycyclic molecules are aromatic, even though they fail the 4n + 2 rule. Indeed, Hückel's rule can only be theoretically justified for monocyclic systems. Three-dimensional rule In 2000, Andreas Hirsch and coworkers in Erlangen, Germany, formulated a rule to determine when a spherical compound will be aromatic. They found that closed-shell compounds were aromatic when they had 2(n + 1)² π-electrons, for instance the buckminsterfullerene species C60¹⁰⁺. In 2011, Jordi Poater and Miquel Solà expanded the rule to open-shell spherical compounds, finding they were aromatic when they had 2n² + 2n + 1 π-electrons, with spin S = n + 1/2, corresponding to a half-filled last energy level with the same spin. For instance, C60¹⁻ is also observed to be aromatic with a spin of 11/2.
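The electron-counting tests above lend themselves to a short worked example. The following Python sketch (a minimal illustration using the counts quoted above, not a published implementation; the function names are invented for the example) checks whether a π-electron count is a Hückel number 4n + 2 and whether it satisfies Hirsch's closed-shell spherical count 2(n + 1)²:

def is_huckel_number(pi_electrons: int) -> bool:
    """True if the count equals 4n + 2 for some non-negative integer n."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

def is_hirsch_number(pi_electrons: int) -> bool:
    """True if the count equals 2(n + 1)^2 for some non-negative integer n (closed-shell spherical case)."""
    n = 0
    while 2 * (n + 1) ** 2 <= pi_electrons:
        if 2 * (n + 1) ** 2 == pi_electrons:
            return True
        n += 1
    return False

examples = {
    "benzene": 6,                  # aromatic, n = 1
    "cyclopentadienyl anion": 6,   # aromatic, n = 1
    "cyclobutadiene": 4,           # 4n electrons, antiaromatic
    "[18]annulene": 18,            # aromatic, n = 4
}
for name, count in examples.items():
    print(f"{name}: {count} pi electrons, Hückel number: {is_huckel_number(count)}")

# Buckminsterfullerene cation C60(10+): 60 - 10 = 50 pi electrons = 2(4 + 1)^2
print("C60(10+) spherical aromatic count:", is_hirsch_number(50))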
Physical sciences
Aromatic hydrocarbons
Chemistry
862621
https://en.wikipedia.org/wiki/Fumigation
Fumigation
Fumigation is a method of pest control or the removal of harmful microorganisms by completely filling an area with gaseous pesticides, or fumigants, to suffocate or poison the pests within. It is used to control pests in buildings (structural fumigation), soil, grain, and produce. Fumigation is also used during the processing of goods for import or export to prevent the transfer of exotic organisms. Structural fumigation targets pests inside buildings (usually residences), including pests that inhabit the physical structure itself, such as woodborers and drywood termites. Commodity fumigation, on the other hand, is also to be conducted inside a physical structure, such as a storage unit, but it aims to eliminate pests from infesting physical goods, usually food products, by killing pests within the container which will house them. Each fumigation lasts for a certain duration. This is because after spraying the pesticides, or fumigants, only the pests around are eradicated. Process Fumigation generally involves the following phases: first, humans are evacuated from the area intended for fumigation and the area covered to create a sealed environment. Next, the fumigant is released into the space to be fumigated. The space is held for a set period while the fumigant gas percolates through the space and acts on/kills any infestation in the area. Finally, the space is ventilated so that the poisonous gases are allowed to escape from the space, rendering it safe for humans to enter. If successful, the fumigated area is now safe and pest free. Tent fumigation Structural fumigation techniques differ from building to building. In a residential setting, a "rubber" tent or tents, typically made of plastic/pvc coated canvas material, may be placed over the entire structure while the pesticides are being released into the vacant structure. This process is called tent fumigation, or "tenting". The sealed tent contains the poisonous gases and prevents them from escaping into the environment. This process is commonly used for the treatment of drywood termites and/or bedbugs, using sulfuryl fluoride as the pesticide (sulfuryl fluoride is a naturally occurring gas, used in much higher concentration than found in the natural atmosphere, and which leaves no physical residue). The fumigated structure can be re-occupied after the tent has been removed and the pesticide has dissipated to a safe level, with no need for physical cleaning. Operating theatres Fumigation of hospital rooms with high concentrations of toxic chemicals has been proposed to reduce microbial agents on hospital surfaces and to control surgical site infections. Formaldehyde fumigation has long been an accepted method for areas where microbiological cleanliness is required. Fumigation with formaldehyde vapor is the recognized and most commonly used method because it is a cost-effective procedure. However, alternative methods are sought due to safety and efficacy concerns. Vaporized hydrogen peroxide is a dry gaseous method that has been used as a reliable alternative for aseptic processing isolators, and more recently, for room/facility decontamination. Hydrogen peroxide and silver in solution and diluted in water is a non-toxic and low cost agent. For example, to fumigate a 1000ft3 (~28.32 m3) area, a 20% solution (200mL of solution in 1000mL demineralized water) would be sprayed via fogger for 30 minutes. Fogging may be done at a rate of up to 130mL/minute and the contact time should be at least one hour. 
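As a rough illustration of the figures quoted above, the sketch below (a simplified estimate under the assumption that the dose scales linearly with room volume; it is not a substitute for a product's dosing instructions) scales the hydrogen peroxide and silver recipe to an arbitrary room size and estimates the minimum fogging time at the quoted maximum rate:

FT3_PER_M3 = 35.3147

def fogging_plan(room_volume_m3: float,
                 concentrate_ml_per_1000ft3: float = 200.0,
                 water_ml_per_1000ft3: float = 1000.0,
                 fogger_rate_ml_per_min: float = 130.0):
    """Return (concentrate mL, water mL, minimum fogging time in minutes).

    Linear scaling with room volume is an assumption made for illustration only.
    """
    volume_ft3 = room_volume_m3 * FT3_PER_M3
    scale = volume_ft3 / 1000.0
    concentrate = concentrate_ml_per_1000ft3 * scale
    water = water_ml_per_1000ft3 * scale
    minutes = (concentrate + water) / fogger_rate_ml_per_min
    return concentrate, water, minutes

# Example: a 50 m3 room
c, w, t = fogging_plan(50.0)
print(f"concentrate: {c:.0f} mL, water: {w:.0f} mL, fogging time at max rate: {t:.1f} min")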
Chemicals At the heart of this technology is the use of chemicals. Ideally, these chemicals kill or passivate the targeted creatures without harming others. Usually such a feat is impossible, so fumigation is conducted in the absence of humans. Discontinued or rarely used Many chemicals have been discontinued owing to safety issues: ethylene dibromide (carcinogenic); methallyl chloride (carcinogenic); dazomet (a methyl isothiocyanate precursor); DBCP; formaldehyde (carcinogenic and explosive); hydrogen cyanide (extremely toxic); iodoform (expensive compared with methyl bromide); and methyl isocyanate. Safety Fumigation is a hazardous operation. Generally it is a legal requirement that the operator who carries out the fumigation holds official certification, as the chemicals used are toxic to most forms of life, including humans. Post-operation ventilation of the area is a critical safety aspect of fumigation. It is important to distinguish between the pack or source of the fumigant gas and the environment which has been fumigated. While the fumigant pack may be safe and spent, the space will still hold the fumigant gas until it has been ventilated.
Technology
Pest and disease control
null
862627
https://en.wikipedia.org/wiki/Cadmium%20sulfide
Cadmium sulfide
Cadmium sulfide is the inorganic compound with the formula CdS. Cadmium sulfide is a yellow salt. It occurs in nature with two different crystal structures as the rare minerals greenockite and hawleyite, but is more prevalent as an impurity substituent in the similarly structured zinc ores sphalerite and wurtzite, which are the major economic sources of cadmium. As a compound that is easy to isolate and purify, it is the principal source of cadmium for all commercial applications. Its vivid yellow color led to its adoption as a pigment for the yellow paint "cadmium yellow" in the 1800s. Production Cadmium sulfide can be prepared by the precipitation from soluble cadmium(II) salts with sulfide ion. This reaction has been used for gravimetric analysis and qualitative inorganic analysis.The preparative route and the subsequent treatment of the product, affects the polymorphic form that is produced (i.e., cubic vs hexagonal). It has been asserted that chemical precipitation methods result in the cubic zincblende form. Pigment production usually involves the precipitation of CdS, the washing of the solid precipitate to remove soluble cadmium salts followed by calcination (roasting) to convert it to the hexagonal form followed by milling to produce a powder. When cadmium sulfide selenides are required the CdSe is co-precipitated with CdS and the cadmium sulfoselenide is created during the calcination step. Cadmium sulfide is sometimes associated with sulfate reducing bacteria. Routes to thin films of CdS Special methods are used to produce films of CdS as components in some photoresistors and solar cells. In the chemical bath deposition method, thin films of CdS have been prepared using thiourea as the source of sulfide anions and an ammonium buffer solution to control pH: Cd2+ + H2O + (NH2)2CS + 2 NH3 → CdS + (NH2)2CO + 2 NH4+ Cadmium sulfide can be produced using metalorganic vapour phase epitaxy and MOCVD techniques by the reaction of dimethylcadmium with diethyl sulfide: Cd(CH3)2 + Et2S → CdS + CH3CH3 + C4H10 Other methods to produce films of CdS include Sol–gel techniques Sputtering Electrochemical deposition Spraying with precursor cadmium salt, sulfur compound and dopant Screen printing using a slurry containing dispersed CdS Reactions Cadmium sulfide can be dissolved in acids. CdS + 2 HCl → CdCl2 + H2S When solutions of sulfide containing dispersed CdS particles are irradiated with light, hydrogen gas is generated: H2S → H2 + S ΔfH = +9.4 kcal/mol The proposed mechanism involves the electron/hole pairs created when incident light is absorbed by the cadmium sulfide followed by these reacting with water and sulfide: Production of an electron–hole pair CdS + hν → e− + h+ Reaction of electron 2e− + 2H2O → H2 + 2OH− Reaction of hole 2h+ + S2− → S Structure and physical properties Cadmium sulfide has, like zinc sulfide, two crystal forms. The more stable hexagonal wurtzite structure (found in the mineral Greenockite) and the cubic zinc blende structure (found in the mineral Hawleyite). In both of these forms the cadmium and sulfur atoms are four coordinate. There is also a high pressure form with the NaCl rock salt structure. Cadmium sulfide is a direct band gap semiconductor (gap 2.42 eV). The proximity of its band gap to visible light wavelengths gives it a coloured appearance. 
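The connection between the 2.42 eV band gap and the visible colour can be made concrete with a short calculation (an illustrative sketch using only the standard relation λ = hc/E; the constants and function name are not from the article):

H_PLANCK = 6.62607015e-34   # J s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # J per electronvolt

def absorption_edge_nm(band_gap_ev: float) -> float:
    """Wavelength (nm) of photons whose energy equals the band gap."""
    return H_PLANCK * C_LIGHT / (band_gap_ev * EV) * 1e9

edge = absorption_edge_nm(2.42)
print(f"CdS absorption edge ~ {edge:.0f} nm")
# Photons with wavelengths shorter than ~512 nm (blue/green) are absorbed while longer
# ones are transmitted, consistent with the yellow appearance described above.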
As well as this obvious property, other properties result: the conductivity increases when irradiated (leading to uses as a photoresistor); when combined with a p-type semiconductor it forms the core component of a photovoltaic (solar) cell, and a CdS/Cu2S solar cell was one of the first efficient cells to be reported (1954); when doped with, for example, Cu+ ("activator") and Al3+ ("coactivator"), CdS luminesces under electron beam excitation (cathodoluminescence) and is used as a phosphor; both polymorphs are piezoelectric and the hexagonal form is also pyroelectric; CdS shows electroluminescence; and CdS crystals can act as a gain medium in solid-state lasers. In thin-film form, CdS can be combined with other layers for use in certain types of solar cells. CdS was also one of the first semiconductor materials to be used for thin-film transistors (TFTs). However, interest in compound semiconductors for TFTs largely waned after the emergence of amorphous silicon technology in the late 1970s. Thin films of CdS can be piezoelectric and have been used as transducers which can operate at frequencies in the GHz region. Nanoribbons of CdS show a net cooling due to the annihilation of phonons during anti-Stokes luminescence at ~510 nm. As a result, a maximum temperature drop of 40 and 15 K has been demonstrated when the nanoribbons are pumped with a 514 or 532 nm laser. Applications Pigment CdS is used as a pigment in plastics, showing good thermal stability, light and weather fastness, chemical resistance and high opacity. As a pigment, CdS is known as cadmium yellow (CI pigment yellow 37). About 2000 tons were produced annually as of 1982, representing about 25% of the cadmium processed commercially. Historical use in art The general commercial availability of cadmium sulfide from the 1840s led to its adoption by artists, notably Van Gogh, Monet (in his London series and other works) and Matisse (Bathers by a River, 1916–1919). The presence of cadmium in paints has been used to detect forgeries in paintings alleged to have been produced prior to the 19th century. CdS-CdSe solutions CdS and CdSe form solid solutions with each other. Increasing amounts of cadmium selenide give pigments verging toward red, for example CI pigment orange 20 and CI pigment red 108. Such solid solutions are components of photoresistors (light dependent resistors) sensitive to visible and near infrared light. Safety Cadmium sulfide is toxic, especially dangerous when inhaled as dust, and cadmium compounds in general are classified as carcinogenic. Problems of biocompatibility have been reported when CdS is used as a colorant in tattoos. CdS has an LD50 of approximately 7,080 mg/kg in rats, which is higher than that of other cadmium compounds due to its low solubility.
Physical sciences
Sulfide salts
Chemistry
862694
https://en.wikipedia.org/wiki/Photometer
Photometer
A photometer is an instrument that measures the strength of electromagnetic radiation in the range from ultraviolet to infrared and including the visible spectrum. Most photometers convert light into an electric current using a photoresistor, photodiode, or photomultiplier. Photometers measure: Illuminance Irradiance Light absorption Scattering of light Reflection of light Fluorescence Phosphorescence Luminescence Historically, photometry was done by estimation, comparing the luminous flux of a source with a standard source. By the 19th century, common photometers included Rumford's photometer, which compared the depths of shadows cast by different light sources, and Ritchie's photometer, which relied on equal illumination of surfaces. Another type was based on the extinction of shadows. Modern photometers utilize photoresistors, photodiodes or photomultipliers to detect light. Some models employ photon counting, measuring light by counting individual photons. They are especially useful in areas where the irradiance is low. Photometers have wide-ranging applications including photography, where they determine the correct exposure, and science, where they are used in absorption spectroscopy to calculate the concentration of substances in a solution, infrared spectroscopy to study the structure of substances, and atomic absorption spectroscopy to determine the concentration of metals in a solution. History Before electronic light sensitive elements were developed, photometry was done by estimation by the eye. The relative luminous flux of a source was compared with a standard source. The photometer is placed such that the illuminance from the source being investigated is equal to the standard source, as the human eye can judge equal illuminance. The relative luminous fluxes can then be calculated as the illuminance decreases proportionally to the inverse square of distance. A standard example of such a photometer consists of a piece of paper with an oil spot on it that makes the paper slightly more transparent. When the spot is not visible from either side, the illuminance from the two sides is equal. By 1861, three types were in common use. These were Rumford's photometer, Ritchie's photometer, and photometers that used the extinction of shadows, which was considered to be the most precise. Rumford's photometer Rumford's photometer (also called a shadow photometer) depended on the principle that a brighter light would cast a deeper shadow. The two lights to be compared were used to cast a shadow onto paper. If the shadows were of the same depth, the difference in distance of the lights would indicate the difference in intensity (e.g. a light twice as far would be four times the intensity). Ritchie's photometer Ritchie's photometer depends upon equal illumination of surfaces. It consists of a box (a,b) six or eight inches long, and one in width and depth. In the middle, a wedge of wood (f,e,g) was angled upwards and covered with white paper. The user's eye looked through a tube (d) at the top of a box. The height of the apparatus was also adjustable via the stand (c). The lights to compare were placed at the side of the box (m, n)—which illuminated the paper surfaces so that the eye saw both surfaces at once. By changing the position of the lights, they were made to illuminate both surfaces equally, with the difference in intensity corresponding to the square of the difference in distance. 
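The comparison underlying both Rumford's and Ritchie's instruments can be stated as a one-line calculation. The following sketch (an illustration of the inverse-square relationship described above, not code from any photometric standard) converts the distances at which two sources give equal illuminance into their relative output:

def relative_output(d_unknown: float, d_standard: float) -> float:
    """Ratio of the unknown source's output to the standard's, given the balance distances."""
    return (d_unknown / d_standard) ** 2

# Example from the shadow-photometer description: a source balanced at twice the distance
# of the standard must be four times as intense.
print(relative_output(2.0, 1.0))   # -> 4.0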
Method of extinction of shadows This type of photometer depended on the fact that if a light throws the shadow of an opaque object onto a white screen, there is a certain distance that, if a second light is brought there, obliterates all traces of the shadow. Principle of photometers Most photometers detect the light with photoresistors, photodiodes or photomultipliers. To analyze the light, the photometer may measure the light after it has passed through a filter or through a monochromator for determination at defined wavelengths or for analysis of the spectral distribution of the light. Photon counting Some photometers measure light by counting individual photons rather than incoming flux. The operating principles are the same but the results are given in units such as photons/cm2 or photons·cm−2·sr−1 rather than W/cm2 or W·cm−2·sr−1. Due to their individual photon counting nature, these instruments are limited to observations where the irradiance is low. The irradiance is limited by the time resolution of its associated detector readout electronics. With current technology this is in the megahertz range. The maximum irradiance is also limited by the throughput and gain parameters of the detector itself. The light sensing element in photon counting devices in NIR, visible and ultraviolet wavelengths is a photomultiplier to achieve sufficient sensitivity. In airborne and space-based remote sensing such photon counters are used at the upper reaches of the electromagnetic spectrum such as the X-ray to far ultraviolet. This is usually due to the lower radiant intensity of the objects being measured as well as the difficulty of measuring light at higher energies using its particle-like nature as compared to the wavelike nature of light at lower frequencies. Conversely, radiometers are typically used for remote sensing from the visible, infrared though radio frequency range. Photography Photometers are used to determine the correct exposure in photography. In modern cameras, the photometer is usually built in. As the illumination of different parts of the picture varies, advanced photometers measure the light intensity in different parts of the potential picture and use an algorithm to determine the most suitable exposure for the final picture, adapting the algorithm to the type of picture intended (see Metering mode). Historically, a photometer was separate from the camera and known as an exposure meter. The advanced photometers then could be used either to measure the light from the potential picture as a whole, to measure from elements of the picture to ascertain that the most important parts of the picture are optimally exposed, or to measure the incident light to the scene with an integrating adapter. Visible light reflectance photometry A reflectance photometer measures the reflectance of a surface as a function of wavelength. The surface is illuminated with white light, and the reflected light is measured after passing through a monochromator. This type of measurement has mainly practical applications, for instance in the paint industry to characterize the colour of a surface objectively. UV and visible light transmission photometry These are optical instruments for measurement of the absorption of light of a given wavelength (or a given range of wavelengths) of coloured substances in solution. From the light absorption, Beer's law makes it possible to calculate the concentration of the coloured substance in the solution. 
Due to its wide range of application and its reliability and robustness, the photometer has become one of the principal instruments in biochemistry and analytical chemistry. Absorption photometers for work in aqueous solution work in the ultraviolet and visible ranges, from wavelength around 240 nm up to 750 nm. The principle of spectrophotometers and filter photometers is that (as far as possible) monochromatic light is allowed to pass through a container (cell) with optically flat windows containing the solution. It then reaches a light detector, that measures the intensity of the light compared to the intensity after passing through an identical cell with the same solvent but without the coloured substance. From the ratio between the light intensities, knowing the capacity of the coloured substance to absorb light (the absorbency of the coloured substance, or the photon cross section area of the molecules of the coloured substance at a given wavelength), it is possible to calculate the concentration of the substance using Beer's law. Two types of photometers are used: spectrophotometer and filter photometer. In spectrophotometers a monochromator (with prism or with grating) is used to obtain monochromatic light of one defined wavelength. In filter photometers, optical filters are used to give the monochromatic light. Spectrophotometers can thus easily be set to measure the absorbance at different wavelengths, and they can also be used to scan the spectrum of the absorbing substance. They are in this way more flexible than filter photometers, also give a higher optical purity of the analyzing light, and therefore they are preferably used for research purposes. Filter photometers are cheaper, robuster and easier to use and therefore they are used for routine analysis. Photometers for microtiter plates are filter photometers. Infrared light transmission photometry Spectrophotometry in infrared light is mainly used to study structure of substances, as given groups give absorption at defined wavelengths. Measurement in aqueous solution is generally not possible, as water absorbs infrared light strongly in some wavelength ranges. Therefore, infrared spectroscopy is either performed in the gaseous phase (for volatile substances) or with the substances pressed into tablets together with salts that are transparent in the infrared range. Potassium bromide (KBr) is commonly used for this purpose. The substance being tested is thoroughly mixed with specially purified KBr and pressed into a transparent tablet, that is placed in the beam of light. The analysis of the wavelength dependence is generally not done using a monochromator as it is in UV-Vis, but with the use of an interferometer. The interference pattern can be analyzed using a Fourier transform algorithm. In this way, the whole wavelength range can be analyzed simultaneously, saving time, and an interferometer is also less expensive than a monochromator. The light absorbed in the infrared region does not correspond to electronic excitation of the substance studied, but rather to different kinds of vibrational excitation. The vibrational excitations are characteristic of different groups in a molecule, that can in this way be identified. The infrared spectrum typically has very narrow absorption lines, which makes them unsuited for quantitative analysis but gives very detailed information about the molecules. The frequencies of the different modes of vibration varies with isotope, and therefore different isotopes give different peaks. 
This makes it possible also to study the isotopic composition of a sample with infrared spectrophotometry. Atomic absorption photometry Atomic absorption photometers are photometers that measure the light from a very hot flame. The solution to be analyzed is injected into the flame at a constant, known rate. Metals in the solution are present in atomic form in the flame. The monochromatic light in this type of photometer is generated by a discharge lamp where the discharge takes place in a gas with the metal to be determined. The discharge then emits light with wavelengths corresponding to the spectral lines of the metal. A filter may be used to isolate one of the main spectral lines of the metal to be analyzed. The light is absorbed by the metal in the flame, and the absorption is used to determine the concentration of the metal in the original solution.
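A minimal sketch of the Beer's law arithmetic used in the transmission and atomic absorption photometers described above is given below (the molar absorptivity, path length and intensity values are invented for the example; real instruments also apply calibration on top of this relation):

import math

def concentration(i_reference: float, i_sample: float,
                  molar_absorptivity: float, path_length_cm: float = 1.0) -> float:
    """Concentration in mol/L from transmitted intensities (any units, as long as they match)."""
    absorbance = math.log10(i_reference / i_sample)   # A = log10(I0 / I)
    return absorbance / (molar_absorptivity * path_length_cm)

# Example: 40% transmittance, epsilon = 12000 L mol^-1 cm^-1, 1 cm cell
print(concentration(1.0, 0.40, 12000.0))   # ~3.3e-5 mol/L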
Technology
Measuring instruments
null
862717
https://en.wikipedia.org/wiki/Projectile%20motion
Projectile motion
Projectile motion is a form of motion experienced by an object or particle (a projectile) that is projected in a gravitational field, such as from Earth's surface, and moves along a curved path (a trajectory) under the action of gravity only. In the particular case of projectile motion on Earth, most calculations assume the effects of air resistance are passive. Galileo Galilei showed that the trajectory of a given projectile is parabolic, but the path may also be straight in the special case when the object is thrown directly upward or downward. The study of such motions is called ballistics, and such a trajectory is described as ballistic. The only force of mathematical significance that is actively exerted on the object is gravity, which acts downward, thus imparting to the object a downward acceleration towards Earth's center of mass. Due to the object's inertia, no external force is needed to maintain the horizontal velocity component of the object's motion. Taking other forces into account, such as aerodynamic drag or internal propulsion (such as in a rocket), requires additional analysis. A ballistic missile is a missile only guided during the relatively brief initial powered phase of flight, and whose remaining course is governed by the laws of classical mechanics. Ballistics () is the science of dynamics that deals with the flight, behavior and effects of projectiles, especially bullets, unguided bombs, rockets, or the like; the science or art of designing and accelerating projectiles so as to achieve a desired performance. The elementary equations of ballistics neglect nearly every factor except for initial velocity, the launch angle and a gravitational acceleration assumed constant. Practical solutions of a ballistics problem often require considerations of air resistance, cross winds, target motion, acceleration due to gravity varying with height, and in such problems as launching a rocket from one point on the Earth to another, the horizon's distance vs curvature R of the Earth (its local speed of rotation ). Detailed mathematical solutions of practical problems typically do not have closed-form solutions, and therefore require numerical methods to address. Kinematic quantities In projectile motion, the horizontal motion and the vertical motion are independent of each other; that is, neither motion affects the other. This is the principle of compound motion established by Galileo in 1638, and used by him to prove the parabolic form of projectile motion. A ballistic trajectory is a parabola with homogeneous acceleration, such as in a space ship with constant acceleration in absence of other forces. On Earth the acceleration changes magnitude with altitude as and direction (faraway targets) with latitude/longitude along the trajectory. This causes an elliptic trajectory, which is very close to a parabola on a small scale. However, if an object was thrown and the Earth was suddenly replaced with a black hole of equal mass, it would become obvious that the ballistic trajectory is part of an elliptic orbit around that "black hole", and not a parabola that extends to infinity. At higher speeds the trajectory can also be circular (cosmonautics at LEO?, geostationary satellites at 5 R), parabolic or hyperbolic (unless distorted by other objects like the Moon or the Sun). In this article a homogeneous gravitational acceleration is assumed. Acceleration Since there is acceleration only in the vertical direction, the velocity in the horizontal direction is constant, being equal to . 
The vertical motion of the projectile is the motion of a particle during its free fall. Here the acceleration is constant, being equal to g. The components of the acceleration are: , .* *The y acceleration can also be referred to as the force of the earth on the object(s) of interest. Velocity Let the projectile be launched with an initial velocity , which can be expressed as the sum of horizontal and vertical components as follows: . The components and can be found if the initial launch angle θ is known: , The horizontal component of the velocity of the object remains unchanged throughout the motion. The vertical component of the velocity changes linearly, because the acceleration due to gravity is constant. The accelerations in the x and y directions can be integrated to solve for the components of velocity at any time t, as follows: , . The magnitude of the velocity (under the Pythagorean theorem, also known as the triangle law): . Displacement At any time , the projectile's horizontal and vertical displacement are: , . The magnitude of the displacement is: . Consider the equations, and . If t is eliminated between these two equations the following equation is obtained: Here R is the range of a projectile. Since g, θ, and v0 are constants, the above equation is of the form , in which a and b are constants. This is the equation of a parabola, so the path is parabolic. The axis of the parabola is vertical. If the projectile's position (x,y) and launch angle (θ or α) are known, the initial velocity can be found solving for v0 in the afore-mentioned parabolic equation: . Displacement in polar coordinates The parabolic trajectory of a projectile can also be expressed in polar coordinates instead of Cartesian coordinates. In this case, the position has the general formula . In this equation, the origin is the midpoint of the horizontal range of the projectile, and if the ground is flat, the parabolic arc is plotted in the range . This expression can be obtained by transforming the Cartesian equation as stated above by and . Properties of the trajectory Time of flight or total time of the whole journey The total time t for which the projectile remains in the air is called the time-of-flight. After the flight, the projectile returns to the horizontal axis (x-axis), so . Note that we have neglected air resistance on the projectile. If the starting point is at height y0 with respect to the point of impact, the time of flight is: As above, this expression can be reduced (y0 is 0) to = if θ equals 45°. Time of flight to the target's position As shown above in the Displacement section, the horizontal and vertical velocity of a projectile are independent of each other. Because of this, we can find the time to reach a target using the displacement formula for the horizontal velocity: This equation will give the total time t the projectile must travel for to reach the target's horizontal displacement, neglecting air resistance. Maximum height of projectile The greatest height that the object will reach is known as the peak of the object's motion. The increase in height will last until , that is, . Time to reach the maximum height(h): . 
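For reference, the drag-free relations discussed above can be written compactly for a launch speed v0 at angle θ under a constant gravitational acceleration g, with launch and landing at the same height (a standard textbook formulation rather than a quotation of this article's own symbols):

\begin{aligned}
a_x &= 0, \qquad a_y = -g,\\
v_x(t) &= v_0\cos\theta, \qquad v_y(t) = v_0\sin\theta - g t,\\
x(t) &= v_0 t\cos\theta, \qquad y(t) = v_0 t\sin\theta - \tfrac{1}{2} g t^{2},\\
t_{\mathrm{flight}} &= \frac{2 v_0 \sin\theta}{g}, \qquad t_{\mathrm{peak}} = \frac{v_0 \sin\theta}{g}.
\end{aligned}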
For the vertical displacement of the maximum height of the projectile: h = v_0 t_h sin θ − (1/2) g t_h² = v_0² sin² θ / (2 g). The maximum reachable height is obtained for θ = 90°: h_max = v_0² / (2 g). If the projectile's position (x, y) and launch angle (θ) are known, the maximum height can be found by solving for h in the following equation: h = (x tan θ)² / (4 (x tan θ − y)). The angle of elevation (φ) of the maximum height, seen from the launch point, is given by: tan φ = (1/2) tan θ. Relation between horizontal range and maximum height The relation between the range d on the horizontal plane and the maximum height h reached at t_d/2 is: h = (d/4) tan θ. Maximum distance of projectile The range and the maximum height of the projectile do not depend upon its mass. Hence range and maximum height are equal for all bodies that are thrown with the same velocity and direction. The horizontal range d of the projectile is the horizontal distance it has traveled when it returns to its initial height (y = 0): 0 = v_0 t_d sin θ − (1/2) g t_d². Time to reach ground: t_d = 2 v_0 sin θ / g. From the horizontal displacement the maximum distance of the projectile: d = v_0 t_d cos θ, so d = (v_0² / g) sin 2θ. Note that d has its maximum value when sin 2θ = 1, which necessarily corresponds to 2θ = 90°, or θ = 45°. The total horizontal distance (d) traveled when the surface is flat (initial height of the object is zero) is therefore greatest if θ is 45 degrees. This distance is: d_max = v_0² / g. Application of the work energy theorem According to the work-energy theorem the vertical component of velocity is: v_y² = (v_0 sin θ)² − 2 g y. These formulae ignore aerodynamic drag and also assume that the landing area is at uniform height 0. Angle of reach The "angle of reach" is the angle (θ) at which a projectile must be launched in order to go a distance d, given the initial velocity v. There are two solutions: θ = (1/2) arcsin(g d / v²) (shallow trajectory) and, because sin 2θ = sin(180° − 2θ), θ = 90° − (1/2) arcsin(g d / v²) (steep trajectory). Angle θ required to hit coordinate (x, y) To hit a target at range x and altitude y when fired from (0,0) and with initial speed v, the required angle(s) of launch θ are given by: tan θ = (v² ± √(v⁴ − g (g x² + 2 y v²))) / (g x). The two roots of the equation correspond to the two possible launch angles, so long as they are not imaginary, in which case the initial speed is not great enough to reach the point (x, y) selected. This formula allows one to find the angle of launch needed without the restriction of y = 0. One can also ask what launch angle allows the lowest possible launch velocity. This occurs when the two solutions above are equal, implying that the quantity under the square root sign is zero. This, together with tan θ = v²/(g x), requires solving a quadratic equation for v², and we find v² = g (y + √(y² + x²)). This gives tan θ = (y + √(y² + x²)) / x. If we denote the angle whose tangent is y/x by α, then tan θ = (1 + sin α) / cos α = tan(45° + α/2). This implies θ = 45° + α/2. In other words, the launch should be at the angle halfway between the target and zenith (the vector opposite to gravity). Total Path Length of the Trajectory The length of the parabolic arc traced by a projectile, L, given that the height of launch and landing is the same (there is no air resistance), is given by the formula: L = (v_0² / g) (sin θ + cos² θ · artanh(sin θ)) = (v_0² / (2 g)) (2 sin θ + cos² θ · ln((1 + sin θ)/(1 − sin θ))), where v_0 is the initial velocity, θ is the launch angle and g is the acceleration due to gravity as a positive value. The expression can be obtained by evaluating the arc length integral for the height-distance parabola between the bounds initial and final displacement (i.e. between 0 and the horizontal range of the projectile), such that: L = ∫_0^d √(1 + (dy/dx)²) dx. If the time of flight is t, L = ∫_0^t √(v_x² + v_y²) dt′ = ∫_0^t √(v_0² cos² θ + (v_0 sin θ − g t′)²) dt′. Trajectory of a projectile with air resistance Air resistance creates a force that (for symmetric projectiles) is always directed against the direction of motion in the surrounding medium and has a magnitude that depends on the absolute speed: F_air = −f(v) · v̂. The speed-dependence of the friction force is linear (f(v) ∝ v) at very low speeds (Stokes drag) and quadratic (f(v) ∝ v²) at large speeds (Newton drag).
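The "angle required to hit coordinate (x, y)" result above translates directly into a small numerical routine. The following Python sketch is illustrative; the function name and the sample target are assumptions.

```python
import math

def launch_angles(v, x, y, g=9.81):
    """Both launch angles (in degrees) that hit the point (x, y) from the
    origin with speed v, for x > 0, using
    tan(theta) = (v**2 +- sqrt(v**4 - g*(g*x**2 + 2*y*v**2))) / (g*x).
    Returns None when the speed is too small to reach (x, y)."""
    disc = v**4 - g * (g * x**2 + 2 * y * v**2)
    if disc < 0:
        return None                       # target out of reach at this speed
    root = math.sqrt(disc)
    return tuple(math.degrees(math.atan((v**2 + s * root) / (g * x)))
                 for s in (+1.0, -1.0))

print(launch_angles(30.0, 60.0, 10.0))    # (steep, shallow) solutions in degrees
```

The '+' root corresponds to the steep trajectory and the '−' root to the shallow one, matching the two solutions named above.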
The transition between these behaviours is determined by the Reynolds number, which depends on object speed and size, and on the density and dynamic viscosity of the medium. For Reynolds numbers below about 1 the dependence is linear; above 1000 (turbulent flow) it becomes quadratic. In air, which has a kinematic viscosity around 0.15 cm²/s, this means that the drag force becomes quadratic in v when the product of object speed and diameter is more than about 0.015 m²/s, which is typically the case for projectiles. Stokes drag: F_air = −k v (for Reynolds numbers below about 1). Newton drag: F_air = −k v |v| (for Reynolds numbers above about 1000). The free body diagram for such a projectile includes air resistance and the effects of gravity; here, air resistance is assumed to be in the direction opposite of the projectile's velocity. Trajectory of a projectile with Stokes drag Stokes drag, where F_air is proportional to v, only applies at very low speed in air, and is thus not the typical case for projectiles. However, the linear dependence of F_air on v causes a very simple differential equation of motion in which the two Cartesian components become completely independent, and it is thus easier to solve. Here, v_0, v_x and v_y will be used to denote the initial velocity, the velocity along the direction of x and the velocity along the direction of y, respectively. The mass of the projectile will be denoted by m, and μ = k/m. Again, the projectile is fired from the origin (0,0). The relationships that represent the motion of the particle are derived by Newton's Second Law, both in the x and y directions. In the x direction m dv_x/dt = −k v_x and in the y direction m dv_y/dt = −k v_y − m g. This implies that: dv_x/dt = −μ v_x (1), and dv_y/dt = −μ v_y − g (2). Solving (1) is an elementary differential equation, thus the steps leading to a unique solution for v_x and, subsequently, x will not be enumerated. Given the initial conditions v_x = v_x0 and x = 0 at t = 0 (where v_x0 is understood to be the x component of the initial velocity): v_x = v_x0 e^(−μ t) (1a), x(t) = (v_x0/μ)(1 − e^(−μ t)) (1b). While (1) is solved much in the same way, (2) is of distinct interest because of its non-homogeneous nature. Hence, we will be extensively solving (2). Note that in this case the initial conditions v_y = v_y0 and y = 0 at t = 0 are used. dv_y/dt = −μ v_y − g (2), dv_y/dt + μ v_y = −g (2a). This first-order, linear, non-homogeneous differential equation may be solved a number of ways; however, in this instance, it is quicker to approach the solution via an integrating factor e^(μ t): e^(μ t) dv_y/dt + μ e^(μ t) v_y = −g e^(μ t) (2c), d(e^(μ t) v_y)/dt = −g e^(μ t) (2d), ∫ d(e^(μ t) v_y) = −g ∫ e^(μ t) dt (2e), e^(μ t) v_y = −(g/μ) e^(μ t) + C (2f), v_y = −g/μ + C e^(−μ t) (2g). And by integration we find: y = −(g/μ) t − (C/μ) e^(−μ t) + C′ (3). Solving for our initial conditions: v_y(t) = −g/μ + (v_y0 + g/μ) e^(−μ t) (2h), y(t) = −(g/μ) t − (1/μ)(v_y0 + g/μ) e^(−μ t) + (1/μ)(v_y0 + g/μ) (3a). With a bit of algebra to simplify (3a): y(t) = (1/μ)(v_y0 + g/μ)(1 − e^(−μ t)) − (g/μ) t (3b). The total time of the journey in the presence of air resistance (more specifically, when F_air = −k v) can be calculated by the same strategy as above, namely, we solve the equation y(t) = 0. While in the case of zero air resistance this equation can be solved elementarily, here we shall need the Lambert W function. The equation y(t) = 0 is of the form c_1 t + c_2 e^(−μ t) + c_3 = 0, and such an equation can be transformed into an equation solvable by the W function. Some algebra shows that the total time of flight, in closed form, is given as t = (1/μ)(1 + μ v_y0/g + W(−(1 + μ v_y0/g) e^(−(1 + μ v_y0/g)))). Trajectory of a projectile with Newton drag The most typical case of air resistance, for Reynolds numbers above about 1000, is Newton drag with a drag force proportional to the speed squared, F_air = −k v |v|. In air, which has a kinematic viscosity around 0.15 cm²/s, this means that the product of object speed and diameter must be more than about 0.015 m²/s. Unfortunately, the equations of motion can not be easily solved analytically for this case. Therefore, a numerical solution will be examined.
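Before moving on to the Newton-drag case, the closed-form Stokes-drag expressions (1b) and (3b) and the Lambert W flight time above can be checked numerically. A minimal sketch, assuming NumPy and SciPy are available (scipy.special.lambertw supplies the W function); the function names, the drag parameter mu and the sample launch values are illustrative.

```python
import numpy as np
from scipy.special import lambertw

def stokes_position(t, v0, theta_deg, mu, g=9.81):
    """Closed-form x(t), y(t) of equations (1b) and (3b) for a launch
    from the origin with linear (Stokes) drag, where mu = k/m."""
    vx0 = v0 * np.cos(np.radians(theta_deg))
    vy0 = v0 * np.sin(np.radians(theta_deg))
    x = vx0 / mu * (1.0 - np.exp(-mu * t))
    y = (vy0 + g / mu) / mu * (1.0 - np.exp(-mu * t)) - g * t / mu
    return x, y

def stokes_flight_time(v0, theta_deg, mu, g=9.81):
    """Total time of flight via the Lambert W expression given above."""
    vy0 = v0 * np.sin(np.radians(theta_deg))
    a = 1.0 + mu * vy0 / g
    return (a + lambertw(-a * np.exp(-a)).real) / mu

t_end = stokes_flight_time(50.0, 40.0, mu=0.1)               # illustrative values
print(t_end, stokes_position(t_end, 50.0, 40.0, mu=0.1))     # y is ~0 at impact
```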
The following assumptions are made: constant gravitational acceleration; air resistance given by the drag formula F_D = (1/2) c ρ A v², where F_D is the drag force, c is the drag coefficient, ρ is the air density, and A is the cross-sectional area of the projectile. Again μ = k/m, here with k = (1/2) c ρ A. Compare this with the theory and practice of the ballistic coefficient. Special cases Even though the general case of a projectile with Newton drag cannot be solved analytically, some special cases can. Here we denote the terminal velocity in free-fall as v_∞ = √(g/μ) and the characteristic settling time constant τ = v_∞/g = 1/√(g μ). (g has dimension [m/s²], μ has dimension [1/m].) Near-horizontal motion: In case the motion is almost horizontal, |v_x| ≫ |v_y|, such as for a flying bullet, the vertical velocity component has very little influence on the horizontal motion. In this case: v_x(t) = v_x0 / (1 + μ v_x0 t), x(t) = (1/μ) ln(1 + μ v_x0 t). The same pattern applies for motion with friction along a line in any direction, when gravity is negligible (relatively small g). It also applies when vertical motion is prevented, such as for a moving car with its engine off. Vertical motion upward: Here v_y(t) = v_∞ tan(arctan(v_0/v_∞) − t/τ) and y(t) = (v_∞²/(2 g)) ln((v_0² + v_∞²)/(v_y(t)² + v_∞²)), where v_0 is the initial upward velocity at t = 0 and the initial position is y(0) = 0. A projectile cannot rise longer than (π/2) τ in the vertical direction, when it reaches the peak (0 m, y_peak) at 0 m/s. Vertical motion downward: With hyperbolic functions, a drop from rest gives v_y(t) = −v_∞ tanh(t/τ) and y(t) = y_0 − (v_∞²/g) ln(cosh(t/τ)). After a time of a few τ, the projectile moves at almost terminal velocity −v_∞. Numerical solution A projectile motion with drag can be computed generically by numerical integration of the ordinary differential equation, for instance by applying a reduction to a first-order system. The equation to be solved is d²r/dt² = g − μ |dr/dt| (dr/dt), where r is the position vector and g the (downward) gravitational acceleration vector. This approach also allows adding the effects of a speed-dependent drag coefficient, an altitude-dependent air density (in the product c ρ A) and a position-dependent gravity field (for heights small compared with the Earth's radius, the decrease of g is approximately linear). Lofted trajectory A special case of a ballistic trajectory for a rocket is a lofted trajectory, a trajectory with an apogee greater than the minimum-energy trajectory to the same range. In other words, the rocket travels higher and by doing so it uses more energy to get to the same landing point. This may be done for various reasons such as increasing distance to the horizon to give greater viewing/communication range or for changing the angle with which a missile will impact on landing. Lofted trajectories are sometimes used in both missile rocketry and in spaceflight. Projectile motion on a planetary scale When a projectile travels a range that is significant compared to the Earth's radius (above ≈100 km), the curvature of the Earth and the non-uniform gravity of the Earth have to be considered. This is, for example, the case with spacecraft and intercontinental missiles. The trajectory then generalizes (without air resistance) from a parabola to a Kepler ellipse with one focus at the center of the Earth. The projectile motion then follows Kepler's laws of planetary motion. The trajectory's parameters have to be adapted from the values of a uniform gravity field stated above. The Earth radius is taken as R, and g as the standard surface gravity; the launch velocity is conveniently measured relative to the first cosmic velocity √(R g) (the orbital velocity at the surface) and to the second cosmic (escape) velocity √(2 R g). The quantities of interest are then the total range d between launch and impact for a given launch angle, the maximum range for the optimum launch angle, the maximum height above the planetary surface, and the time of flight. For a vertical launch with speed v, for example, the maximum height above the surface is h = R v² / (2 g R − v²), which grows without bound as v approaches the second cosmic velocity.
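As a companion to the Numerical solution subsection above, here is a minimal fixed-step fourth-order Runge–Kutta integration of the quadratic-drag equation of motion d²r/dt² = g − μ|v|v. It is an illustrative sketch only; the parameter values and names (mu, dt) are assumptions rather than recommended settings.

```python
import numpy as np

def simulate_newton_drag(v0, theta_deg, mu, g=9.81, dt=1e-3):
    """Integrate the quadratic-drag projectile ODE with a fixed-step RK4
    scheme until the projectile returns to y <= 0 (launch from the origin).
    mu = c*rho*A/(2*m), with dimension 1/m, as defined in the text above."""
    theta = np.radians(theta_deg)
    state = np.array([0.0, 0.0, v0 * np.cos(theta), v0 * np.sin(theta)])  # x, y, vx, vy

    def deriv(s):
        _, _, vx, vy = s
        speed = np.hypot(vx, vy)
        return np.array([vx, vy, -mu * speed * vx, -g - mu * speed * vy])

    path = [state.copy()]
    while state[1] >= 0.0:
        k1 = deriv(state)
        k2 = deriv(state + 0.5 * dt * k1)
        k3 = deriv(state + 0.5 * dt * k2)
        k4 = deriv(state + dt * k3)
        state = state + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(state.copy())
    return np.array(path)

traj = simulate_newton_drag(v0=50.0, theta_deg=45.0, mu=0.005)   # illustrative values
print("range with drag:", traj[-1, 0], "m")   # noticeably less than v0**2/g ~ 255 m
```

Altitude-dependent density or gravity can be added by making mu and g functions of the current height state[1] inside deriv, as noted in the text.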
Physical sciences
Classical mechanics
Physics
862860
https://en.wikipedia.org/wiki/Hypochlorite
Hypochlorite
In chemistry, hypochlorite, or chloroxide, is an anion with the chemical formula ClO−. It combines with a number of cations to form hypochlorite salts. Common examples include sodium hypochlorite (household bleach) and calcium hypochlorite (a component of bleaching powder, swimming pool "chlorine"). The Cl–O distance in ClO− is 1.69 Å. The name can also refer to esters of hypochlorous acid, namely organic compounds with a ClO– group covalently bound to the rest of the molecule. The principal example is tert-butyl hypochlorite, which is a useful chlorinating agent. Most hypochlorite salts are handled as aqueous solutions. Their primary applications are as bleaching, disinfection, and water treatment agents. They are also used in chemistry for chlorination and oxidation reactions. Reactions Acid reaction Acidification of hypochlorites generates hypochlorous acid, which exists in an equilibrium with chlorine. A lowered pH (i.e. towards acid) drives the following reaction to the right, liberating chlorine gas, which can be dangerous: 2 H+ + ClO− + Cl− ⇌ Cl2 + H2O. Stability Hypochlorites are generally unstable and many compounds exist only in solution. Lithium hypochlorite LiOCl, calcium hypochlorite Ca(OCl)2 and barium hypochlorite Ba(ClO)2 have been isolated as pure anhydrous compounds. All are solids. A few more can be produced as aqueous solutions. In general the greater the dilution the greater their stability. It is not possible to determine trends for the alkaline earth metal salts, as many of them cannot be formed. Beryllium hypochlorite is unheard of. Pure magnesium hypochlorite cannot be prepared; however, solid Mg(OH)OCl is known. Calcium hypochlorite is produced on an industrial scale and has good stability. Strontium hypochlorite, Sr(OCl)2, is not well characterised and its stability has not yet been determined. Upon heating, hypochlorite degrades to a mixture of chloride, oxygen, and chlorates: 2 ClO− → 2 Cl− + O2, 3 ClO− → 2 Cl− + ClO3−. This reaction is exothermic and in the case of concentrated hypochlorites, such as LiOCl and Ca(OCl)2, can lead to dangerous thermal runaway and is potentially explosive. The alkali metal hypochlorites decrease in stability down the group. Anhydrous lithium hypochlorite is stable at room temperature; however, sodium hypochlorite is explosive as an anhydrous solid. The pentahydrate (NaOCl·(H2O)5) is unstable above 0 °C, although the more dilute solutions encountered as household bleach are more stable. Potassium hypochlorite (KOCl) is known only in solution. Lanthanide hypochlorites are also unstable; however, they have been reported as being more stable in their anhydrous forms than in the presence of water. Hypochlorite has been used to oxidise cerium from its +3 to +4 oxidation state. Hypochlorous acid itself is not stable in isolation as it decomposes to form chlorine. Its decomposition also results in some form of oxygen. Reactions with ammonia Hypochlorites react with ammonia, first giving monochloramine (NH2Cl), then dichloramine (NHCl2), and finally nitrogen trichloride (NCl3): NH3 + ClO− → NH2Cl + OH−, NH2Cl + ClO− → NHCl2 + OH−, NHCl2 + ClO− → NCl3 + OH−. Preparation Hypochlorite salts Hypochlorite salts are formed by the reaction between chlorine and alkali and alkaline earth metal hydroxides. The reaction is performed at close to room temperature to suppress the formation of chlorates. This process is widely used for the industrial production of sodium hypochlorite (NaClO) and calcium hypochlorite (Ca(ClO)2).
Cl2 + 2 NaOH → NaCl + NaClO + H2O; 2 Cl2 + 2 Ca(OH)2 → CaCl2 + Ca(ClO)2 + 2 H2O. Large amounts of sodium hypochlorite are also produced electrochemically via an un-separated chloralkali process. In this process brine is electrolyzed to form Cl2, which dissociates in water to form hypochlorite. This reaction must be conducted in non-acidic conditions to prevent the release of chlorine: 2 Cl− → Cl2 + 2 e−; Cl2 + 2 OH− → ClO− + Cl− + H2O. Some hypochlorites may also be obtained by a salt metathesis reaction between calcium hypochlorite and various metal sulfates. This reaction is performed in water and relies on the formation of insoluble calcium sulfate, which will precipitate out of solution, driving the reaction to completion. Ca(ClO)2 + MSO4 → M(ClO)2 + CaSO4. Organic hypochlorites Hypochlorite esters are in general formed from the corresponding alcohols, by treatment with any of a number of reagents (e.g. chlorine, hypochlorous acid, dichlorine monoxide and various acidified hypochlorite salts). Biochemistry Biosynthesis of organochlorine compounds Chloroperoxidases are enzymes that catalyze the chlorination of organic compounds. This enzyme combines the inorganic substrates chloride and hydrogen peroxide to produce the equivalent of Cl+, which replaces a proton in the hydrocarbon substrate: R-H + Cl− + H2O2 + H+ → R-Cl + 2 H2O. The source of "Cl+" is hypochlorous acid (HOCl). Many organochlorine compounds are biosynthesized in this way. Immune response In response to infection, the human immune system generates minute quantities of hypochlorite within special white blood cells, called neutrophil granulocytes. These granulocytes engulf viruses and bacteria in an intracellular vacuole called the phagosome, where they are digested. Part of the digestion mechanism involves an enzyme-mediated respiratory burst, which produces reactive oxygen-derived compounds, including superoxide (which is produced by NADPH oxidase). Superoxide decays to oxygen and hydrogen peroxide, which is used in a myeloperoxidase-catalysed reaction to convert chloride to hypochlorite. Low concentrations of hypochlorite were also found to interact with a microbe's heat shock proteins, stimulating their role as intra-cellular chaperones and causing the bacteria to form into clumps (much like an egg that has been boiled) that will eventually die off. The same study found that low (micromolar) hypochlorite levels induce E. coli and Vibrio cholerae to activate a protective mechanism, although its implications were not clear. In some cases, the base acidity of hypochlorite compromises a bacterium's lipid membrane, a reaction similar to popping a balloon. Industrial and domestic uses Hypochlorites, especially of sodium ("liquid bleach", "Javel water") and calcium ("bleaching powder"), are widely used, industrially and domestically, to whiten clothes, lighten hair color and remove stains. They were the first commercial bleaching products, developed soon after that property was discovered in 1785 by the French chemist Claude Berthollet. Hypochlorites are also widely used as broad-spectrum disinfectants and deodorizers. That application started soon after the French chemist Labarraque discovered those properties, around 1820 (still before Pasteur formulated his germ theory of disease). Laboratory uses As oxidizing agents Hypochlorite is the strongest oxidizing agent of the chlorine oxyanions. This can be seen by comparing the standard half-cell potentials across the series; the data also show that the chlorine oxyanions are stronger oxidizers in acidic conditions.
Hypochlorite is a sufficiently strong oxidiser to convert Mn(III) to Mn(V) during the Jacobsen epoxidation reaction and to convert cerium(III) to cerium(IV). This oxidising power is what makes them effective bleaching agents and disinfectants. In organic chemistry, hypochlorites can be used to oxidise primary alcohols to carboxylic acids. As chlorinating agents Hypochlorite salts can also serve as chlorinating agents. For example, they convert phenols to chlorophenols. Calcium hypochlorite converts piperidine to N-chloropiperidine. Related oxyanions Chlorine can be the nucleus of oxyanions with oxidation states of −1, +1, +3, +5, or +7. (The element can also assume the oxidation state +4, as seen in the neutral compound chlorine dioxide, ClO2.)
Physical sciences
Halide oxyanions
Chemistry
862898
https://en.wikipedia.org/wiki/Absolute%20space%20and%20time
Absolute space and time
Absolute space and time is a concept in physics and philosophy about the properties of the universe. In physics, absolute space and time may be a preferred frame. Early concept A version of the concept of absolute space (in the sense of a preferred frame) can be seen in Aristotelian physics. Robert S. Westman writes that a "whiff" of absolute space can be observed in Copernicus's De revolutionibus orbium coelestium, where Copernicus uses the concept of an immobile sphere of stars. Newton Originally introduced by Sir Isaac Newton in Philosophiæ Naturalis Principia Mathematica, the concepts of absolute time and space provided a theoretical foundation that facilitated Newtonian mechanics. According to Newton, absolute time and space respectively are independent aspects of objective reality: Absolute, true and mathematical time, of itself, and from its own nature flows equably without regard to anything external, and by another name is called duration: relative, apparent and common time, is some sensible and external (whether accurate or unequable) measure of duration by the means of motion, which is commonly used instead of true time ... According to Newton, absolute time exists independently of any perceiver and progresses at a consistent pace throughout the universe. Unlike relative time, Newton believed absolute time was imperceptible and could only be understood mathematically. According to Newton, humans are only capable of perceiving relative time, which is a measurement of perceivable objects in motion (like the Moon or Sun). From these movements, we infer the passage of time. These notions imply that absolute space and time do not depend upon physical events, but are a backdrop or stage setting within which physical phenomena occur. Thus, every object has an absolute state of motion relative to absolute space, so that an object must be either in a state of absolute rest, or moving at some absolute speed. To support his views, Newton provided some empirical examples: according to Newton, a solitary rotating sphere can be inferred to rotate about its axis relative to absolute space by observing the bulging of its equator, and a solitary pair of spheres tied by a rope can be inferred to be in absolute rotation about their center of gravity (barycenter) by observing the tension in the rope. Differing views Historically, there have been differing views on the concept of absolute space and time. Gottfried Leibniz was of the opinion that space made no sense except as the relative location of bodies, and time made no sense except as the relative movement of bodies. George Berkeley suggested that, lacking any point of reference, a sphere in an otherwise empty universe could not be conceived to rotate, and a pair of spheres could be conceived to rotate relative to one another, but not to rotate about their center of gravity, an example later raised by Albert Einstein in his development of general relativity. A more recent form of these objections was made by Ernst Mach. Mach's principle proposes that mechanics is entirely about relative motion of bodies and, in particular, mass is an expression of such relative motion. So, for example, a single particle in a universe with no other bodies would have zero mass. According to Mach, Newton's examples simply illustrate relative rotation of spheres and the bulk of the universe. 
When, accordingly, we say that a body preserves unchanged its direction and velocity in space, our assertion is nothing more or less than an abbreviated reference to the entire universe.—Ernst Mach These views opposing absolute space and time may be seen from a modern stance as an attempt to introduce operational definitions for space and time, a perspective made explicit in the special theory of relativity. Even within the context of Newtonian mechanics, the modern view is that absolute space is unnecessary. Instead, the notion of inertial frame of reference has taken precedence, that is, a preferred set of frames of reference that move uniformly with respect to one another. The laws of physics transform from one inertial frame to another according to Galilean relativity, leading to the following objections to absolute space, as outlined by Milutin Blagojević: The existence of absolute space contradicts the internal logic of classical mechanics since, according to Galilean principle of relativity, none of the inertial frames can be singled out. Absolute space does not explain inertial forces since they are related to acceleration with respect to any one of the inertial frames. Absolute space acts on physical objects by inducing their resistance to acceleration but it cannot be acted upon. Newton himself recognized the role of inertial frames. The motions of bodies included in a given space are the same among themselves, whether that space is at rest or moves uniformly forward in a straight line. As a practical matter, inertial frames often are taken as frames moving uniformly with respect to the fixed stars. See Inertial frame of reference for more discussion on this. Mathematical definitions Space, as understood in Newtonian mechanics, is three-dimensional and Euclidean, with a fixed orientation. It is denoted E3. If some point O in E3 is fixed and defined as an origin, the position of any point P in E3 is uniquely determined by its radius vector (the origin of this vector coincides with the point O and its end coincides with the point P). The three-dimensional linear vector space R3 is a set of all radius vectors. The space R3 is endowed with a scalar product ⟨ , ⟩. Time is a scalar which is the same in all space E3 and is denoted as t. The ordered set { t } is called a time axis. Motion (also path or trajectory) is a function r : Δ → R3 that maps a point in the interval Δ from the time axis to a position (radius vector) in R3. The above four concepts are the "well-known" objects mentioned by Isaac Newton in his Principia: I do not define time, space, place and motion, as being well known to all. Special relativity The concepts of space and time were separate in physical theory prior to the advent of special relativity theory, which connected the two and showed both to be dependent upon the reference frame's motion. In Einstein's theories, the ideas of absolute time and space were superseded by the notion of spacetime in special relativity, and curved spacetime in general relativity. Absolute simultaneity refers to the concurrence of events in time at different locations in space in a manner agreed upon in all frames of reference. The theory of relativity does not have a concept of absolute time because there is a relativity of simultaneity. An event that is simultaneous with another event in one frame of reference may be in the past or future of that event in a different frame of reference, which negates absolute simultaneity. 
Einstein Quoted below from his later papers, Einstein identified the term aether with "properties of space", a terminology that is not widely used. Einstein stated that in general relativity the "aether" is not absolute anymore, as the geodesic and therefore the structure of spacetime depends on the presence of matter. General relativity Special relativity eliminates absolute time (although Gödel and others suspect absolute time may be valid for some forms of general relativity) and general relativity further reduces the physical scope of absolute space and time through the concept of geodesics. There appears to be absolute space in relation to the distant stars because the local geodesics eventually channel information from these stars, but it is not necessary to invoke absolute space with respect to any system's physics, as its local geodesics are sufficient to describe its spacetime.
Physical sciences
Classical mechanics
Physics
863348
https://en.wikipedia.org/wiki/Caracas%20Metro
Caracas Metro
The Caracas Metro is a mass rapid transit system serving Caracas, Venezuela. It was constructed and is operated by Compañía Anónima Metro de Caracas, a government-owned company that was founded in 1977 by José González-Lander, who headed the project for more than thirty years from the early planning stages in the 1960s. Its motto translates as 'We are part of your life'. In 1978, MTA – New York City Transit's R46 #816 (now 5866) was shipped from Pullman Standard's plant as a sample of the rolling stock to be used for the new metro system that was under construction at the time. The system was inaugurated on January 2, 1983, and the railway network has been extended several times since. Its purpose is to contribute to the development of collective transportation in Caracas and its immediate area, through the planning, construction, and commercial exploitation of an integrated transportation system. The C. A. Metro de Caracas is in charge of its construction, operation and exploitation as a decentralized public body attached to the Ministry of People's Power for Land Transportation. As a consequence of the crisis that the country is experiencing, by October 2018 it was estimated that 25% of the Caracas Metro trains were out of service due to lack of maintenance. In 2020, only 9 of the 48 trains on Line 1 remained operational, 6 of 44 on Line 2, and 4 of 16 on Line 3, which, together with electrical failures, causes users to experience constant delays. In 2022, the Caracas Metro only had 23 of its 169 trains operational. The system has 53 stations. The company is run by Major General Juan Carlos Du Bolay Perozo. Lines The Caracas Metro currently has the following lines in operation: These lines were built between 1978 and 2006. Line 2 has four terminal stations. Part of Line 2 was constructed as Line 4, but after its inauguration it was renamed Line 2. One must transfer on Line 3 at El Valle station to continue the ride. Construction of the first phase of Line 4 (now officially renamed Line 2) started in 2001; this line runs parallel to Line 1 to the south, and connects Plaza Venezuela station on Line 1 with Capuchinos station on Line 2. It is expected to provide much-needed relief to congestion along this segment of Line 1, where most of the Metro's ridership is concentrated. Commuter rail transfer points Construction was begun in 2001 on the Los Teques Metro, running from the Caracas Metro Las Adjuntas station (the expanded station, with independent platforms connected by overhead walkways, is now common to both metro systems) to the Alí Primera station (formerly called El Tambor) in the suburban city of Los Teques, and was completed on November 3, 2006. IFE Line 3 station La Rinconada is the interchange station between the Caracas Metro and the Caracas train station Libertador Simón Bolívar, where connections can be made to and from Charallave and Cúa. Guarenas/Guatire Metro The Guarenas/Guatire Metro is a new line with the intention of providing access to the eastern suburban communities. Both subsystems would allow for transfers at the Guaraira Repano (Petare North) station. In December 2006, the government awarded a 2 billion dollar contract for the construction of the new line between a soon-to-be-built Caracas Metro Parque del Este II station and the nearby twin cities of Guarenas/Guatire, with completion set for July 2012. However, by November 2012, only 7% of the metro project had been completed, and the completion date had slipped to at least 2016. There is still no official opening date.
Services Metrobus The system possesses a complementary bus transit network called the Metrobus, which covers 20 urban routes and four suburban routes, with the aim of transporting users to other popular destinations in the Greater Caracas area that are not reached by the metro, including bedroom communities close to the city. External ticket sale A scheme introduced by the company is the wholesale of the tickets used in the Metro and the Metrobús. Batches of tickets are sold to middlemen, and from there to authorized points of sale, such as kiosks and other commercial establishments. This allows Metro users to buy tickets outside of the stations, thus making them more widely available. The points of sale formally authorized for these operations are identified with the Metroseñal (Metro sign). Tickets sold at such locations carry a price discount of 3%. Fares and types of tickets Ticket prices depend mostly on the number of trips the user has planned. There are also special prices for students, and fares differ for Metrobús usage. The types of tickets and their pricings are listed below. In 2018, the metro became free to ride. While Metro de Caracas said this was because of a passenger number assessment, workers revealed that the government had not given the company hard money for over a year, and they could not import paper to print tickets, necessitating unlocked turnstiles. Future expansions The next phases of the Line 2 extension (also known as Line 5 during the construction phase) were to be constructed with an opening planned for 2012. The first project is an extension with six new stations, in Bello Monte, Las Mercedes, Tamanaco, Chuao, Bello Campo and Parque del Este II. A separate project was to be carried out simultaneously, an additional Line 2 extension (also known as Line 5, or the Metro Guarenas–Guatire Urban Route, during the construction phase) with four additional stations in Montecristo, Boleíta, El Marques and the terminal/transfer station La Urbina (Petare Norte). The La Urbina station was also destined to be the transfer point for the Caracas Guarenas–Guatire light rail. This section was originally assigned funds for a 2012 completion. Long-term proposals include expanding the system with two more lines: Line 5 to southeast Caracas, and Line 6, which would run parallel to Line 1 to the north. Incidents On July 30, 2007, after 24 years without a single accident, a collision took place that took the life of one person and injured 11 others. It occurred on Line 1 at Plaza Sucre station at 9:09 a.m., when a train headed in the Propatria direction stopped on the platform and was hit from behind by another travelling in the same direction. Although there has been much speculation about the cause of the accident, it is clear that there was a defect in the emergency braking system; the operational control centre at the La Hoyada station never activated the automatic braking mechanism that engages when a train approaches a second train. On November 12, 2010, 33 people were arrested after staging a protest at Propatria Station over increasingly deteriorating service on the Metro. On 23 January 2017, several thousand Venezuelans protested throughout the country; the government cordoned off the planned march areas with police and closed all subway and transportation systems in the area. During the 2017 protests, it was common for the government to close Metro stations.
On 4 April, twelve subway stations were closed; on 8 April, 16 subway stations and 19 Caracas Metrobus routes were closed. On 13 April, 27 stations were closed, and on 26 April the Metro was closed completely after being open for two hours, along with the suspension of the Metrobus and Bus Caracas services. On 5 February 2018, after protests over delays on the Metro, tear gas was fired in the subway, causing the service to be suspended for 25 minutes. On 14 February 2018, Metro users had to walk through the subway tunnels after an electrical failure. In July 2018, the Metro stopped issuing tickets as it had run out of paper for printing them. In October 2018, it was announced that 25% of Caracas Metro trains were out of service due to a lack of maintenance. In March 2019, the metro was out of service for several days due to an energy blackout caused by the poor political and economic situation in the country. Network map
Technology
Americas
null
26977166
https://en.wikipedia.org/wiki/Mantis
Mantis
Mantises are an order (Mantodea) of insects that contains over 2,400 species in about 460 genera in 33 families. The largest family is the Mantidae ("mantids"). Mantises are distributed worldwide in temperate and tropical habitats. They have triangular heads with bulging eyes supported on flexible necks. Their elongated bodies may or may not have wings, but all Mantodea have forelegs that are greatly enlarged and adapted for catching and gripping prey; their upright posture, while remaining stationary with forearms folded, has led to the common name praying mantis. The closest relatives of mantises are termites and cockroaches (Blattodea), which are all within the superorder Dictyoptera. Mantises are sometimes confused with stick insects (Phasmatodea), other elongated insects such as grasshoppers (Orthoptera), or other more distantly related insects with raptorial forelegs such as mantisflies (Mantispidae). Mantises are mostly ambush predators, but a few ground-dwelling species are found actively pursuing their prey. They normally live for about a year. In cooler climates, the adults lay eggs in autumn, then die. The eggs are protected by their hard capsules and hatch in the spring. Females sometimes practice sexual cannibalism, eating their mates after copulation. Mantises were considered to have supernatural powers by early civilizations, including ancient Greece, ancient Egypt, and Assyria. A cultural trope popular in cartoons imagines the female mantis as a femme fatale. Mantises are among the insects most commonly kept as pets. Etymology The name mantodea is formed from the Ancient Greek words (mantis) meaning "prophet", and (eidos) meaning "form" or "type". It was coined in 1838 by the German entomologist Hermann Burmeister. The name "mantid" properly refers only to members of the family Mantidae, which was, historically, the only family in the order. The other common name, praying mantis, applied to any species in the order (though in Europe mainly to Mantis religiosa), comes from the typical "prayer-like" posture with folded forelimbs. The vernacular plural "mantises" (used in this article) was confined largely to the US, with "mantids" predominantly used as the plural in the UK and elsewhere, until the family Mantidae was further split in 2002; at present, only some 80 out of 430 known genera are mantids, the rest are in other families. Taxonomy and evolution Over 2,400 species of mantis in about 430 genera are recognized. They are predominantly found in tropical regions, but some live in temperate areas. The systematics of mantises have long been disputed. Mantises, along with stick insects (Phasmatodea), were once placed in the order Orthoptera with the cockroaches (now Blattodea) and ice crawlers (now Grylloblattodea). Kristensen (1991) combined the Mantodea with the cockroaches and termites into the order Dictyoptera, suborder Mantodea. Phylogeny External Evolutionary relationships based on Evangelista et al. 2019 are shown in the cladogram: Internal One of the earliest classifications splitting an all-inclusive Mantidae into multiple families was that proposed by Beier in 1968, recognizing eight families, though it was not until Ehrmann's reclassification into 15 families in 2002 that a multiple-family classification became universally adopted. Klass, in 1997, studied the external male genitalia and postulated that the families Chaeteessidae and Metallyticidae diverged from the other families at an early date. 
However, as previously configured, the Mantidae and Thespidae especially were considered polyphyletic, so the Mantodea have been revised substantially as of 2019 and now includes 29 families. Fossil mantises Mantises are thought to have evolved from cockroach-like ancestors. The earliest confidently identified mantis fossils date to the Early Cretaceous. Fossils of the group are rare: by 2022, 37 fossil species are known. Fossil mantises, including one from Japan with spines on the front legs as in modern mantises, have been found in Cretaceous amber. Most fossils in amber are nymphs; compression fossils (in rock) include adults. Fossil mantises from the Crato Formation in Brazil include the long Santanmantis axelrodi, described in 2003; as in modern mantises, the front legs were adapted for catching prey. Well-preserved specimens yield details as small as 5 μm through X-ray computed tomography. Extinct families and genera include: †Baissomantidae †Gryllomantidae †Cretomantidae †Santanmantidae †Amelidae Incertae sedis: †Jersimantis †Chaeteessites †Cretophotina †Ambermantis Similar insects in the Neuroptera Because of the superficially similar raptorial forelegs, mantidflies may be confused with mantises, though they are unrelated. Their similarity is an example of convergent evolution; mantidflies do not have tegmina (leathery forewings) like mantises, their antennae are shorter and less thread-like, and the raptorial tibia is more muscular than that of a similar-sized mantis and bends back farther in preparation for shooting out to grasp prey. Biology Anatomy Mantises have large, triangular heads with a beak-like snout and mandibles. They have two bulbous compound eyes, three small simple eyes, and a pair of antennae. The articulation of the neck is also remarkably flexible; some species of mantis can rotate their heads nearly 180°. The mantis thorax consists of a prothorax, a mesothorax, and a metathorax. In all species apart from the genus Mantoida, the prothorax, which bears the head and forelegs, is much longer than the other two thoracic segments. The prothorax is also flexibly articulated, allowing for a wide range of movements of the head and fore limbs while the remainder of the body remains more or less immobile. Mantises also are unique to the Dictyoptera in that they have tympanate hearing, with two tympana in an auditory chamber in their metathorax. Most mantises can only hear ultrasound. Mantises have two spiked, grasping forelegs ("raptorial legs") in which prey items are caught and held securely. In most insect legs, including the posterior four legs of a mantis, the coxa and trochanter combine as an inconspicuous base of the leg; in the raptorial legs, however, the coxa and trochanter combine to form a segment about as long as the femur, which is a spiky part of the grasping apparatus (see illustration). Located at the base of the femur is a set of discoidal spines, usually four in number, but ranging from none to as many as five depending on the species. These spines are preceded by a number of tooth-like tubercles, which, along with a similar series of tubercles along the tibia and the apical claw near its tip, give the foreleg of the mantis its grasp on its prey. The foreleg ends in a delicate tarsus used as a walking appendage, made of four or five segments and ending in a two-toed claw with no arolium. Mantises can be loosely categorized as being macropterous (long-winged), brachypterous (short-winged), micropterous (vestigial-winged), or apterous (wingless). 
If not wingless, a mantis has two sets of wings: the outer wings, or tegmina, are usually narrow and leathery. They function as camouflage and as a shield for the hindwings, which are clearer and more delicate. The abdomen of all mantises consists of 10 tergites, with a corresponding set of nine sternites visible in males and seven visible in females. The abdomen tends to be slimmer in males than females, but ends in a pair of cerci in both sexes. Vision Mantises have stereo vision. They locate their prey by sight; their compound eyes contain up to 10,000 ommatidia. A small area at the front called the fovea has greater visual acuity than the rest of the eye, and can produce the high resolution necessary to examine potential prey. The peripheral ommatidia are concerned with perceiving motion; when a moving object is noticed, the head is rapidly rotated to bring the object into the visual field of the fovea. Further motions of the prey are then tracked by movements of the mantis's head so as to keep the image centered on the fovea. The use of stereoscopic vision differs from humans or primates because they specifically utilize this vision for capturing and spotting prey. The eyes are widely spaced and laterally situated, affording a wide binocular field of vision and precise stereoscopic vision at close range. The dark spot on each eye that moves as it rotates its head is a pseudopupil. This occurs because the ommatidia that are viewed "head-on" absorb the incident light, while those to the side reflect it. As their hunting relies heavily on vision, mantises are primarily diurnal. Many species, however, fly at night, and then may be attracted to artificial lights. They have good night vision. Mantises in the family Liturgusidae collected at night have been shown to be predominately males; this is probably true for most mantises. Nocturnal flight is especially important to males in locating less-mobile females by detecting their pheromones. Flying at night exposes mantises to fewer bird predators than diurnal flight would. Many mantises also have an auditory thoracic organ that helps them avoid bats by detecting their echolocation calls and responding evasively. Diet and hunting Mantises are generalist predators of arthropods. The majority of mantises are ambush predators that only feed upon live prey within their reach. They either camouflage themselves and remain stationary, waiting for prey to approach, or stalk their prey with slow, stealthy movements. Larger mantises sometimes eat smaller individuals of their own species, as well as small vertebrates such as lizards, frogs, fish, and particularly small birds. Most mantises stalk tempting prey if it strays close enough, and will go further when they are especially hungry. Once within reach, mantises strike rapidly to grasp the prey with their spiked raptorial forelegs. Some ground and bark species pursue their prey in a more active way. For example, members of a few genera such as the ground mantises Entella, Ligaria, and Ligariella run over dry ground seeking prey, much as tiger beetles do. Some mantis species such as Euantissa pulchra can discriminate between different types of prey, and approached spiders mimicking non-aggressive ant species much more than spiders that mimicked aggressive ant species. The fore gut of some species extends the whole length of the insect and can be used to store prey for digestion later. This may be advantageous in an insect that feeds intermittently. 
Chinese mantises live longer, grow faster, and produce more young when they are able to eat pollen. Antipredator adaptations Mantises are preyed on by vertebrates such as frogs, lizards, and birds, and by invertebrates such as spiders, large species of hornets, and ants. Some hunting wasps, such as some species of Tachytes also paralyze some species of mantis to feed their young. Generally, mantises protect themselves by camouflage, most species being cryptically colored to resemble foliage or other backgrounds, both to avoid predators and to better snare their prey. Those that live on uniformly colored surfaces such as bare earth or tree bark are dorsoventrally flattened so as to eliminate shadows that might reveal their presence. The species from different families called flower mantises are aggressive mimics: they resemble flowers convincingly enough to attract prey that come to collect pollen and nectar. Some species in Africa and Australia are able to turn black after a molt towards the end of the dry season; at this time of year, bush fires occur and this coloration enables them to blend in with the fire-ravaged landscape (fire melanism). When directly threatened, many mantis species stand tall and spread their forelegs, with their wings fanning out wide. The fanning of the wings makes the mantis seem larger and more threatening, with some species enhancing this effect with bright colors and patterns on their hindwings and inner surfaces of their front legs. If harassment persists, a mantis may strike with its forelegs and attempt to pinch or bite. As part of the bluffing (deimatic) threat display, some species may also produce a hissing sound by expelling air from the abdominal spiracles. Mantises lack chemical protection, so their displays are largely bluff. When flying at night, at least some mantises are able to detect the echolocation sounds produced by bats; when the frequency begins to increase rapidly, indicating an approaching bat, they stop flying horizontally and begin a descending spiral toward the safety of the ground, often preceded by an aerial loop or spin. If caught, they may slash captors with their raptorial legs. Mantises, like stick insects, show rocking behavior in which the insect makes rhythmic, repetitive side-to-side movements. Functions proposed for this behavior include the enhancement of crypsis by means of the resemblance to vegetation moving in the wind. However, the repetitive swaying movements may be most important in allowing the insects to discriminate objects from the background by their relative movement, a visual mechanism typical of animals with simpler sight systems. Rocking movements by these generally sedentary insects may replace flying or running as a source of relative motion of objects in the visual field. As ants may be predators of mantises, genera such as Loxomantis, Orthodera, and Statilia, like many other arthropods, avoid attacking them. A variety of arthropods, including some early-instar mantises, exploit this behavior and mimic ants to evade their predators. Reproduction and life history The mating season in temperate climates typically takes place in autumn, while in tropical areas, mating can occur at any time of the year. To mate following courtship, the male usually leaps onto the female's back, clasping her thorax and wing bases with his forelegs. He then arches his abdomen to deposit and store sperm in a special chamber near the tip of the female's abdomen. The female lays between 10 and 400 eggs, depending on the species. 
Eggs are typically deposited in a froth mass-produced by glands in the abdomen. This froth hardens, creating a protective capsule, which together with the egg mass is called an ootheca. Depending on the species, the ootheca can be attached to a flat surface, wrapped around a plant, or even deposited in the ground. Despite the versatility and durability of the eggs, they are often preyed on, especially by several species of parasitoid wasps. In a few species, mostly ground and bark mantises in the family Tarachodidae, the mother guards the eggs. The cryptic Tarachodes maurus positions herself on bark with her abdomen covering her egg capsule, ambushing passing prey and moving very little until the eggs hatch. An unusual reproductive strategy is adopted by Brunner's stick mantis from the southern United States: no males have ever been found in this species, and the females breed parthenogenetically. The ability to reproduce by parthenogenesis has been recorded in at least two other species, Sphodromantis viridis and Miomantis sp., although these species usually reproduce sexually. In temperate climates, adults do not survive the winter and the eggs undergo a diapause, hatching in the spring. As in closely related insect groups in the superorder Dictyoptera, mantises go through three life stages: egg, nymph, and adult (mantises are among the hemimetabolous insects). For smaller species, the eggs may hatch in 3–4 weeks as opposed to 4–6 weeks for larger species. The nymphs may be colored differently from the adult, and the early stages are often mimics of ants. A mantis nymph grows bigger as it molts its exoskeleton. Molting can happen five to 10 times before the adult stage is reached, depending on the species. After the final molt, most species have wings, though some species remain wingless or brachypterous ("short-winged"), particularly in the female sex. The lifespan of a mantis depends on the species; smaller ones may live 4–8 weeks, while larger species may live 4–6 months. Sexual cannibalism Sexual cannibalism is common among most predatory species of mantises in captivity. It has sometimes been observed in natural populations, where about a quarter of male–female encounters result in the male being eaten by the female. Around 90% of the predatory species of mantises exhibit sexual cannibalism. Adult males typically outnumber females at first, but their numbers may be fairly equivalent later in the adult stage, possibly because females selectively eat the smaller males. In Tenodera sinensis, 83% of males escape cannibalism after an encounter with a female, but since multiple matings occur, the probability of a male's being eaten increases cumulatively. The female may begin feeding by biting off the male's head (as they do with regular prey), and if mating has begun, the male's movements may become even more vigorous in its delivery of sperm. Early researchers thought that because copulatory movement is controlled by a ganglion in the abdomen, not the head, removal of the male's head was a reproductive strategy by females to enhance fertilization while obtaining sustenance. Later, this behavior appeared to be an artifact of intrusive laboratory observation. Whether the behavior is natural in the field or also the result of distractions caused by the human observer remains controversial. Mantises are highly visual organisms and notice any disturbance in the laboratory or field, such as bright lights or moving scientists. 
Chinese mantises that had been fed ad libitum (so that they were not hungry) actually displayed elaborate courtship behavior when left undisturbed. The male engages the female in a courtship dance, to change her interest from feeding to mating. Under such circumstances, the female has been known to respond with a defensive deimatic display by flashing the colored eyespots on the inside of her front legs. The reason for sexual cannibalism has been debated; experiments show that females on poor diets are likelier to engage in sexual cannibalism than those on good diets. Some hypothesize that submissive males gain a selective advantage by producing offspring; this is supported by a quantifiable increase in the duration of copulation among males which are cannibalized, in some cases doubling both the duration and the chance of fertilization. This is contrasted by a study where males were seen to approach hungry females with more caution, and were shown to remain mounted on hungry females for a longer time, indicating that males that actively avoid cannibalism may mate with multiple females. The same study also found that hungry females generally attracted fewer males than those that were well fed. The act of dismounting after copulation is dangerous for males, for it is the time that females most frequently cannibalize their mates. An increase in mounting duration appears to indicate that males wait for an opportune time to dismount a hungry female, who would be likely to cannibalize her mate. Experiments have revealed that the sex ratio in an environment determines male copulatory behavior of Mantis religiosa which in turn affects the cannibalistic tendencies of the female and support the sperm competition hypothesis because the polyandrous treatment recorded the highest copulation duration time and lowest cannibalism. This further suggests that dismounting the female can make males susceptible to cannibalism. Relationship with humans In culture, literature and art One of the earliest mantis references is in the ancient Chinese dictionary Erya, which gives its attributes in poetry, where it represents courage and fearlessness, and a brief description. A later text, the () from 1108, gives accurate details of the construction of the egg packages, the development cycle, anatomy, and the function of the antennae. Although mantises are rarely mentioned in Ancient Greek sources, a female mantis in threat posture is accurately illustrated on a series of fifth-century BC silver coins, including didrachms, from Metapontum in Lucania. In the 10th century AD, Byzantine era Adages, Suidas describes an insect resembling a slow-moving green locust with long front legs. He translates Zenobius 2.94 with the words seriphos (maybe a mantis) and graus, an old woman, implying a thin, dried-up stick of a body. Mantises are a common motif in Luna Polychrome ceramics of pre-Columbian Nicaragua, and are believed to represent a deity or spirit called "Madre Culebra". Western descriptions of the biology and morphology of the mantises became more accurate in the 18th century. Roesel von Rosenhof illustrated and described mantises and their cannibalistic behavior in the (Insect Entertainments). In the early 1900s, people in the United States Ozarks region referred to them as Devil's horses. Aldous Huxley made philosophical observations about the nature of death while two mantises mated in the sight of two characters in his 1962 novel Island (the species was Gongylus gongylodes). 
The naturalist Gerald Durrell's humorously autobiographical 1956 book My Family and Other Animals includes a four-page account of an almost evenly matched battle between a mantis and a gecko. Shortly before the fatal dénouement, Durrell narrates: M. C. Escher's woodcut Dream depicts a human-sized mantis standing on a sleeping bishop. A cultural trope imagines the female mantis as a femme fatale. The idea is propagated in cartoons by Cable, Guy and Rodd, LeLievre, T. McCracken, and Mark Parisi, among others. It ends Isabella Rossellini's short film about the life of a praying mantis in her 2008 Green Porno season for the Sundance Channel. The Deadly Mantis is a 1957 American science fiction monster film, with a giant mantis threatening mankind. Martial arts Two martial arts separately developed in China have movements and fighting strategies based on those of the mantis. As one of these arts was developed in northern China, and the other in southern parts of the country, the arts are today referred to (both in English and Chinese) as 'Northern Praying Mantis' and 'Southern Praying Mantis'. Both are very popular in China, and have also been exported to the West in recent decades. In mythology and religion According to local beliefs in Africa, this insect brings good luck. The mantis was revered by the southern African Khoi and San in whose cultures man and nature were intertwined; for its praying posture, the mantis was even named ("god of the Hottentots") in the Afrikaans language that had developed among the first European settlers. However, at least for the San, the mantis was only one of the manifestations of a trickster-deity, ǀKaggen, who could assume many other forms, such as a snake, hare or vulture. Several ancient civilizations did consider the insect to have supernatural powers; for the Greeks, it had the ability to show lost travelers the way home; in the Ancient Egyptian Book of the Dead, the "bird-fly" is a minor god that leads the souls of the dead to the underworld; in a list of 9th-century BC Nineveh grasshoppers (buru), the mantis is named necromancer (buru-enmeli) and soothsayer (buru-enmeli-ashaga). Some pre-Columbian cultures in western Nicaragua have preserved oral traditions of the mantis as "Madre Culebra", a powerful predator and symbol of female symbolic authority. As pets Mantises are among the insects most widely kept as pets. Because the lifespan of a mantis is only about a year, people who want to keep mantises often breed them. In 2013 at least 31 species were kept and bred in the United Kingdom, the Netherlands, and the United States. In 1996 at least 50 species were known to be kept in captivity by members of the Mantis Study Group. For pest control Naturally occurring mantis populations provide plant pest control. Gardeners who prefer to avoid pesticides may encourage mantises in the hope of controlling insect pests. However, mantises do not have key attributes of biological pest control agents; they do not specialize in a single pest insect, and do not multiply rapidly in response to an increase in such a prey species, but are general predators. They therefore have "negligible value" in biological control. Two species, the Chinese mantis and the European mantis, were deliberately introduced to North America in the hope that they would serve as pest controls for agriculture; they have spread widely in both the United States and Canada. 
Robotics In 2016, the Association for the Advancement of Artificial Intelligence presented a prototype robot inspired by the forelegs of the praying mantis, with front legs that allow the robot to walk, climb steps, and grasp objects. The multi-jointed leg provides dexterity via a rotatable joint. Future models may include a more spiked foreleg to improve grip and the ability to support more weight.
Biology and health sciences
Insects and other hexapods
null
26978338
https://en.wikipedia.org/wiki/Gale%E2%80%93Shapley%20algorithm
Gale–Shapley algorithm
In mathematics, economics, and computer science, the Gale–Shapley algorithm (also known as the deferred acceptance algorithm, propose-and-reject algorithm, or Boston Pool algorithm) is an algorithm for finding a solution to the stable matching problem. It is named for David Gale and Lloyd Shapley, who published it in 1962, although it had been used for the National Resident Matching Program since the early 1950s. Shapley and Alvin E. Roth (who pointed out its prior application) won the 2012 Nobel Prize in Economics for work including this algorithm. The stable matching problem seeks to pair up equal numbers of participants of two types, using preferences from each participant. The pairing must be stable: no two participants who are not matched to each other should mutually prefer each other to their assigned matches. In each round of the Gale–Shapley algorithm, unmatched participants of one type propose a match to the next participant on their preference list. Each proposal is accepted if its recipient prefers it to their current match. The resulting procedure is a truthful mechanism from the point of view of the proposing participants, who receive their most-preferred pairing consistent with stability. In contrast, the recipients of proposals receive their least-preferred pairing. The algorithm can be implemented to run in time quadratic in the number of participants, and linear in the size of the input to the algorithm. The stable matching problem, and the Gale–Shapley algorithm solving it, have widespread real-world applications, including matching American medical students to residencies and French university applicants to schools. Background The stable matching problem, in its most basic form, takes as input equal numbers of two types of participants (job applicants and employers, for example), and an ordering for each participant giving their preference for whom to be matched to among the participants of the other type. A matching pairs each participant of one type with a participant of the other type. A matching is not stable if there exist an applicant and an employer who are not matched to each other but who each prefer the other to their assigned match. In other words, a matching is stable when there is no pair (A, B) where both participants prefer each other to their matched partners. If such a pair exists, the matching is not stable, in the sense that the members of this pair would prefer to leave the system and be matched to each other, possibly leaving other participants unmatched. A stable matching always exists, and the algorithmic problem solved by the Gale–Shapley algorithm is to find one. The stable matching problem has also been called the stable marriage problem, using a metaphor of marriage between men and women, and many sources describe the Gale–Shapley algorithm in terms of marriage proposals. However, this metaphor has been criticized as both sexist and unrealistic: the steps of the algorithm do not accurately reflect typical or even stereotypical human behavior. Solution In 1962, David Gale and Lloyd Shapley proved that, for any equal number of participants of each type, it is always possible to find a matching in which all pairs are stable. They presented an algorithm to do so. In 1984, Alvin E. Roth observed that essentially the same algorithm had already been in practical use since the early 1950s, as the "Boston Pool algorithm" used by the National Resident Matching Program. The Gale–Shapley algorithm involves a number of "rounds" (or "iterations").
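Before the rounds are spelled out in terms of employers and applicants just below, the stability condition itself can be pinned down in a few lines of code. The following is a minimal illustrative sketch (the `is_stable` name and the toy preference data are mine, not from the source): it scans for a "blocking pair", an employer and applicant who would both rather be matched to each other than to their assigned partners.

```python
def is_stable(matching, employer_prefs, applicant_prefs):
    """Return True if `matching` (employer -> applicant) has no blocking pair."""
    employer_of = {a: e for e, a in matching.items()}  # applicant -> employer

    def prefers(prefs, who, new, current):
        # True if `who` ranks `new` above `current` in their preference list.
        return prefs[who].index(new) < prefs[who].index(current)

    for employer, hired in matching.items():
        for applicant in employer_prefs[employer]:
            if applicant == hired:
                break  # everyone after `hired` is less preferred by this employer
            # This employer prefers `applicant` to `hired`; does `applicant` reciprocate?
            if prefers(applicant_prefs, applicant, employer, employer_of[applicant]):
                return False  # blocking pair found
    return True

# Toy instance (illustrative data only).
employer_prefs = {"E1": ["A1", "A2"], "E2": ["A2", "A1"]}
applicant_prefs = {"A1": ["E2", "E1"], "A2": ["E1", "E2"]}
print(is_stable({"E1": "A1", "E2": "A2"}, employer_prefs, applicant_prefs))  # True
print(is_stable({"E1": "A2", "E2": "A1"}, employer_prefs, applicant_prefs))  # True
```

On this toy instance both matchings pass the check, which previews the later point that a single instance can have more than one stable matching.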
In terms of job applicants and employers, it can be expressed as follows: In each round, one or more employers with open job positions each make a job offer to the applicant they prefer, among the ones they have not yet made an offer to. Each applicant who has received an offer evaluates it against their current position (if they have one). If the applicant is not yet employed, or if they receive an offer from an employer they like better than their current employer, they accept the best new offer and become matched to the new employer (possibly leaving a previous employer with an open position). Otherwise, they reject the new offer. This process is repeated until all employers have either filled their positions or exhausted their lists of applicants. Implementation details and time analysis To implement the algorithm efficiently, each employer needs to be able to find its next applicant quickly, and each applicant needs to be able to compare employers quickly. One way to do this is to number each applicant and each employer from 1 to n, where n is the number of employers and applicants, and to store the following data structures: A set of employers with unfilled positions A one-dimensional array indexed by employers, specifying the preference index of the next applicant to whom the employer would send an offer, initially 1 for each employer A one-dimensional array indexed by applicants, specifying their current employer, initially a sentinel value such as 0 indicating they are unemployed A two-dimensional array indexed by an applicant and an employer, specifying the position of that employer in the applicant's preference list A two-dimensional array indexed by an employer and a number i from 1 to n, naming the applicant who is that employer's ith preference Setting up these data structures takes O(n²) time. With these structures it is possible to find an employer with an unfilled position, make an offer from that employer to their next applicant, determine whether the offer is accepted, and update all of the data structures to reflect the results of these steps, in constant time per offer. Once the algorithm terminates, the resulting matching can be read off from the array of employers for each applicant. There can be at most n² offers before each employer runs out of offers to make, so the total time is O(n²). Although this time bound is quadratic in the number of participants, it may be considered as linear time when measured in terms of the size of the input, two matrices of preferences of size n × n. Correctness guarantees This algorithm guarantees that: Everyone gets matched At the end, there cannot be an applicant and employer both unmatched. An employer left unmatched at the end of the process must have made an offer to all applicants. But an applicant who receives an offer remains employed for the rest of the process, so there can be no unemployed applicants. Since the numbers of applicants and job openings are equal, there can also be no open positions remaining. The matches are stable No applicant X and employer Y can prefer each other over their final match. If Y makes an offer to X, then X would only reject Y after receiving an even better offer, so X cannot prefer Y to their final match. And if Y stops making offers before reaching X in their preference list, Y cannot prefer X to their final match. In either case, X and Y do not form an unstable pair. Optimality of the solution There may be many stable matchings for the same system of preferences.
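To make the proposal rounds and the bookkeeping described above concrete, here is a compact Python sketch of the employer-proposing form. It is an illustrative implementation under the stated assumptions (equal numbers on both sides, complete preference lists); the function and variable names and the toy E1/A1 data are mine, not the source's. The responder-side rank table plays the role of the two-dimensional "position in preference list" array, so each offer is processed in constant time.

```python
from collections import deque

def gale_shapley(proposer_prefs, responder_prefs):
    """Deferred acceptance with `proposer_prefs` as the proposing side.

    proposer_prefs:  dict mapping each proposer (e.g. employer) to its
                     preference list of responders, best first.
    responder_prefs: dict mapping each responder (e.g. applicant) to its
                     preference list of proposers, best first.
    Returns a dict {proposer: responder} forming a stable matching.
    """
    # Rank lookup so a responder can compare two proposers in O(1).
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in responder_prefs.items()}

    next_choice = {p: 0 for p in proposer_prefs}   # index of next offer to make
    engaged_to = {}                                # responder -> proposer
    free = deque(proposer_prefs)                   # proposers with an open position

    while free:
        proposer = free.popleft()
        # Offer to the most-preferred responder not yet offered to.
        responder = proposer_prefs[proposer][next_choice[proposer]]
        next_choice[proposer] += 1

        current = engaged_to.get(responder)
        if current is None:
            engaged_to[responder] = proposer       # accepts: was unmatched
        elif rank[responder][proposer] < rank[responder][current]:
            engaged_to[responder] = proposer       # trades up to the better offer
            free.append(current)                   # old match reopens its position
        else:
            free.append(proposer)                  # rejected, will try the next choice

    return {p: r for r, p in engaged_to.items()}

# Toy instance (illustrative names, not from the source).
employer_prefs = {"E1": ["A1", "A2"], "E2": ["A2", "A1"]}
applicant_prefs = {"A1": ["E2", "E1"], "A2": ["E1", "E2"]}
print(gale_shapley(employer_prefs, applicant_prefs))  # {'E1': 'A1', 'E2': 'A2'}
```

On this toy instance every employer receives its first choice, which leads directly into the optimality question taken up next.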
This raises the question: which matching is returned by the Gale–Shapley algorithm? Is it the matching better for applicants, for employers, or an intermediate one? As it turns out, the Gale–Shapley algorithm in which employers make offers to applicants always yields the same stable matching (regardless of the order in which job offers are made), and its choice is the stable matching that is the best for all employers and worst for all applicants among all stable matchings. In a reversed form of the algorithm, each round consists of unemployed applicants writing a single job application to their preferred employer, and the employer either accepting the application (possibly firing an existing employee to do so) or rejecting it. This produces a matching that is best for all applicants and worst for all employers among all stable matchings. These two matchings are the top and bottom elements of the lattice of stable matchings. In both forms of the algorithm, one group of participants proposes matches, and the other group decides whether to accept or reject each proposal. The matching is always best for the group that makes the propositions, and worst for the group that decides how to handle each proposal. Strategic considerations The Gale–Shapley algorithm is a truthful mechanism from the point of view of the proposing side. This means that no proposer can get a better matching by misrepresenting their preferences. Moreover, the Gale–Shapley algorithm is even group-strategy proof for proposers, i.e., no coalition of proposers can coordinate a misrepresentation of their preferences such that all proposers in the coalition are strictly better-off. However, it is possible for some coalition to misrepresent their preferences such that some proposers are better-off, and the others retain the same partner. The Gale–Shapley algorithm is non-truthful for the non-proposing participants. Each may be able to misrepresent their preferences and get a better match. A particular form of manipulation is truncation: presenting only the topmost alternatives, implying that the bottom alternatives are not acceptable at all. Under complete information, it is sufficient to consider misrepresentations of the form of truncation strategies. However, successful misrepresentation requires knowledge of the other agents' preferences; without such knowledge, misrepresentation can give an agent a worse assignment. Moreover, even after an agent sees the final matching, they cannot deduce a strategy that would guarantee a better outcome in hindsight. This makes the Gale–Shapley algorithm a regret-free truth-telling mechanism. Moreover, in the Gale–Shapley algorithm, truth-telling is the only strategy that guarantees no regret. The Gale–Shapley algorithm is the only regret-free mechanism in the class of quantile-stable matching mechanisms. Generalizations In their original work on the problem, Gale and Shapley considered a more general form of the stable matching problem, suitable for university and college admission. In this problem, each university or college may have its own quota, a target number of students to admit, and the number of students applying for admission may differ from the sum of the quotas, necessarily causing either some students to remain unmatched or some quotas to remain unfilled. 
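Before the admissions generalization continues in the next paragraph, the proposer/responder asymmetry described above can be checked directly by reusing the `gale_shapley` sketch from earlier with the two sides swapped (toy data again; this is an illustration, not a proof):

```python
# Same toy preferences as above; this instance has exactly two stable matchings.
employer_prefs = {"E1": ["A1", "A2"], "E2": ["A2", "A1"]}
applicant_prefs = {"A1": ["E2", "E1"], "A2": ["E1", "E2"]}

# Employers propose: employer-optimal, applicant-pessimal stable matching.
print(gale_shapley(employer_prefs, applicant_prefs))
# -> {'E1': 'A1', 'E2': 'A2'}   (each employer gets its first choice)

# Applicants propose: applicant-optimal, employer-pessimal stable matching.
print(gale_shapley(applicant_prefs, employer_prefs))
# -> {'A1': 'E2', 'A2': 'E1'}   (each applicant gets its first choice)
```

Each side obtains its better stable matching precisely when it is the proposing side, matching the lattice top and bottom elements mentioned above.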
Additionally, preference lists may be incomplete: if a university omits a student from their list, it means they would prefer to leave their quota unfilled than to admit that student, and if a student omits a university from their list, it means they would prefer to remain unadmitted than to go to that university. Nevertheless, it is possible to define stable matchings for this more general problem, to prove that stable matchings always exist, and to apply the same algorithm to find one. A form of the Gale–Shapley algorithm, performed through a real-world protocol rather than calculated on computers, has been used for coordinating higher education admissions in France since 2018, through the Parcoursup system. In this process, over the course of the summer before the start of school, applicants receive offers of admission, and must choose in each round of the process whether to accept any new offer (and if so turn down any previous offer that they accepted). The method is complicated by additional constraints that make the problem it solves not exactly the stable matching problem. It has the advantage that the students do not need to commit to their preferences at the start of the process, but rather can determine their own preferences as the algorithm progresses, on the basis of head-to-head comparisons between offers that they have received. It is important that this process performs a small number of rounds of proposals, so that it terminates before the start date of the schools, but although high numbers of rounds can occur in theory, they tend not to occur in practice. It has been shown theoretically that, if the Gale–Shapley algorithm needs to be terminated early, after a small number of rounds in which every vacant position makes a new offer, it nevertheless produces matchings that have a high ratio of matched participants to unstable pairs. Recognition Shapley and Roth were awarded the 2012 Nobel Memorial Prize in Economic Sciences "for the theory of stable allocations and the practice of market design". Gale had died in 2008, making him ineligible for the prize.
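The quota-based admissions variant described above fits the same deferred-acceptance pattern: a college tentatively holds up to its quota of applicants and displaces its least-preferred holder when a better applicant proposes, while incomplete lists simply mean some proposals are never made or never accepted. The sketch below is a hedged toy illustration of that idea (the `college_admissions` name, the data shapes, and the toy numbers are my assumptions; it is not the NRMP or Parcoursup procedure).

```python
def college_admissions(student_prefs, college_prefs, quotas):
    """Student-proposing deferred acceptance with college quotas.

    student_prefs: dict student -> list of acceptable colleges, best first
                   (a college omitted from the list is unacceptable).
    college_prefs: dict college -> list of acceptable students, best first.
    quotas:        dict college -> number of seats.
    Returns dict college -> list of admitted students; some students may
    stay unmatched and some seats may stay empty.
    """
    rank = {c: {s: i for i, s in enumerate(prefs)}
            for c, prefs in college_prefs.items()}
    holds = {c: [] for c in college_prefs}          # tentatively admitted students
    next_choice = {s: 0 for s in student_prefs}
    free = list(student_prefs)

    while free:
        student = free.pop()
        prefs = student_prefs[student]
        if next_choice[student] >= len(prefs):
            continue                                 # list exhausted: stays unmatched
        college = prefs[next_choice[student]]
        next_choice[student] += 1
        if student not in rank[college]:
            free.append(student)                     # college finds this student unacceptable
            continue
        holds[college].append(student)
        holds[college].sort(key=lambda s: rank[college][s])
        if len(holds[college]) > quotas[college]:
            bumped = holds[college].pop()            # least-preferred holder is displaced
            free.append(bumped)

    return holds

# Toy instance (illustrative).
students = {"s1": ["c1", "c2"], "s2": ["c1"], "s3": ["c1", "c2"]}
colleges = {"c1": ["s1", "s3", "s2"], "c2": ["s3", "s1"]}
print(college_admissions(students, colleges, {"c1": 1, "c2": 2}))
# -> {'c1': ['s1'], 'c2': ['s3']}   (s2 stays unmatched; one c2 seat stays empty)
```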
Mathematics
Order theory
null
1324749
https://en.wikipedia.org/wiki/Shaving%20cream
Shaving cream
Shaving cream or shave cream is a category of cream cosmetics used for shaving preparation. The purpose of shaving cream is to soften the hair by providing lubrication. Different types of shaving creams include aerosol shaving cream (also known as shaving foam), latherless shaving cream (also called brushless shaving cream and non-aerosol shaving cream), and lather shaving cream or lathering shaving cream. The term shaving cream can also refer to the lather produced with a shaving brush from shaving soap or a lather shaving cream. Shaving creams commonly consist of an emulsion of oils, soaps or surfactants, and water. In addition to soap, lather shaving creams include a humectant for softer consistency and keeping the lather moisturised. Brushless shaving creams, on the other hand, don't contain soap and so don't produce lather. They are an oil-in-water mixture to which humectants, wetting agents, and other ingredients are added. Aerosol shaving creams are basically lather shaving cream in liquid form with propellants, vegetable waxes, and various oils added. History A rudimentary form of shaving cream was documented in Sumer around . This substance combined wood alkali and animal fat and was applied to a beard as a shaving preparation. Until the early 20th century, bars or sticks of hard shaving soap were used. Later, tubes containing compounds of oils and soft soap were sold. In 1919 Frank Shields, a former MIT professor developed the first shaving cream. The innovative product appeared on the American market under the name Barbasol and offered men an alternative to using a brush to work soap into lather. When it was first produced, Barbasol was filled and packaged entirely by hand in Indianapolis. The brand still exists and is currently available worldwide. The first can of pressurized shaving cream was Rise shaving cream, introduced in 1949. By the following decade this format attained two-thirds of the American market. Chlorofluorocarbons (CFCs) were used as propellants until they were banned in the late 1990s for destroying the ozone layer. Gaseous hydrocarbons such as mixtures of pentane, propane, butane and isobutane took their place. In the 1970s, shaving gel was developed. In 1993, The Procter & Gamble Company patented a post-foaming gel composition, which turns the gel into a foam after application to the skin, combining properties of both foams and gels. Contents Shaving creams and soaps are available as solids (bars); creams, generally in tubes; or aerosols. All forms may be applied with a shaving brush. Shaving creams contain 20–30% soap [potassium or triethanolamine (TEA)], up to about 10% glycerine, emollients, emulsifiers, and foaming agents. Aerosols are diluted creams dispensed from pressurized cans with the aid of hydrocarbon propellants (up to about 10%). The flammability of the hydrocarbons is offset by the large amounts of water in cream formulations. Beard-softening is due to hair hydration, which also depends on pH. In electric or dry shaving, swelling of the hairs is not desired, and such preparations use high amounts of alcohol (50–80%) to dry the skin and stiffen the hairs.
Biology and health sciences
Hygiene products
Health
1325294
https://en.wikipedia.org/wiki/Wild%20goat
Wild goat
The wild goat (Capra aegagrus) is a wild goat species, inhabiting forests, shrublands and rocky areas ranging from Turkey and the Caucasus in the west to Turkmenistan, Afghanistan and Pakistan in the east. It has been listed as near threatened on the IUCN Red List and is threatened by destruction and degradation of habitat. It is thought to be the ancestor of the domestic goat (C. hircus). Taxonomy Capra aegagrus was the first scientific name proposed by Johann Christian Polycarp Erxleben in 1777 for the wild goat populations of the Caucasus and Taurus Mountains. Capra blythi (proposed by Allan Octavian Hume in 1874) was given to wild goat horns found from Sindh. The following wild goat subspecies are considered valid taxa: Bezoar ibex C. a. aegagrus Sindh ibex C. a. blythi Chiltan ibex C. a. chialtanensis Turkmen wild goat C. a. turcmenica The Cretan goat (formerly C. a. pictus), or kri-kri, was once thought to be a subspecies of wild goat, but is now considered to be a feral descendant of the domestic goat (Capra hircus), now known as Capra hircus cretica. Distribution and habitat In Turkey, the wild goat occurs in the Aegean, Mediterranean, Black Sea, Southeastern and the Eastern Anatolia Regions up to in the Taurus and Anti-Taurus Mountains. In the Caucasus, it inhabits montane forests in the river basins of Andi Koysu and its tributaries in Dagestan, Chechnya and Georgia up to . In Armenia, wild goats were recorded in the Zangezur Mountains, in Khosrov State Reserve, and in highlands of the Syunik Province during field surveys from 2006 to 2007. In Azerbaijan, wild goats occur in Ordubad National Park, Daralayaz and Murovdag mountain areas in Nakhchivan Autonomous Republic. In Iran's Haftad Gholleh Protected Area, wild goat herds live foremost in west-facing areas with rocky substrates, water sources and steep slopes that are far from roads. In Turkmenistan, wild goat populations inhabit the mountain ranges of Uly Balkan and Kopet Dag. In Pakistan, wild goat herds occur in Kirthar National Park. Behaviour and ecology In Kirthar National Park, 283 wild goat groups were observed for 10 months in 1986. The group sizes ranged from two to 131 individuals but varied seasonally, with a mean ratio of two females per male. In Dagestan, male wild goats start courting females in mid December. The rutting season lasts until the third week of January. Females give birth to between one and three kids in late June to mid July. Older males drive younger males from the maternal herds. The gestation period averages 170 days. Kids are mobile almost immediately after birth. Kids are weaned after 6 months. Female goats reach sexual maturity at 1½–2½ years, males at 3½–4 years. The lifespan of a goat can be from 12 to 22 years. Threats Wild goat populations are threatened foremost by poaching, habitat loss due to logging, and competition with domestic livestock for food resources.
Biology and health sciences
Bovidae
Animals
1325869
https://en.wikipedia.org/wiki/Pliosauroidea
Pliosauroidea
Pliosauroidea is an extinct clade of plesiosaurs, known from the earliest Jurassic to early Late Cretaceous. They are best known for the subclade Thalassophonea, which contained crocodile-like short-necked forms with large heads and massive toothed jaws, commonly known as pliosaurs. More primitive non-thalassophonean pliosauroids resembled plesiosaurs in possessing relatively long necks and smaller heads. They originally included only members of the family Pliosauridae, of the order Plesiosauria, but several other genera and families are now also included, the number and details of which vary according to the classification used. The distinguishing characteristics are a short neck and an elongated head, with larger hind flippers compared to the fore flippers, the opposite of the plesiosaurs. They were carnivorous and their long and powerful jaws carried many sharp, conical teeth. Pliosaurs range from 4 to 10 meters or more in length. Their prey may have included fish, sharks, ichthyosaurs, dinosaurs and other plesiosaurs. The largest known species are Kronosaurus and Pliosaurus macromerus; other well known genera include Rhomaleosaurus, Peloneustes, and Macroplata. Fossil specimens have been found in Africa, Australia, China, Europe, North America and South America. Many very early (from the Early Jurassic and possibly Latest Triassic, i.e. Rhaetian) primitive pliosauroids were very like plesiosauroids in appearance and, indeed, used to be included in the family Plesiosauridae. Name Pliosauroidea was named by Welles in 1943. It is adapted from the name of the genus Pliosaurus, which is derived from the Greek (), meaning "more/closely", and () meaning "lizard"; it therefore means "more saurian". The name Pliosaurus was coined in 1841 by Richard Owen, who believed that it represented a link between plesiosauroids and crocodilians (considered a type of "saurian"), particularly due to their crocodile-like teeth. Classification Taxonomy The taxonomy presented here is mainly based on the plesiosaur cladistic analysis proposed by Hilary F. Ketchum and Roger B. J. Benson, 2011 unless otherwise noted. Suborder Pliosauroidea Eurysaurus? Sinopliosaurus? Family Rhomaleosauridae Archaeonectrus Bishanopliosaurus? Borealonectes Eurycleidus Macroplata Maresaurus Meyerasaurus Rhomaleosaurus Sthenarosaurus Yuzhoupliosaurus? Family Pliosauridae Anguanax Attenborosaurus Eardasaurus Gallardosaurus Hauffiosaurus Liopleurodon Marmornectes Megalneusaurus Pachycostasaurus Peloneustes Pliosaurus Rhaeticosaurus Simolestes Thalassiodracon Subfamily Brachaucheninae Brachauchenius Kronosaurus Luskhan Makhaira Megacephalosaurus Monquirasaurus Polyptychodon Sachicasaurus Stenorhynchosaurus Phylogeny Pliosauroidea is a stem-based taxon that was defined by Welles as "all taxa more closely related to Pliosaurus brachydeirus than to Plesiosaurus dolichodeirus". Pliosauridae and Rhomaleosauridae are stem-based taxa too. Pliosauridae is defined as "all taxa more closely related to Pliosaurus brachydeirus than to Leptocleidus superstes, Polycotylus latipinnis or Meyerasaurus victor". Rhomaleosauridae is defined as "all taxa more closely related to Meyerasaurus victor than to Leptocleidus superstes, Pliosaurus brachydeirus or Polycotylus latipinnis". The cladogram below follows a 2011 analysis by paleontologists Hilary F. Ketchum and Roger B. J. Benson, and reduced to genera only. Large pliosauroids In 2002, the discovery of a very large pliosauroid was announced in Mexico. 
This pliosauroid came to be known as the "Monster of Aramberri". Although widely reported as such, it does not belong to the genus Liopleurodon. The remains of this animal, consisting of a partial vertebral column, were dated to the Kimmeridgian of the La Caja Formation. The fossils were found much earlier, in 1985, by a geology student and were at first erroneously attributed to a theropod dinosaur by Hahnel. The remains originally contained part of a rostrum with teeth (now lost). In August 2006, palaeontologists of the University of Oslo discovered the first remains of a pliosaur on Norwegian soil. The remains were described as "very well preserved, as well as being unique in their completeness". The large animal was determined to be a new species of Pliosaurus. In the summer of 2008, the fossil remains of the huge pliosaur were dug up from the permafrost on Svalbard, a Norwegian island close to the North Pole. The excavation of the find is documented in the 2009 History television special Predator X. On 26 October 2009, palaeontologists reported the discovery of potentially the largest pliosauroid yet found. Found in cliffs near Weymouth, Dorset, on Britain's Jurassic Coast, the fossil had a skull length of . Palaeontologist Richard Forrest told the BBC: "I had heard rumours that something big was turning up. But seeing this thing in the flesh, so to speak, is just jaw dropping. It is simply enormous." It was determined that the specimen belonged to a new species that scientists named Pliosaurus kevani. In December 2023, the recent discovery of a pliosaur skull on the Dorset coast was described as "one of the most complete specimens of its type ever discovered". The discovery and research of the skull was covered in the PBS documentary Attenborough and the Jurassic Sea Monster hosted by David Attenborough.
Biology and health sciences
Prehistoric marine reptiles
Animals
1325949
https://en.wikipedia.org/wiki/Tautomer
Tautomer
In chemistry, tautomers () are structural isomers (constitutional isomers) of chemical compounds that readily interconvert. The chemical reaction interconverting the two is called tautomerization. This conversion commonly results from the relocation of a hydrogen atom within the compound. The phenomenon of tautomerization is called tautomerism, also called desmotropism. Tautomerism is for example relevant to the behavior of amino acids and nucleic acids, two of the fundamental building blocks of life. Care should be taken not to confuse tautomers with depictions of "contributing structures" in chemical resonance. Tautomers are distinct chemical species that can be distinguished by their differing atomic connectivities, molecular geometries, and physicochemical and spectroscopic properties, whereas resonance forms are merely alternative Lewis structure (valence bond theory) depictions of a single chemical species, whose true structure is a quantum superposition, essentially the "average" of the idealized, hypothetical geometries implied by these resonance forms. The term tautomer is derived . Examples Tautomerization is pervasive in organic chemistry. It is typically associated with polar molecules and ions containing functional groups that are at least weakly acidic. Most common tautomers exist in pairs, which means that the hydrogen is located at one of two positions, and even more specifically the most common form involves a hydrogen changing places with a double bond: . Common tautomeric pairs include: ketone – enol: , see keto–enol tautomerism enamine – imine: cyanamide – carbodiimide guanidine – guanidine – guanidine: With a central carbon surrounded by three nitrogens, a guanidine group allows this transform in three possible orientations amide – imidic acid: (e.g., the latter is encountered during nitrile hydrolysis reactions) lactam – lactim, a cyclic form of amide-imidic acid tautomerism in 2-pyridone and derived structures such as the nucleobases guanine, thymine, and cytosine imine – imine, e.g., during pyridoxal phosphate catalyzed enzymatic reactions nitro – aci-nitro (nitronic acid): nitroso – oxime: ketene – ynol, which involves a triple bond: amino acid – ammonium carboxylate, which applies to the building blocks of the proteins. This shifts the proton more than two atoms away, producing a zwitterion rather than shifting a double bond: phosphite – phosphonate: between trivalent and pentavalent phosphorus. Prototropy Prototropy is the most common form of tautomerism and refers to the relocation of a hydrogen atom. Prototropic tautomerism may be considered a subset of acid-base behavior. Prototropic tautomers are sets of isomeric protonation states with the same empirical formula and total charge. Tautomerizations are catalyzed by: bases, involving a series of steps: deprotonation, formation of a delocalized anion (e.g., an enolate), and protonation at a different position of the anion; and acids, involving a series of steps: protonation, formation of a delocalized cation, and deprotonation at a different position adjacent to the cation). Two specific further subcategories of tautomerizations: Annular tautomerism is a type of prototropic tautomerism wherein a proton can occupy two or more positions of the heterocyclic systems found in many drugs, for example, 1H- and 3H-imidazole; 1H-, 2H- and 4H- 1,2,4-triazole; 1H- and 2H- isoindole. 
Ring–chain tautomers occur when the movement of the proton is accompanied by a change from an open structure to a ring, such as the open chain and cyclic hemiacetal (typically pyranose or furanose forms) of many sugars. (See .) The tautomeric shift can be described as H−O ⋅ C=O ⇌ O−C−O−H, where the "⋅" indicates the initial absence of a bond. Valence tautomerism Valence tautomerism is a type of tautomerism in which single and/or double bonds are rapidly formed and ruptured, without migration of atoms or groups. It is distinct from prototropic tautomerism, and involves processes with rapid reorganisation of bonding electrons. A pair of valence tautomers with formula C6H6O are benzene oxide and oxepin. Other examples of this type of tautomerism can be found in bullvalene, and in open and closed forms of certain heterocycles, such as organic azides and tetrazoles, or mesoionic münchnone and acylamino ketene. Valence tautomerism requires a change in molecular geometry and should not be confused with canonical resonance structures or mesomers. Inorganic materials In inorganic extended solids, valence tautomerism can manifest itself in the change of oxidation states its spatial distribution upon the change of macroscopic thermodynamic conditions. Such effects have been called charge ordering or valence mixing to describe the behavior in inorganic oxides. Consequences for chemical databases The existence of multiple possible tautomers for individual chemical substances can lead to confusion. For example, samples of 2-pyridone and 2-hydroxypyridine do not exist as separate isolatable materials: the two tautomeric forms are interconvertible and the proportion of each depends on factors such as temperature, solvent, and additional substituents attached to the main ring. Historically, each form of the substance was entered into databases such as those maintained by the Chemical Abstracts Service and given separate CAS Registry Numbers. 2-Pyridone was assigned [142-08-5] and 2-hydroxypyridine [109-10-4]. The latter is now a "replaced" registry number so that look-up by either identifier reaches the same entry. The facility to automatically recognise such potential tautomerism and ensure that all tautomers are indexed together has been greatly facilitated by the creation of the International Chemical Identifier (InChI) and associated software. Thus the standard InChI for either tautomer is InChI=1S/C5H5NO/c7-5-3-1-2-4-6-5/h1-4H,(H,6,7).
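To illustrate the database point above, here is a short sketch using the open-source RDKit toolkit (an assumption on my part; any cheminformatics library that produces standard InChI would serve). The two tautomers are entered as different structures, yet both should collapse to the single standard InChI quoted above, because the mobile hydrogen ends up in the shared (H,6,7) layer.

```python
# Minimal sketch, assuming an RDKit build compiled with InChI support.
from rdkit import Chem

smiles = {
    "2-pyridone":        "O=C1C=CC=CN1",   # lactam (amide-like) tautomer
    "2-hydroxypyridine": "Oc1ccccn1",      # lactim (hydroxy) tautomer
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    print(name, Chem.MolToInchi(mol))

# Both lines should print the same standard InChI,
# InChI=1S/C5H5NO/c7-5-3-1-2-4-6-5/h1-4H,(H,6,7),
# so a database keyed on standard InChI indexes the two tautomers together.
```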
Physical sciences
Substance
Chemistry
1326107
https://en.wikipedia.org/wiki/Steady%20state
Steady state
In systems theory, a system or a process is in a steady state if the variables (called state variables) which define the behavior of the system or the process are unchanging in time. In continuous time, this means that for those properties p of the system, the partial derivative with respect to time is zero and remains so: In discrete time, it means that the first difference of each property is zero and remains so: The concept of a steady state has relevance in many fields, in particular thermodynamics, economics, and engineering. If a system is in a steady state, then the recently observed behavior of the system will continue into the future. In stochastic systems, the probabilities that various states will be repeated will remain constant. For example, see for the derivation of the steady state. In many systems, a steady state is not achieved until some time after the system is started or initiated. This initial situation is often identified as a transient state, start-up or warm-up period. For example, while the flow of fluid through a tube or electricity through a network could be in a steady state because there is a constant flow of fluid or electricity, a tank or capacitor being drained or filled with fluid is a system in transient state, because its volume of fluid changes with time. Often, a steady state is approached asymptotically. An unstable system is one that diverges from the steady state. See for example Linear difference equation#Stability. In chemistry, a steady state is a more general situation than dynamic equilibrium. While a dynamic equilibrium occurs when two or more reversible processes occur at the same rate, and such a system can be said to be in a steady state, a system that is in a steady state may not necessarily be in a state of dynamic equilibrium, because some of the processes involved are not reversible. In other words, dynamic equilibrium is just one manifestation of a steady state. Applications Economics A steady state economy is an economy (especially a national economy but possibly that of a city, a region, or the world) of stable size featuring a stable population and stable consumption that remain at or below carrying capacity. In the economic growth model of Robert Solow and Trevor Swan, the steady state occurs when gross investment in physical capital equals depreciation and the economy reaches economic equilibrium, which may occur during a period of growth. Electrical engineering In electrical engineering and electronic engineering, steady state is an equilibrium condition of a circuit or network that occurs as the effects of transients are no longer important. Steady state is also used as an approximation in systems with on-going transient signals, such as audio systems, to allow simplified analysis of first order performance. Sinusoidal Steady State Analysis is a method for analyzing alternating current circuits using the same techniques as for solving DC circuits. The ability of an electrical machine or power system to regain its original/previous state is called Steady State Stability. The stability of a system refers to the ability of a system to return to its steady state when subjected to a disturbance. As mentioned before, power is generated by synchronous generators that operate in synchronism with the rest of the system. A generator is synchronized with a bus when both of them have same frequency, voltage and phase sequence. 
We can thus define the power system stability as the ability of the power system to return to steady state without losing synchronicity. Usually power system stability is categorized into steady state, transient and dynamic stability. Steady State Stability studies are restricted to small and gradual changes in the system operating conditions. In this we basically concentrate on restricting the bus voltages close to their nominal values. We also ensure that phase angles between two buses are not too large and check for the overloading of the power equipment and transmission lines. These checks are usually done using power flow studies. Transient Stability involves the study of the power system following a major disturbance. Following a large disturbance in the synchronous alternator the machine power (load) angle changes due to sudden acceleration of the rotor shaft. The objective of the transient stability study is to ascertain whether the load angle returns to a steady value following the clearance of the disturbance. The ability of a power system to maintain stability under continuous small disturbances is investigated under the name of Dynamic Stability (also known as small-signal stability). These small disturbances occur due to random fluctuations in loads and generation levels. In an interconnected power system, these random variations can lead catastrophic failure as this may force the rotor angle to increase steadily. Steady state determination is an important topic, because many design specifications of electronic systems are given in terms of the steady-state characteristics. Periodic steady-state solution is also a prerequisite for small signal dynamic modeling. Steady-state analysis is therefore an indispensable component of the design process. In some cases, it is useful to consider constant envelope vibration—vibration that never settles down to motionlessness, but continues to move at constant amplitude—a kind of steady-state condition. Chemical engineering In chemistry, thermodynamics, and other chemical engineering, a steady state is a situation in which all state variables are constant in spite of ongoing processes that strive to change them. For an entire system to be at steady state, i.e. for all state variables of a system to be constant, there must be a flow through the system (compare mass balance). One of the simplest examples of such a system is the case of a bathtub with the tap open but without the bottom plug: after a certain time the water flows in and out at the same rate, so the water level (the state variable being Volume) stabilizes and the system is at steady state. Of course the Volume stabilizing inside the tub depends on the size of the tub, the diameter of the exit hole and the flowrate of water in. Since the tub can overflow, eventually a steady state can be reached where the water flowing in equals the overflow plus the water out through the drain. A steady state flow process requires conditions at all points in an apparatus remain constant as time changes. There must be no accumulation of mass or energy over the time period of interest. The same mass flow rate will remain constant in the flow path through each element of the system. Thermodynamic properties may vary from point to point, but will remain unchanged at any given point. Mechanical engineering When a periodic force is applied to a mechanical system, it will typically reach a steady state after going through some transient behavior. 
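Before the mechanical-engineering thread continues below, the bathtub example above can be made quantitative with a short simulation. This is an illustrative sketch only: the assumption that outflow through the drain is proportional to the stored volume (with time constant `tau`) is mine, chosen to keep the model to one line.

```python
# Open tap, open drain: the volume approaches a steady state asymptotically.
inflow = 2.0      # litres per second flowing in from the tap
tau = 30.0        # drain time constant in seconds (outflow = volume / tau)
dt = 1.0          # simulation step in seconds
volume = 0.0      # the tub starts empty: the transient / "warm-up" period

for t in range(0, 301):
    if t % 60 == 0:
        print(f"t={t:3d}s  volume={volume:6.1f} L")
    volume += (inflow - volume / tau) * dt

# The printed volumes approach inflow * tau = 60 L: the level stops changing
# even though water keeps flowing in and out at the same rate.
```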
This is often observed in vibrating systems, such as a clock pendulum, but can happen with any type of stable or semi-stable dynamic system. The length of the transient state will depend on the initial conditions of the system. Given certain initial conditions, a system may be in steady state from the beginning. Biochemistry In biochemistry, the study of biochemical pathways is an important topic. Such pathways will often display steady-state behavior where the chemical species are unchanging, but there is a continuous dissipation of flux through the pathway. Many, but not all, biochemical pathways evolve to stable, steady states. As a result, the steady state represents an important reference state to study. This is also related to the concept of homeostasis, however, in biochemistry, a steady state can be stable or unstable such as in the case of sustained oscillations or bistable behavior. Physiology Homeostasis (from Greek ὅμοιος, hómoios, "similar" and στάσις, stásis, "standing still") is the property of a system that regulates its internal environment and tends to maintain a stable, constant condition. Typically used to refer to a living organism, the concept came from that of milieu interieur that was created by Claude Bernard and published in 1865. Multiple dynamic equilibrium adjustment and regulation mechanisms make homeostasis possible. Fiber optics In fiber optics, "steady state" is a synonym for equilibrium mode distribution. Pharmacokinetics In pharmacokinetics, steady state is a dynamic equilibrium in the body where drug concentrations consistently stay within a therapeutic limit over time.
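As a closing numerical illustration of the pharmacokinetic meaning of steady state, the sketch below simulates repeated dosing with simple first-order elimination. The one-compartment model, dose size, and rate constant are illustrative assumptions, not values from the source: after a transient accumulation phase, the peak and trough concentrations settle onto constant values.

```python
import math

# Illustrative one-compartment model with first-order elimination.
dose = 100.0          # amount added to the body at each dose (arbitrary units)
half_life = 6.0       # elimination half-life in hours
interval = 8.0        # dosing interval in hours
k = math.log(2) / half_life          # first-order elimination rate constant

amount = 0.0
for dose_number in range(1, 11):
    amount += dose                           # take a dose
    peak = amount
    amount *= math.exp(-k * interval)        # decay until the next dose
    trough = amount
    print(f"dose {dose_number:2d}: peak={peak:6.1f}  trough={trough:6.1f}")

# Peaks approach dose / (1 - exp(-k * interval)) ≈ 166 and troughs ≈ 66:
# the steady state, where the amount added per interval equals the amount eliminated.
```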
Physical sciences
Classical mechanics
Physics
1326373
https://en.wikipedia.org/wiki/Leptodactylidae
Leptodactylidae
The southern frogs form the Leptodactylidae, a name that comes from Greek meaning a bird or other animal having slender toes. They are a diverse family of frogs that most likely diverged from other hyloids during the Cretaceous. The family has undergone major taxonomic revisions in recent years, including the reclassification of the former subfamily Eleutherodactylinae into its own family the Eleutherodactylidae; the Leptodactylidae now number 206 species in 13 genera distributed throughout Mexico, the Caribbean, and Central and South America. The family includes terrestrial, burrowing, aquatic, and arboreal members, inhabiting a wide range of habitats. Several of the genera within the Leptodactylidae lay their eggs in foam nests. These can be in crevices, on the surface of water, or on forest floors. These foam nests are some of the most varied among frogs. When eggs hatch in nests on the forest floor, the tadpoles remain within the nest, without eating, until metamorphosis. Classification As of December 2019, the Amphibian Species of the World classifies the following genera in the family Leptodactylidae: Subfamily Leiuperinae Bonaparte, 1850 (90 species) Edalorhina Jiménez de la Espada, 1870 Engystomops Jiménez de la Espada, 1872 Physalaemus Fitzinger, 1826 Pleurodema Tschudi, 1838 Pseudopaludicola Miranda-Ribeiro, 1926 Subfamily Leptodactylinae Werner, 1896 (1838) (96 species) Adenomera Steindachner, 1867 Hydrolaetare Gallardo, 1963 Leptodactylus Fitzinger, 1826 Lithodytes Fitzinger, 1843 Subfamily Paratelmatobiinae Ohler and Dubois, 2012 (13 species) Crossodactylodes Cochran, 1938 Paratelmatobius Lutz and Carvalho, 1958 Rupirana Heyer, 1999 Scythrophrys Lynch, 1971 Incertae sedis "Leptodactylus" ochraceus Lutz, 1930
Biology and health sciences
Amphibians
null